
      Deploying common microservice middleware on K8S: a ZooKeeper cluster

                                                    Author: 尹正杰

      Copyright notice: This is an original work; reproduction is declined, and violations will be pursued under the law.

      I. StatefulSet (sts) controller in practice

      1 StatefulSets overview

      Take Nginx as an example: if any Nginx Pod dies, the recovery logic is the same for every replica, namely just recreate the Pod. Services like this are called stateless services.
      
      Take MySQL master-slave replication as an example: if either the master or the slave goes down, the recovery logic differs for each. Services like this are called stateful services.
      
      Challenges faced by stateful services:
      	(1) start/stop ordering;
      	(2) each Pod instance needs its own independent storage;
      	(3) a fixed IP address or hostname is required;
      	
       
      StatefulSet is generally used for stateful services. StatefulSets are valuable for applications that need one or more of the following:
      	(1) stable, unique network identifiers;
      	(2) stable, dedicated persistent storage;
      	(3) ordered, graceful deployment and scaling;
      	(4) ordered, automated rolling updates.
      	
      	
      Stable network identity:
      	Under the hood this corresponds to a Service resource, except that the Service defines no VIP; such a Service is called a headless service.
      	The headless service maintains each Pod's network identity: the StatefulSet assigns every Pod a numeric ordinal and deploys them in ordinal order, and each Pod gets a stable DNS record through the service.
      	In summary, a headless service requires two things:
      		(1) set the Service's clusterIP field to None, i.e. "clusterIP: None";
      		(2) declare the headless service's name in the sts resource's serviceName field;
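      	As a concrete illustration, the resulting Pod DNS records follow the pattern <pod-name>.<service-name>.<namespace>.svc.<cluster-domain>; with the default namespace and this lab's cluster domain "oldboyedu.com" (see the ping tests in section 2.2) that gives, for example:
      		sts-xiuxian-0.svc-headless.default.svc.oldboyedu.com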
      			
      			
      Dedicated storage:
      	A StatefulSet's volumes are created through volumeClaimTemplates, the "volume claim template".
      	When the sts resource creates PVCs from the template, it creates a uniquely numbered PVC for each Pod; each PVC binds its own PV, so every Pod is guaranteed independent storage.
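      	Each Pod's PVC is named "<template-name>-<sts-name>-<ordinal>". For the manifest in section 3.1 below (template "data", sts "sts-xiuxian"), that yields:
      		data-sts-xiuxian-0
      		data-sts-xiuxian-1
      		data-sts-xiuxian-2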
      
      
      References:
      	https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/statefulset/
      	https://kubernetes.io/zh-cn/docs/tutorials/stateful-application/basic-stateful-set/
      	
      

      2 StatefulSet controller: unique network identity via a headless service

      2.1 Write the resource manifest

      [root@master231 statefulsets]# cat 01-statefulset-headless-network.yaml 
      apiVersion: v1
      kind: Service
      metadata:
        name: svc-headless
      spec:
        ports:
        - port: 80
          name: web
        # Setting the clusterIP field to None makes this a headless service, i.e. the svc is assigned no VIP.
        clusterIP: None
        selector:
          app: nginx
      
      
      ---
      
      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: sts-xiuxian
      spec:
        selector:
          matchLabels:
            app: nginx
        # Declare the headless service
        serviceName: svc-headless
        replicas: 3 
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
              imagePullPolicy: Always
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl apply -f  01-statefulset-headless-network.yaml 
      service/svc-headless created
      statefulset.apps/sts-xiuxian created
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl get sts,svc,po -o wide
      NAME                           READY   AGE   CONTAINERS   IMAGES
      statefulset.apps/sts-xiuxian   3/3     8s    nginx        registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
      
      NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
      service/kubernetes     ClusterIP   10.200.0.1   <none>        443/TCP   10d   <none>
      service/svc-headless   ClusterIP   None         <none>        80/TCP    8s    app=nginx
      
      NAME                READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
      pod/sts-xiuxian-0   1/1     Running   0          8s    10.100.203.186   worker232   <none>           <none>
      pod/sts-xiuxian-1   1/1     Running   0          6s    10.100.140.87    worker233   <none>           <none>
      pod/sts-xiuxian-2   1/1     Running   0          4s    10.100.160.134   master231   <none>           <none>
      [root@master231 statefulsets]# 
      

      2.2 Testing and verification

      [root@master231 statefulsets]# kubectl exec -it sts-xiuxian-0 -- sh
      / # ping sts-xiuxian-1.svc-headless -c 3
      PING sts-xiuxian-1.svc-headless (10.100.140.87): 56 data bytes
      64 bytes from 10.100.140.87: seq=0 ttl=62 time=0.570 ms
      64 bytes from 10.100.140.87: seq=1 ttl=62 time=3.882 ms
      64 bytes from 10.100.140.87: seq=2 ttl=62 time=0.208 ms
      
      --- sts-xiuxian-1.svc-headless ping statistics ---
      3 packets transmitted, 3 packets received, 0% packet loss
      round-trip min/avg/max = 0.208/1.553/3.882 ms
      / # 
      / # 
      / # ping sts-xiuxian-2.svc-headless -c 3
      PING sts-xiuxian-2.svc-headless (10.100.160.134): 56 data bytes
      64 bytes from 10.100.160.134: seq=0 ttl=62 time=2.551 ms
      64 bytes from 10.100.160.134: seq=1 ttl=62 time=0.324 ms
      64 bytes from 10.100.160.134: seq=2 ttl=62 time=0.720 ms
      
      --- sts-xiuxian-2.svc-headless ping statistics ---
      3 packets transmitted, 3 packets received, 0% packet loss
      round-trip min/avg/max = 0.324/1.198/2.551 ms
      / # 
      / # 
      / # ping sts-xiuxian-2.svc-headless.default.svc.oldboyedu.com -c 3
      PING sts-xiuxian-2.svc-headless.default.svc.oldboyedu.com (10.100.160.134): 56 data bytes
      64 bytes from 10.100.160.134: seq=0 ttl=62 time=1.647 ms
      64 bytes from 10.100.160.134: seq=1 ttl=62 time=0.255 ms
      64 bytes from 10.100.160.134: seq=2 ttl=62 time=1.025 ms
      
      --- sts-xiuxian-2.svc-headless.default.svc.oldboyedu.com ping statistics ---
      3 packets transmitted, 3 packets received, 0% packet loss
      round-trip min/avg/max = 0.255/0.975/1.647 ms
      / # 
      / # 
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl delete pods --all
      pod "sts-xiuxian-0" deleted
      pod "sts-xiuxian-1" deleted
      pod "sts-xiuxian-2" deleted
      [root@master231 statefulsets]#  
      [root@master231 statefulsets]# kubectl get pods -o wide
      NAME            READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
      sts-xiuxian-0   1/1     Running   0          6s    10.100.203.160   worker232   <none>           <none>
      sts-xiuxian-1   1/1     Running   0          4s    10.100.140.100   worker233   <none>           <none>
      sts-xiuxian-2   1/1     Running   0          3s    10.100.160.135   master231   <none>           <none>
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl exec -it sts-xiuxian-0 -- sh
      / # ping sts-xiuxian-1.svc-headless -c 3
      PING sts-xiuxian-1.svc-headless (10.100.140.100): 56 data bytes
      64 bytes from 10.100.140.100: seq=0 ttl=62 time=0.483 ms
      64 bytes from 10.100.140.100: seq=1 ttl=62 time=0.272 ms
      64 bytes from 10.100.140.100: seq=2 ttl=62 time=0.297 ms
      
      --- sts-xiuxian-1.svc-headless ping statistics ---
      3 packets transmitted, 3 packets received, 0% packet loss
      round-trip min/avg/max = 0.272/0.350/0.483 ms
      / # 
      / # ping sts-xiuxian-2.svc-headless -c 3
      PING sts-xiuxian-2.svc-headless (10.100.160.135): 56 data bytes
      64 bytes from 10.100.160.135: seq=0 ttl=62 time=1.443 ms
      64 bytes from 10.100.160.135: seq=1 ttl=62 time=0.255 ms
      64 bytes from 10.100.160.135: seq=2 ttl=62 time=0.257 ms
      
      --- sts-xiuxian-2.svc-headless ping statistics ---
      3 packets transmitted, 3 packets received, 0% packet loss
      round-trip min/avg/max = 0.255/0.651/1.443 ms
      / # 
      [root@master231 statefulsets]#
      [root@master231 statefulsets]# kubectl delete -f 01-statefulset-headless-network.yaml 
      service "svc-headless" deleted
      statefulset.apps "sts-xiuxian" deleted
      [root@master231 statefulsets]# 
      

      3 StatefulSet controller: dedicated storage

      3.1 Write the resource manifest

      [root@master231 statefulsets]# cat 02-statefulset-headless-volumeClaimTemplates.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: svc-headless
      spec:
        ports:
        - port: 80
          name: web
        clusterIP: None
        selector:
          app: nginx
      ---
      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: sts-xiuxian
      spec:
        selector:
          matchLabels:
            app: nginx
        serviceName: svc-headless
        replicas: 3 
        # Volume claim template: creates a unique PVC for each Pod and associates it with that Pod.
        volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes: [ "ReadWriteOnce" ]
            # Reference our custom dynamic storage class, i.e. the sc resource.
            storageClassName: "oldboyedu-sc-xixi"
            resources:
              requests:
                storage: 2Gi
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
              ports:
              - containerPort: 80
                name: xiuxian
              volumeMounts:
              - name: data
                mountPath: /usr/share/nginx/html
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: svc-sts-xiuxian
      spec:
        type: ClusterIP
        clusterIP: 10.200.0.200
        selector:
           app: nginx
        ports:
        - port: 80
          targetPort: xiuxian
      [root@master231 statefulsets]# 
      

      3.2 Testing and verification

      [root@master231 statefulsets]# kubectl apply -f  02-statefulset-headless-volumeClaimTemplates.yaml
      service/svc-headless created
      statefulset.apps/sts-xiuxian created
      service/svc-sts-xiuxian created
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl get sts,svc,po -o wide
      NAME                           READY   AGE   CONTAINERS   IMAGES
      statefulset.apps/sts-xiuxian   3/3     6s    nginx        registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
      
      NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
      service/kubernetes        ClusterIP   10.200.0.1     <none>        443/TCP   10d   <none>
      service/svc-headless      ClusterIP   None           <none>        80/TCP    6s    app=nginx
      service/svc-sts-xiuxian   ClusterIP   10.200.0.200   <none>        80/TCP    6s    app=nginx
      
      NAME                READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
      pod/sts-xiuxian-0   1/1     Running   0          6s    10.100.203.163   worker232   <none>           <none>
      pod/sts-xiuxian-1   1/1     Running   0          5s    10.100.140.92    worker233   <none>           <none>
      pod/sts-xiuxian-2   1/1     Running   0          3s    10.100.160.132   master231   <none>           <none>
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl exec -it sts-xiuxian-0  -- sh
      / # echo AAA > /usr/share/nginx/html/index.html
      / # 
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl exec -it sts-xiuxian-1  -- sh
      / # echo BBB > /usr/share/nginx/html/index.html 
      / # 
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl exec -it sts-xiuxian-2  -- sh
      / # echo CCC > /usr/share/nginx/html/index.html 
      / # 
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# for i in `seq 10`; do curl 10.200.0.200; done
      CCC
      BBB
      AAA
      CCC
      BBB
      AAA
      CCC
      BBB
      AAA
      CCC
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl delete pods --all
      pod "sts-xiuxian-0" deleted
      pod "sts-xiuxian-1" deleted
      pod "sts-xiuxian-2" deleted
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl get pods -o wide
      NAME            READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
      sts-xiuxian-0   1/1     Running   0          6s    10.100.203.171   worker232   <none>           <none>
      sts-xiuxian-1   1/1     Running   0          4s    10.100.140.99    worker233   <none>           <none>
      sts-xiuxian-2   1/1     Running   0          3s    10.100.160.178   master231   <none>           <none>
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# for i in `seq 10`; do curl 10.200.0.200; done
      CCC
      BBB
      AAA
      CCC
      BBB
      AAA
      CCC
      BBB
      AAA
      CCC
      [root@master231 statefulsets]# 
      

      3.3 Verify the backend storage

      [root@master231 statefulsets]# kubectl get pod -o wide
      NAME            READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
      sts-xiuxian-0   1/1     Running   0          4m25s   10.100.203.171   worker232   <none>           <none>
      sts-xiuxian-1   1/1     Running   0          4m23s   10.100.140.99    worker233   <none>           <none>
      sts-xiuxian-2   1/1     Running   0          4m22s   10.100.160.178   master231   <none>           <none>
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl get pvc -l app=nginx  | awk 'NR>=2{print $3}' | xargs kubectl describe pv  | grep VolumeHandle
          VolumeHandle:      10.0.0.231#oldboyedu/data/nfs-server/sc-xixi#pvc-dafd74ce-50b0-475d-94e1-2a64512c62ed##
          VolumeHandle:      10.0.0.231#oldboyedu/data/nfs-server/sc-xixi#pvc-a10dcb52-cd54-4d14-b666-068238359f0e##
          VolumeHandle:      10.0.0.231#oldboyedu/data/nfs-server/sc-xixi#pvc-8c84e2d8-3da8-4ecb-8902-e11bd8bd33b0##
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# cat  /oldboyedu/data/nfs-server/sc-xixi/pvc-dafd74ce-50b0-475d-94e1-2a64512c62ed/index.html 
      AAA
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# cat  /oldboyedu/data/nfs-server/sc-xixi/pvc-a10dcb52-cd54-4d14-b666-068238359f0e/index.html 
      BBB
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# cat  /oldboyedu/data/nfs-server/sc-xixi/pvc-8c84e2d8-3da8-4ecb-8902-e11bd8bd33b0/index.html 
      CCC
      [root@master231 statefulsets]# 
      

      II. Staged (partitioned) updates with sts

      1. Write the resource manifest

      [root@master231 statefulsets]# cat > 03-statefuleset-updateStrategy-partition.yaml <<EOF
      apiVersion: v1
      kind: Service
      metadata:
        name: sts-headless
      spec:
        ports:
        - port: 80
          name: web
        clusterIP: None
        selector:
          app: web
      
      ---
      
      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: oldboyedu-sts-web
      spec:
        # Specify the update strategy for the sts resource
        updateStrategy:
          # Configure rolling updates
          rollingUpdate:
            # Pods with an ordinal lower than 3 are not updated; in other words, only Pods with an ordinal of 3 or higher are updated!
            partition: 3
        selector:
          matchLabels:
            app: web
        serviceName: sts-headless
        replicas: 5
        template:
          metadata:
            labels:
              app: web
          spec:
            containers:
            - name: c1
              ports:
              - containerPort: 80
                name: xiuxian	  
              image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: oldboyedu-sts-svc
      spec:
        selector:
           app: web
        ports:
        - port: 80
          targetPort: xiuxian
      EOF
      

      2. Verification
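
      Apply the manifest first (creation output omitted here):

      [root@master231 statefulsets]# kubectl apply -f 03-statefuleset-updateStrategy-partition.yaml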

      [root@master231 statefulsets]# kubectl get pods -o wide
      NAME                  READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
      oldboyedu-sts-web-0   1/1     Running   0          30s   10.100.203.183   worker232   <none>           <none>
      oldboyedu-sts-web-1   1/1     Running   0          28s   10.100.140.102   worker233   <none>           <none>
      oldboyedu-sts-web-2   1/1     Running   0          28s   10.100.160.180   master231   <none>           <none>
      oldboyedu-sts-web-3   1/1     Running   0          26s   10.100.203.185   worker232   <none>           <none>
      oldboyedu-sts-web-4   1/1     Running   0          25s   10.100.140.93    worker233   <none>           <none>
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl get pods -l app=web -o yaml | grep "\- image:"
          - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
          - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
          - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
          - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
          - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# grep hangzhou 03-statefuleset-updateStrategy-partition.yaml 
              image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# sed -i '/hangzhou/s#v1#v2#' 03-statefuleset-updateStrategy-partition.yaml 
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# grep hangzhou 03-statefuleset-updateStrategy-partition.yaml 
              image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl apply -f 03-statefuleset-updateStrategy-partition.yaml
      service/sts-headless unchanged
      statefulset.apps/oldboyedu-sts-web configured
      service/oldboyedu-sts-svc unchanged
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl get pods -o wide
      NAME                  READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
      oldboyedu-sts-web-0   1/1     Running   0          2m23s   10.100.203.183   worker232   <none>           <none>
      oldboyedu-sts-web-1   1/1     Running   0          2m21s   10.100.140.102   worker233   <none>           <none>
      oldboyedu-sts-web-2   1/1     Running   0          2m21s   10.100.160.180   master231   <none>           <none>
      oldboyedu-sts-web-3   1/1     Running   0          12s     10.100.203.174   worker232   <none>           <none>
      oldboyedu-sts-web-4   1/1     Running   0          14s     10.100.140.101   worker233   <none>           <none>
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl get pods -l app=web -o yaml | grep "\- image:"
          - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
          - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
          - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
          - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
          - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
      [root@master231 statefulsets]# 
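      
      To finish the rollout for the remaining Pods (ordinals 0-2), one could lower the partition to 0 and re-apply the manifest, or patch the live object directly, e.g.:
      
      [root@master231 statefulsets]# kubectl patch sts oldboyedu-sts-web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'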
      [root@master231 statefulsets]# kubectl delete -f 03-statefuleset-updateStrategy-partition.yaml 
      service "sts-headless" deleted
      statefulset.apps "oldboyedu-sts-web" deleted
      service "oldboyedu-sts-svc" deleted
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# 
      

      III. Deploying a ZooKeeper cluster with sts

      1 Import the image on all K8S nodes

      wget http://192.168.14.253/Resources/Kubernetes/Case-Demo/oldboyedu-kubernetes-zookeeper-v1.0-3.4.10.tar.gz
      docker load  -i oldboyedu-kubernetes-zookeeper-v1.0-3.4.10.tar.gz 
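      
      The tarball presumably contains the registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10 image referenced by the manifest below; since imagePullPolicy is IfNotPresent, the preloaded copy is used instead of pulling from the registry.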
      
      Reference:
      	https://kubernetes.io/zh-cn/docs/tutorials/stateful-application/zookeeper/
      
      

      2 Write the resource manifest

      [root@master231 sts]# cat > 04-sts-zookeeper.yaml << 'EOF'
      apiVersion: v1
      kind: Service
      metadata:
        name: zk-hs
        labels:
          app: zk
      spec:
        ports:
        - port: 2888
          name: server
        - port: 3888
          name: leader-election
        clusterIP: None
        selector:
          app: zk
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: zk-cs
        labels:
          app: zk
      spec:
        ports:
        - port: 2181
          name: client
        selector:
          app: zk
      ---
      apiVersion: policy/v1
      # This resource type defines the maximum disruption allowed for a group of Pods, i.e. the maximum number of Pods that may be unavailable.
      # As a rule of thumb, for a distributed cluster with a fault tolerance of N, the cluster needs at least 2N+1 Pods.
      kind: PodDisruptionBudget
      metadata:
        name: zk-pdb
      spec:
        # Select the Pods this budget applies to
        selector:
          matchLabels:
            app: zk
        # Maximum number of unavailable Pods. With a tolerance of 1, the ZooKeeper cluster therefore needs at least 2*1 + 1 = 3 Pods.
        maxUnavailable: 1
      ---
      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: zk
      spec:
        selector:
          matchLabels:
            app: zk
        serviceName: zk-hs
        replicas: 3
        updateStrategy:
          type: RollingUpdate
        podManagementPolicy: OrderedReady
        template:
          metadata:
            labels:
              app: zk
          spec:
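            # Tolerate the control-plane taint so a replica can also be scheduled on master231.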
            tolerations:
            - key: node-role.kubernetes.io/master
              operator: Exists
              effect: NoSchedule
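            # Hard pod anti-affinity: Pods labeled app=zk must be placed on different nodes, i.e. at most one ZooKeeper replica per node.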
            affinity:
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  - labelSelector:
                      matchExpressions:
                        - key: "app"
                          operator: In
                          values:
                          - zk
                    topologyKey: "kubernetes.io/hostname"
            containers:
            - name: kubernetes-zookeeper
              imagePullPolicy: IfNotPresent
              image: "registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10"
              resources:
                requests:
                  memory: "1Gi"
                  cpu: "0.5"
              ports:
              - containerPort: 2181
                name: client
              - containerPort: 2888
                name: server
              - containerPort: 3888
                name: leader-election
              command:
              - sh
              - -c
              - "start-zookeeper \
                --servers=3 \
                --data_dir=/var/lib/zookeeper/data \
                --data_log_dir=/var/lib/zookeeper/data/log \
                --conf_dir=/opt/zookeeper/conf \
                --client_port=2181 \
                --election_port=3888 \
                --server_port=2888 \
                --tick_time=2000 \
                --init_limit=10 \
                --sync_limit=5 \
                --heap=512M \
                --max_client_cnxns=60 \
                --snap_retain_count=3 \
                --purge_interval=12 \
                --max_session_timeout=40000 \
                --min_session_timeout=4000 \
                --log_level=INFO"
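              # zookeeper-ready is a helper script shipped in this image; it checks that the server is answering on the given client port.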
              readinessProbe:
                exec:
                  command:
                  - sh
                  - -c
                  - "zookeeper-ready 2181"
                initialDelaySeconds: 10
                timeoutSeconds: 5
              livenessProbe:
                exec:
                  command:
                  - sh
                  - -c
                  - "zookeeper-ready 2181"
                initialDelaySeconds: 10
                timeoutSeconds: 5
              volumeMounts:
              - name: datadir
                mountPath: /var/lib/zookeeper
            securityContext:
              runAsUser: 1000
              fsGroup: 1000
        volumeClaimTemplates:
        - metadata:
            name: datadir
          spec:
            accessModes: [ "ReadWriteOnce" ]
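            # No storageClassName is set, so the cluster's default StorageClass is used (nfs-csi in this environment; see the PVC output in section 4 below).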
            resources:
              requests:
                storage: 10Gi
      EOF
      

      3 Watch the Pod status in real time

      [root@master231 statefulsets]# kubectl apply -f 04-sts-zookeeper.yaml 
      service/zk-hs created
      service/zk-cs created
      poddisruptionbudget.policy/zk-pdb created
      statefulset.apps/zk created
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]#  kubectl get pods -o wide -w -l app=zk
      NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
      zk-0   0/1     Pending   0          0s    <none>   <none>   <none>           <none>
      zk-0   0/1     Pending   0          1s    <none>   worker233   <none>           <none>
      zk-0   0/1     ContainerCreating   0          1s    <none>   worker233   <none>           <none>
      zk-0   0/1     ContainerCreating   0          3s    <none>   worker233   <none>           <none>
      zk-0   0/1     Running             0          7s    10.100.140.125   worker233   <none>           <none>
      zk-0   1/1     Running             0          22s   10.100.140.125   worker233   <none>           <none>
      zk-1   0/1     Pending             0          0s    <none>           <none>      <none>           <none>
      zk-1   0/1     Pending             0          0s    <none>           master231   <none>           <none>
      zk-1   0/1     ContainerCreating   0          0s    <none>           master231   <none>           <none>
      zk-1   0/1     ContainerCreating   0          1s    <none>           master231   <none>           <none>
      zk-1   0/1     Running             0          5s    10.100.160.189   master231   <none>           <none>
      zk-1   1/1     Running             0          21s   10.100.160.189   master231   <none>           <none>
      zk-2   0/1     Pending             0          0s    <none>           <none>      <none>           <none>
      zk-2   0/1     Pending             0          0s    <none>           worker232   <none>           <none>
      zk-2   0/1     ContainerCreating   0          0s    <none>           worker232   <none>           <none>
      zk-2   0/1     ContainerCreating   0          1s    <none>           worker232   <none>           <none>
      zk-2   0/1     Running             0          5s    10.100.203.188   worker232   <none>           <none>
      zk-2   1/1     Running             0          21s   10.100.203.188   worker232   <none>           <none>
      ...
      

      4 Check the backend storage

      [root@master231 ~]# kubectl get pods -o wide 
      NAME   READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
      zk-0   1/1     Running   0          85s   10.100.140.125   worker233   <none>           <none>
      zk-1   1/1     Running   0          63s   10.100.160.189   master231   <none>           <none>
      zk-2   1/1     Running   0          42s   10.100.203.188   worker232   <none>           <none>
      [root@master231 ~]# 
      [root@master231 ~]# kubectl get pvc -l app=zk
      NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      datadir-zk-0   Bound    pvc-b6072f27-637a-4c5d-9604-7095c8143f15   10Gi       RWO            nfs-csi        43m
      datadir-zk-1   Bound    pvc-10fdeb29-70b9-41a6-ae8c-f3b540ffcbdc   10Gi       RWO            nfs-csi        42m
      datadir-zk-2   Bound    pvc-db936b79-be79-4155-b2d0-ccc05a7e4531   10Gi       RWO            nfs-csi        37m
      [root@master231 ~]# 
      

      5. Verify that the cluster is healthy

      [root@master231 sts]# for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
      zk-0
      zk-1
      zk-2
      [root@master231 sts]# 
      [root@master231 sts]# for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
      myid zk-0
      1
      myid zk-1
      2
      myid zk-2
      3
      [root@master231 sts]# 
      [root@master231 sts]# for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
      zk-0.zk-hs.default.svc.oldboyedu.com
      zk-1.zk-hs.default.svc.oldboyedu.com
      zk-2.zk-hs.default.svc.oldboyedu.com
      [root@master231 sts]# 
      [root@master231 sts]# kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
      #This file was autogenerated DO NOT EDIT
      clientPort=2181
      dataDir=/var/lib/zookeeper/data
      dataLogDir=/var/lib/zookeeper/data/log
      tickTime=2000
      initLimit=10
      syncLimit=5
      maxClientCnxns=60
      minSessionTimeout=4000
      maxSessionTimeout=40000
      autopurge.snapRetainCount=3
      autopurge.purgeInteval=12
      server.1=zk-0.zk-hs.default.svc.oldboyedu.com:2888:3888
      server.2=zk-1.zk-hs.default.svc.oldboyedu.com:2888:3888
      server.3=zk-2.zk-hs.default.svc.oldboyedu.com:2888:3888
      [root@master231 sts]# 
      [root@master231 sts]# kubectl exec zk-1 -- cat /opt/zookeeper/conf/zoo.cfg
      #This file was autogenerated DO NOT EDIT
      clientPort=2181
      dataDir=/var/lib/zookeeper/data
      dataLogDir=/var/lib/zookeeper/data/log
      tickTime=2000
      initLimit=10
      syncLimit=5
      maxClientCnxns=60
      minSessionTimeout=4000
      maxSessionTimeout=40000
      autopurge.snapRetainCount=3
      autopurge.purgeInteval=12
      server.1=zk-0.zk-hs.default.svc.oldboyedu.com:2888:3888
      server.2=zk-1.zk-hs.default.svc.oldboyedu.com:2888:3888
      server.3=zk-2.zk-hs.default.svc.oldboyedu.com:2888:3888
      [root@master231 sts]# 
      [root@master231 sts]# 
      [root@master231 sts]# kubectl exec zk-2 -- cat /opt/zookeeper/conf/zoo.cfg
      #This file was autogenerated DO NOT EDIT
      clientPort=2181
      dataDir=/var/lib/zookeeper/data
      dataLogDir=/var/lib/zookeeper/data/log
      tickTime=2000
      initLimit=10
      syncLimit=5
      maxClientCnxns=60
      minSessionTimeout=4000
      maxSessionTimeout=40000
      autopurge.snapRetainCount=3
      autopurge.purgeInteval=12
      server.1=zk-0.zk-hs.default.svc.oldboyedu.com:2888:3888
      server.2=zk-1.zk-hs.default.svc.oldboyedu.com:2888:3888
      server.3=zk-2.zk-hs.default.svc.oldboyedu.com:2888:3888
      [root@master231 sts]# 
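      
      To see which replica was elected leader, one could additionally query each server's status (assuming the image's zkServer.sh picks up the generated /opt/zookeeper/conf/zoo.cfg):
      
      [root@master231 sts]# for i in 0 1 2; do echo "--- zk-$i ---"; kubectl exec zk-$i -- zkServer.sh status; done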
      

      6. Create test data

      6.1 Write data in one Pod

      [root@master231 statefulsets]# kubectl exec -it zk-1 -- zkCli.sh
      ...
      [zk: localhost:2181(CONNECTED) 0] ls /
      [zookeeper]
      [zk: localhost:2181(CONNECTED) 1] 
      [zk: localhost:2181(CONNECTED) 1] 
      [zk: localhost:2181(CONNECTED) 1] create /school oldboyedu
      Created /school
      [zk: localhost:2181(CONNECTED) 2] 
      [zk: localhost:2181(CONNECTED) 2] create /school/linux97 XIXI
      Created /school/linux97
      [zk: localhost:2181(CONNECTED) 3] 
      [zk: localhost:2181(CONNECTED) 3] ls /  
      [zookeeper, school]
      [zk: localhost:2181(CONNECTED) 4] 
      [zk: localhost:2181(CONNECTED) 4] ls /school
      [linux97]
      [zk: localhost:2181(CONNECTED) 5] 
      

      6.2 Read the data back from another Pod

      [root@master231 statefulsets]# kubectl exec -it zk-2 -- zkCli.sh
      ...
      [zk: localhost:2181(CONNECTED) 0] ls /
      [zookeeper, school]
      [zk: localhost:2181(CONNECTED) 1] get /school/linux97
      XIXI
      cZxid = 0x100000003
      ctime = Mon Jun 09 03:10:51 UTC 2025
      mZxid = 0x100000003
      mtime = Mon Jun 09 03:10:51 UTC 2025
      pZxid = 0x100000003
      cversion = 0
      dataVersion = 0
      aclVersion = 0
      ephemeralOwner = 0x0
      dataLength = 4
      numChildren = 0
      [zk: localhost:2181(CONNECTED) 2] 
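      
      As a further check, one could delete a Pod and confirm that the data survives the rebuild, since the recreated Pod reattaches the same PVC:
      
      [root@master231 statefulsets]# kubectl delete pod zk-1
      [root@master231 statefulsets]# kubectl exec zk-1 -- zkCli.sh get /school/linux97    # once zk-1 is Running again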
      

      7 Inspect the logic of the start-zookeeper script

      [root@master231 statefulsets]# kubectl exec -it zk-0 -- bash
      zookeeper@zk-0:/$ 
      zookeeper@zk-0:/$ which start-zookeeper
      /usr/bin/start-zookeeper
      zookeeper@zk-0:/$ 
      zookeeper@zk-0:/$ wc -l /usr/bin/start-zookeeper 
      320 /usr/bin/start-zookeeper
      zookeeper@zk-0:/$ 
      zookeeper@zk-0:/$ cat /usr/bin/start-zookeeper ;echo
      #!/usr/bin/env bash
      # Copyright 2017 The Kubernetes Authors.
      #
      # Licensed under the Apache License, Version 2.0 (the "License");
      # you may not use this file except in compliance with the License.
      # You may obtain a copy of the License at
      #
      #     http://www.apache.org/licenses/LICENSE-2.0
      #
      # Unless required by applicable law or agreed to in writing, software
      # distributed under the License is distributed on an "AS IS" BASIS,
      # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      # See the License for the specific language governing permissions and
      # limitations under the License.
      #
      #
      #Usage: start-zookeeper [OPTIONS]
      # Starts a ZooKeeper server based on the supplied options.
      #     --servers           The number of servers in the ensemble. The default 
      #                         value is 1.
      
      #     --data_dir          The directory where the ZooKeeper process will store its
      #                         snapshots. The default is /var/lib/zookeeper/data.
      
      #     --data_log_dir      The directory where the ZooKeeper process will store its 
      #                         write ahead log. The default is 
      #                         /var/lib/zookeeper/data/log.
      
      #     --conf_dir          The directoyr where the ZooKeeper process will store its
      #                         configuration. The default is /opt/zookeeper/conf.
      
      #     --client_port       The port on which the ZooKeeper process will listen for 
      #                         client requests. The default is 2181.
      
      #     --election_port     The port on which the ZooKeeper process will perform 
      #                         leader election. The default is 3888.
      
      #     --server_port       The port on which the ZooKeeper process will listen for 
      #                         requests from other servers in the ensemble. The 
      #                         default is 2888. 
      
      #     --tick_time         The length of a ZooKeeper tick in ms. The default is 
      #                         2000.
      
      #     --init_limit        The number of Ticks that an ensemble member is allowed 
      #                         to perform leader election. The default is 10.
      
      #     --sync_limit        The maximum session timeout that the ensemble will 
      #                         allows a client to request. The default is 5.
      
      #     --heap              The maximum amount of heap to use. The format is the 
      #                         same as that used for the Xmx and Xms parameters to the 
      #                         JVM. e.g. --heap=2G. The default is 2G.
      
      #     --max_client_cnxns  The maximum number of client connections that the 
      #                         ZooKeeper process will accept simultaneously. The 
      #                         default is 60.
      
      #     --snap_retain_count The maximum number of snapshots the ZooKeeper process 
      #                         will retain if purge_interval is greater than 0. The 
      #                         default is 3.
      
      #     --purge_interval    The number of hours the ZooKeeper process will wait 
      #                         between purging its old snapshots. If set to 0 old 
      #                         snapshots will never be purged. The default is 0.
      
      #     --max_session_timeout The maximum time in milliseconds for a client session 
      #                         timeout. The default value is 2 * tick time.
      
      #     --min_session_timeout The minimum time in milliseconds for a client session 
      #                         timeout. The default value is 20 * tick time.
      
      #     --log_level         The log level for the zookeeeper server. Either FATAL,
      #                         ERROR, WARN, INFO, DEBUG. The default is INFO.
      
      
      USER=`whoami`
      HOST=`hostname -s`
      DOMAIN=`hostname -d`
      LOG_LEVEL=INFO
      DATA_DIR="/var/lib/zookeeper/data"
      DATA_LOG_DIR="/var/lib/zookeeper/log"
      LOG_DIR="/var/log/zookeeper"
      CONF_DIR="/opt/zookeeper/conf"
      CLIENT_PORT=2181
      SERVER_PORT=2888
      ELECTION_PORT=3888
      TICK_TIME=2000
      INIT_LIMIT=10
      SYNC_LIMIT=5
      HEAP=2G
      MAX_CLIENT_CNXNS=60
      SNAP_RETAIN_COUNT=3
      PURGE_INTERVAL=0
      SERVERS=1
      
      function print_usage() {
      echo "\
      Usage: start-zookeeper [OPTIONS]
      Starts a ZooKeeper server based on the supplied options.
          --servers           The number of servers in the ensemble. The default 
                              value is 1.
      
          --data_dir          The directory where the ZooKeeper process will store its
                              snapshots. The default is /var/lib/zookeeper/data.
      
          --data_log_dir      The directory where the ZooKeeper process will store its 
                              write ahead log. The default is 
                              /var/lib/zookeeper/data/log.
      
          --conf_dir          The directoyr where the ZooKeeper process will store its
                              configuration. The default is /opt/zookeeper/conf.
      
          --client_port       The port on which the ZooKeeper process will listen for 
                              client requests. The default is 2181.
      
          --election_port     The port on which the ZooKeeper process will perform 
                              leader election. The default is 3888.
      
          --server_port       The port on which the ZooKeeper process will listen for 
                              requests from other servers in the ensemble. The 
                              default is 2888. 
      
          --tick_time         The length of a ZooKeeper tick in ms. The default is 
                              2000.
      
          --init_limit        The number of Ticks that an ensemble member is allowed 
                              to perform leader election. The default is 10.
      
          --sync_limit        The maximum session timeout that the ensemble will 
                              allows a client to request. The default is 5.
      
          --heap              The maximum amount of heap to use. The format is the 
                              same as that used for the Xmx and Xms parameters to the 
                              JVM. e.g. --heap=2G. The default is 2G.
      
          --max_client_cnxns  The maximum number of client connections that the 
                              ZooKeeper process will accept simultaneously. The 
                              default is 60.
      
          --snap_retain_count The maximum number of snapshots the ZooKeeper process 
                              will retain if purge_interval is greater than 0. The 
                              default is 3.
      
          --purge_interval    The number of hours the ZooKeeper process will wait 
                              between purging its old snapshots. If set to 0 old 
                              snapshots will never be purged. The default is 0.
      
          --max_session_timeout The maximum time in milliseconds for a client session 
                              timeout. The default value is 2 * tick time.
      
          --min_session_timeout The minimum time in milliseconds for a client session 
                              timeout. The default value is 20 * tick time.
      
          --log_level         The log level for the zookeeeper server. Either FATAL,
                              ERROR, WARN, INFO, DEBUG. The default is INFO.
      "
      }
      
      function create_data_dirs() {
          if [ ! -d $DATA_DIR  ]; then
              mkdir -p $DATA_DIR
              chown -R $USER:$USER $DATA_DIR
          fi
      
          if [ ! -d $DATA_LOG_DIR  ]; then
              mkdir -p $DATA_LOG_DIR
              chown -R $USER:USER $DATA_LOG_DIR
          fi
      
          if [ ! -d $LOG_DIR  ]; then
              mkdir -p $LOG_DIR
              chown -R $USER:$USER $LOG_DIR
          fi
          if [ ! -f $ID_FILE ] && [ $SERVERS -gt 1 ]; then
              echo $MY_ID >> $ID_FILE
          fi
      }
      
      function print_servers() {
          for (( i=1; i<=$SERVERS; i++ ))
          do
              echo "server.$i=$NAME-$((i-1)).$DOMAIN:$SERVER_PORT:$ELECTION_PORT"
          done
      }
      
      function create_config() {
          rm -f $CONFIG_FILE
          echo "#This file was autogenerated DO NOT EDIT" >> $CONFIG_FILE
          echo "clientPort=$CLIENT_PORT" >> $CONFIG_FILE
          echo "dataDir=$DATA_DIR" >> $CONFIG_FILE
          echo "dataLogDir=$DATA_LOG_DIR" >> $CONFIG_FILE
          echo "tickTime=$TICK_TIME" >> $CONFIG_FILE
          echo "initLimit=$INIT_LIMIT" >> $CONFIG_FILE
          echo "syncLimit=$SYNC_LIMIT" >> $CONFIG_FILE
          echo "maxClientCnxns=$MAX_CLIENT_CNXNS" >> $CONFIG_FILE
          echo "minSessionTimeout=$MIN_SESSION_TIMEOUT" >> $CONFIG_FILE
          echo "maxSessionTimeout=$MAX_SESSION_TIMEOUT" >> $CONFIG_FILE
          echo "autopurge.snapRetainCount=$SNAP_RETAIN_COUNT" >> $CONFIG_FILE
          echo "autopurge.purgeInteval=$PURGE_INTERVAL" >> $CONFIG_FILE
           if [ $SERVERS -gt 1 ]; then
              print_servers >> $CONFIG_FILE
          fi
          cat $CONFIG_FILE >&2
      }
      
      function create_jvm_props() {
          rm -f $JAVA_ENV_FILE
          echo "ZOO_LOG_DIR=$LOG_DIR" >> $JAVA_ENV_FILE
          echo "JVMFLAGS=\"-Xmx$HEAP -Xms$HEAP\"" >> $JAVA_ENV_FILE
      }
      
      function create_log_props() {
          rm -f $LOGGER_PROPS_FILE
          echo "Creating ZooKeeper log4j configuration"
          echo "zookeeper.root.logger=CONSOLE" >> $LOGGER_PROPS_FILE
          echo "zookeeper.console.threshold="$LOG_LEVEL >> $LOGGER_PROPS_FILE
          echo "log4j.rootLogger=\${zookeeper.root.logger}" >> $LOGGER_PROPS_FILE
          echo "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender" >> $LOGGER_PROPS_FILE
          echo "log4j.appender.CONSOLE.Threshold=\${zookeeper.console.threshold}" >> $LOGGER_PROPS_FILE
          echo "log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout" >> $LOGGER_PROPS_FILE
          echo "log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n" >> $LOGGER_PROPS_FILE
      }
      
      optspec=":hv-:"
      while getopts "$optspec" optchar; do
      
          case "${optchar}" in
              -)
                  case "${OPTARG}" in
                      servers=*)
                          SERVERS=${OPTARG##*=}
                          ;;
                      data_dir=*)
                          DATA_DIR=${OPTARG##*=}
                          ;;
                      data_log_dir=*)
                          DATA_LOG_DIR=${OPTARG##*=}
                          ;;
                      log_dir=*)
                          LOG_DIR=${OPTARG##*=}
                          ;;
                      conf_dir=*)
                          CONF_DIR=${OPTARG##*=}
                          ;;
                      client_port=*)
                          CLIENT_PORT=${OPTARG##*=}
                          ;;
                      election_port=*)
                          ELECTION_PORT=${OPTARG##*=}
                          ;;
                      server_port=*)
                          SERVER_PORT=${OPTARG##*=}
                          ;;
                      tick_time=*)
                          TICK_TIME=${OPTARG##*=}
                          ;;
                      init_limit=*)
                          INIT_LIMIT=${OPTARG##*=}
                          ;;
                      sync_limit=*)
                          SYNC_LIMIT=${OPTARG##*=}
                          ;;
                      heap=*)
                          HEAP=${OPTARG##*=}
                          ;;
                      max_client_cnxns=*)
                          MAX_CLIENT_CNXNS=${OPTARG##*=}
                          ;;
                      snap_retain_count=*)
                          SNAP_RETAIN_COUNT=${OPTARG##*=}
                          ;;
                      purge_interval=*)
                          PURGE_INTERVAL=${OPTARG##*=}
                          ;;
                      max_session_timeout=*)
                          MAX_SESSION_TIMEOUT=${OPTARG##*=}
                          ;;
                      min_session_timeout=*)
                          MIN_SESSION_TIMEOUT=${OPTARG##*=}
                          ;;
                      log_level=*)
                          LOG_LEVEL=${OPTARG##*=}
                          ;;
                      *)
                          echo "Unknown option --${OPTARG}" >&2
                          exit 1
                          ;;
                  esac;;
              h)
                  print_usage
                  exit
                  ;;
              v)
                  echo "Parsing option: '-${optchar}'" >&2
                  ;;
              *)
                  if [ "$OPTERR" != 1 ] || [ "${optspec:0:1}" = ":" ]; then
                      echo "Non-option argument: '-${OPTARG}'" >&2
                  fi
                  ;;
          esac
      done
      
      MIN_SESSION_TIMEOUT=${MIN_SESSION_TIMEOUT:- $((TICK_TIME*2))}
      MAX_SESSION_TIMEOUT=${MAX_SESSION_TIMEOUT:- $((TICK_TIME*20))}
      ID_FILE="$DATA_DIR/myid"
      CONFIG_FILE="$CONF_DIR/zoo.cfg"
      LOGGER_PROPS_FILE="$CONF_DIR/log4j.properties"
      JAVA_ENV_FILE="$CONF_DIR/java.env"
      if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then
          NAME=${BASH_REMATCH[1]}
          ORD=${BASH_REMATCH[2]}
      else
          echo "Fialed to parse name and ordinal of Pod"
          exit 1
      fi
      
      MY_ID=$((ORD+1))
      
      create_config && create_jvm_props && create_log_props && create_data_dirs && exec zkServer.sh start-foreground
      zookeeper@zk-0:/$ 
      zookeeper@zk-0:/$ 
      [root@master231 statefulsets]# 
      [root@master231 statefulsets]# kubectl delete -f 04-sts-zookeeper.yaml 
      service "zk-hs" deleted
      service "zk-cs" deleted
      poddisruptionbudget.policy "zk-pdb" deleted
      statefulset.apps "zk" deleted
      [root@master231 statefulsets]# 
      

      8. Afterword

      A friendly note:
      	In practice the industry is somewhat wary of the sts controller: everyone knows it is meant for deploying stateful services, yet it is rarely used directly.
      	
      	CoreOS therefore developed the Operator framework (sts + CRD), on which all kinds of services can be deployed.
      