
      k8s Deployment

      1 Introduction to k8s Component Functions

      1.2 K8s cluster API: kube-apiserver

      To interact with your Kubernetes cluster you go through the API. The API server is the front end of the Kubernetes control plane and handles internal and external requests. It determines whether a request is valid and, if so, processes it. The API can be reached via REST calls, the kubectl command-line interface, or other dashboard tools.

      The port defaults to 6443 and can be changed with the "--secure-port" startup flag.

      The default address is a non-localhost network interface, set with the "--bind-address" startup flag.

      This port receives external HTTPS requests from clients, dashboards, and so on.

      It handles authentication based on token files, client certificates, or HTTP Basic auth.

      It handles policy-based authorization.

      Access authentication

      Identity authentication (token) --> permission check --> command validation --> operation execution --> result returned.

      (Identity authentication can use a certificate, a token, or a username/password.)

      Testing the Kubernetes API

      # kubectl get secrets -A | grep admin
      # kubectl describe secrets admin-user-token-z487q
      # curl --cacert /etc/kubernetes/ssl/ca.pem -H "Authorization: Bearer <long token string>" https://172.0.0.1:6443
      # curl <options omitted> https://172.0.0.1:6443/          returns the full list of APIs
      # curl <options omitted> https://172.0.0.1:6443/apis      the API groups
      # curl <options omitted> https://172.0.0.1:6443/api/v1    the API at a specific version
      # curl <options omitted> https://172.0.0.1:6443/version   API version info
      # curl <options omitted> https://172.0.0.1:6443/healthz/etcd   heartbeat check against etcd
      # curl <options omitted> https://172.0.0.1:6443/apis/autoscaling/v1   details of a specific API group
      # curl <options omitted> https://172.0.0.1:6443/metrics   metrics data
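
      A minimal sketch tying the steps above together, assuming the dashboard's admin-user secret lives in kube-system (the secret name is whatever the first command showed): extract the token once, then reuse it for any of the curl calls.

      # TOKEN holds the decoded bearer token
      TOKEN=$(kubectl -n kube-system get secret admin-user-token-z487q -o jsonpath='{.data.token}' | base64 -d)
      curl --cacert /etc/kubernetes/ssl/ca.pem -H "Authorization: Bearer ${TOKEN}" https://172.0.0.1:6443/api/v1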
      

      1.3 K8s scheduler: kube-scheduler

      Is your cluster healthy? If new containers are needed, where should they run? These are the concerns of the Kubernetes scheduler.

      The scheduler considers a pod's resource needs (such as CPU or memory) together with the health of the cluster, then schedules the pod onto a suitable compute node.
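
      For example, the resource requests the scheduler filters nodes on are declared per container. A minimal sketch with an illustrative name and image; save as sched-demo.yaml and apply with kubectl apply -f sched-demo.yaml:

      apiVersion: v1
      kind: Pod
      metadata:
        name: sched-demo               # illustrative name
      spec:
        containers:
        - name: app
          image: nginx:1.20            # illustrative image
          resources:
            requests:                  # the scheduler only binds the pod to a node with this much spare capacity
              cpu: "500m"
              memory: "256Mi"
            limits:
              cpu: "1"
              memory: "512Mi"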

      1.4 K8s controllers: kube-controller-manager

      Controllers are what actually run the cluster, and the Kubernetes controller manager rolls several controller functions into one. Controllers consult the scheduler and make sure the correct number of pods is running. If a pod stops, another controller notices and responds. Controllers connect services to pods so that requests go to the right endpoints. There are also controllers that create accounts and API access tokens.

      Controllers include the replication controller, node controller, namespace controller, service account controller, and others. Acting as the cluster's internal management and control center, the controller manager is responsible for managing Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a Node goes down unexpectedly, the Controller Manager detects it promptly and runs an automated repair flow, keeping the cluster's pod replicas in the expected working state.
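
      The reconcile loop is easy to observe: change a deployment's declared replica count, or delete one of its pods, and the controller manager converges the actual state back to the spec (names below are illustrative, reusing the example deployment from section 1.6.3.1):

      kubectl scale deployment devpos-tomcat-app1-deployment -n devpos --replicas=3
      kubectl get pods -n devpos -w                      # watch the new replicas appear
      kubectl delete pod <one-replica-name> -n devpos    # the replica controller recreates it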

      1.5 Key-value store: etcd

      Configuration data and information about the cluster's state live in etcd, a key-value store database. etcd is distributed and fault-tolerant by design and is regarded as the cluster's ultimate source of truth.
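
      Every API object is persisted as a key under /registry; with the cluster certificates (the same ones used for the etcd health check in section 3.3.3), the keys can be listed directly:

      ETCDCTL_API=3 etcdctl --endpoints=https://192.168.2.200:2379 \
        --cacert=/etc/kubernetes/ssl/ca.pem \
        --cert=/etc/kubernetes/ssl/etcd.pem \
        --key=/etc/kubernetes/ssl/etcd-key.pem \
        get /registry --prefix --keys-only | head -20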

      1.6 K8s nodes

      A Kubernetes cluster needs at least one compute node, and usually has many. Once pods have been scheduled and orchestrated, they run on nodes. To expand the cluster's capacity, add more nodes.

      1.6.1 Pods

      A pod is the smallest and simplest unit in the Kubernetes object model. It represents a single instance of an application. Each pod consists of a container (or a set of tightly coupled containers) plus options that control how the containers run. Pods can attach to persistent storage to run stateful applications.
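
      A sketch of a pod with two tightly coupled containers sharing a volume (all names illustrative); save as two-containers.yaml and apply with kubectl apply -f two-containers.yaml:

      apiVersion: v1
      kind: Pod
      metadata:
        name: two-containers
      spec:
        volumes:
        - name: shared
          emptyDir: {}                 # scratch volume both containers mount
        containers:
        - name: web
          image: nginx:1.20
          volumeMounts:
          - name: shared
            mountPath: /usr/share/nginx/html
        - name: sidecar                # writes the content the web container serves
          image: busybox
          command: ["sh", "-c", "echo hello > /pod-data/index.html && sleep 3600"]
          volumeMounts:
          - name: shared
            mountPath: /pod-data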

      1.6.2 Container runtime engine

      To run containers, each compute node has a container runtime engine. Docker is one example, but Kubernetes also supports other Open Container Initiative (OCI)-compliant runtimes, such as rkt and CRI-O.

      1.6.3 kubelet

      Each compute node contains a kubelet, a small application that communicates with the control plane. The kubelet makes sure containers are running inside their pods. When the control plane needs something done on a node, the kubelet carries it out.

      It is the agent component running on every worker node, watching the pods assigned to its node. Its specific duties:

      • report the node's status to the master;
      • accept instructions and create docker containers inside pods;
      • prepare the data volumes a pod needs;
      • return the pod's running status;
      • run container health checks on the node;

      (Responsible for the pod/container lifecycle: creating and deleting pods.)
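
      A quick liveness check on any worker node (a sketch, assuming the kubelet's default --healthz-port=10248):

      systemctl status kubelet                         # the agent runs as a systemd service
      curl -s http://127.0.0.1:10248/healthz; echo     # kubelet's local health endpoint, prints "ok"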

      1.6.3.1 Common commands

      Common commands:

      # kubectl get services --all-namespaces -o wide
      # kubectl get pods --all-namespaces -o wide
      # kubectl get nodes -o wide
      # kubectl get deployment --all-namespaces
      # kubectl get deployment -n devpos -o wide   change the output format
      # kubectl describe pods devpos-tomcat-app1-deployment -n devpos   show details of a resource
      # kubectl create -f tomcat-app1.yaml
      # kubectl apply -f tomcat-app1.yaml
      # kubectl delete -f tomcat-app1.yaml
      # kubectl create -f tomcat-app1.yaml --save-config --record
      # kubectl apply -f tomcat-app1.yaml --record  recommended
      # kubectl exec -it devpos-tomcat-app1-deployment-aaabbb-ddd bash -n devpos
      # kubectl logs devpos-tomcat-app1-deployment-aaabbb-ddd -n devpos
      # kubectl delete pods devpos-tomcat-app1-deployment-aaabbb-ddd -n devpos
      

      1.6.4 kube-proxy

      kube-proxy: the Kubernetes network proxy that runs on each node. It reflects the services defined in the Kubernetes API on that node and can perform simple TCP, UDP, and SCTP stream forwarding, or round-robin TCP, UDP, and SCTP forwarding across a set of backends. Users must create a service through the apiserver API to configure the proxy. In essence, kube-proxy implements access to Kubernetes services by maintaining network rules on the host and forwarding connections.

      kube-proxy runs on every node, watches the API Server for changes to service objects, and implements the forwarding by managing iptables or IPVS rules.
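
      With PROXY_MODE="ipvs" (as set in the hosts file in section 3.3.1), the rules kube-proxy maintains can be inspected on any node:

      ipvsadm -Ln                         # list IPVS virtual services and their backend pod addresses
      ipvsadm -Ln | grep -A 3 10.68.0.1   # e.g. the backends behind the kubernetes service ClusterIP
      iptables -t nat -nL KUBE-SERVICES | head   # in iptables mode, the equivalent rules live here instead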

      2 Pod creation and scheduling flow in k8s

      User -> kubectl issues the request -> authenticated via kubeconfig -> apiserver authentication -> apiserver stores the information from the yaml in etcd -> controller-manager determines whether this is a create or an update -> scheduler decides which worker node to place the pod on -> kubelet reports its own status and watches the apiserver for pod scheduling requests
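
      The chain can be traced for a single pod: after an apply, the event stream records the scheduler's binding decision followed by the kubelet's pull/create/start steps (file and namespace reuse the earlier examples):

      kubectl apply -f tomcat-app1.yaml
      kubectl get events -n devpos --sort-by=.metadata.creationTimestamp | tail
      # typical order: Scheduled (default-scheduler) -> Pulling/Pulled (kubelet) -> Created -> Started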

      3 Binary-based deployment of a k8s cluster, network plugin, coredns, and dashboard

      Cluster plan

      IP              Hostname
      192.168.2.200   kube-master
      192.168.2.201   kube-node1
      192.168.2.202   kube-node2
      192.168.2.203   kube-node3
      192.168.2.206   kube-harbor01

      3.1 Deploy harbor

      (1) Install docker and docker-compose
      root@harbor01:~# apt-get install apt-transport-https ca-certificates curl gnupg lsb-release
      
      # configure apt to use https
      root@harbor01:~# curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
      
      # add the docker GPG key
      root@harbor01:~# add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"  # add docker's stable package source
      
      root@harbor01:~# apt-get update
      root@harbor01:~# apt install docker-ce=5:19.03.15~3-0~ubuntu-focal docker-ce-cli=5:19.03.15~3-0~ubuntu-focal
      
      root@harbor01:~# apt install python3-pip
      
      root@harbor01:~# pip3 install docker-compose
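      
      # sanity check before moving on: both tools should now be on PATH
      root@harbor01:~# docker --version && docker-compose --version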
      
      (2) Download and install harbor;
      root@k8s-harbor1:~# mkdir -pv /etc/harbor/ssl/
      root@k8s-harbor1:/etc/harbor/ssl/# ls
      # upload the domain certificate here; this setup uses its own public domain with a free certificate from a cloud vendor
      # (see the end of this article for the commands to self-sign a certificate instead)
      -rw-r--r-- 1 root root 1350 Sep 13 18:09 ca.pem
      -rw-r--r-- 1 root root 1679 Sep 13 18:09 harbor-key.pem
      -rw-r--r-- 1 root root 1444 Sep 13 18:09 harbor.pem
      
      root@harbor01:~# cd /var/data
      root@harbor01:/var/data# wget https://github.com/goharbor/harbor/releases/download/v2.3.2/harbor-offline-installer-v2.3.2.tgz
      
      root@harbor01:/var/data# tar zxvf harbor-offline-installer-v2.3.2.tgz
      root@harbor01:/var/data# ln -sv /var/data/harbor /usr/local/
      '/usr/local/harbor' -> '/var/data/harbor'
      
      root@harbor01:/var/data# cd /var/data/harbor
      
      root@harbor01:/var/data# cp harbor.yml.tmpl harbor.yml 
      
      # set the hostname, configure the HTTPS certificate paths, and set the web UI login password
      
      root@k8s-harbor1:/usr/local/harbor# grep -v "#" /var/data/harbor/harbor.yml|grep -v "^$"
      hostname: harbor.yourdomain.com 
      https:
        port: 443
        certificate: /etc/harbor/ssl/harbor.pem
        private_key: /etc/harbor/ssl/harbor-key.pem
      harbor_admin_password: Harbor12345
      database:
        password: dzLtHS6vr7kZpCy_
        max_idle_conns: 50
        max_open_conns: 1000
      data_volume: /var/data
      clair:
        updaters_interval: 12
      trivy:
        ignore_unfixed: false
        skip_update: false
        insecure: false
      jobservice:
        max_job_workers: 10
      notification:
        webhook_job_max_retry: 10
      chart:
        absolute_url: disabled
      log:
        level: info
        local:
          rotate_count: 3 
          rotate_size: 100M
          location: /var/log/harbor
      _version: 2.0.0
      proxy:
        http_proxy:
        https_proxy:
        no_proxy:
        components:
          - core
          - jobservice
          - clair
          - trivy
      
      root@harbor01:/var/data/harbor# ./install.sh --with-trivy   # install harbor
      
      root@k8s-harbor1:/var/data/harbor# cat /usr/lib/systemd/system/harbor.service   # unit file so Harbor starts at boot
      [Unit]
      Description=Harbor
      After=docker.service systemd-networkd.service systemd-resolved.service
      Requires=docker.service
      Documentation=http://github.com/vmware/harbor 
      
      [Service]
      Type=simple
      Restart=on-failure
      RestartSec=5
      ExecStart=/usr/local/bin/docker-compose -f /var/data/harbor/docker-compose.yml up
      ExecStop=/usr/local/bin/docker-compose -f /var/data/harbor/docker-compose.yml down
      
      [Install]
      WantedBy=multi-user.target
      
      root@k8s-harbor1:/var/data/harbor# systemctl enable harbor.service
      

        

      Client-side verification: log in to harbor in a browser and create a project;

      echo "192.168.2.206 harbor.yourdomain.com" >> /etc/hosts
      
      docker login https://harbor.yourdomain.com --username=admin --password=Harbor12345
      

        

      Self-signing a certificate for roughly 20 years (later steps use the self-signed cert, since cloud vendors' free certificates are only valid for one year)

      (1) Self-sign the certificate
      root@k8s-harbor1:/etc/harbor/ssl# openssl genrsa -out harbor-key.pem
      
      root@k8s-harbor1:/etc/harbor/ssl# openssl req -x509 -new -nodes -key harbor-key.pem -subj "/CN=harbor.yourdomain.com" -days 7120 -out harbor.pem
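      
      # optional check: confirm the self-signed cert's subject and validity window
      root@k8s-harbor1:/etc/harbor/ssl# openssl x509 -in harbor.pem -noout -subject -dates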
      
      (2) Use the self-signed certificate
      root@k8s-harbor1:/usr/local/harbor# grep -v "#" /var/data/harbor/harbor.yml|grep -v "^$"
      hostname: harbor.yourdomain.com 
      https:
        port: 443
        certificate: /etc/harbor/ssl/harbor.pem
        private_key: /etc/harbor/ssl/harbor-key.pem
      harbor_admin_password: Harbor12345
      
      root@k8s-harbor1:# docker-compose start
      
      (3) A client browser visiting harbor now sees the self-signed certificate. For the Linux docker client to access https://harbor.yourdomain.com normally, copy the self-signed certificate file to the client;
      
      root@node1:~# mkdir /etc/docker/certs.d/harbor.yourdomain.com -p 
      
      root@k8s-harbor1:/etc/harbor/ssl/# scp harbor.pem 192.168.2.200:/etc/docker/certs.d/harbor.yourdomain.com
      
      Add the registry harbor.yourdomain.com to /etc/docker/daemon.json, then restart docker;
      root@kube-master:~# cat /etc/docker/daemon.json 
      {
        "exec-opts": ["native.cgroupdriver=systemd"],
        "registry-mirrors": [
          "https://docker.mirrors.ustc.edu.cn",
          "http://hub-mirror.c.163.com"
        ],
        "insecure-registries": ["192.168.2.0/24"],
        "max-concurrent-downloads": 10,
        "log-driver": "json-file",
        "log-level": "warn",
        "log-opts": {
          "max-size": "10m",
          "max-file": "3"
          },
        "data-root": "/var/lib/docker"
      }
      root@kube-master:~# systemctl restart docker
      
      root@kube-master:~# docker login https://harbor.yourdomain.com --username=admin --password=Harbor12345
      WARNING! Using --password via the CLI is insecure. Use --password-stdin.
      WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
      Configure a credential helper to remove this warning. See
      https://docs.docker.com/engine/reference/commandline/login/#credentials-store
      
      Login Succeeded
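      
      # optional: verify push access with a small test image (the "baseimages" project must exist in the harbor UI first)
      root@kube-master:~# docker pull busybox:latest
      root@kube-master:~# docker tag busybox:latest harbor.yourdomain.com/baseimages/busybox:latest
      root@kube-master:~# docker push harbor.yourdomain.com/baseimages/busybox:latest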
      

      3.2 Install ansible on the deployment node and set up passwordless SSH

      (1) Install ansible;
      root@kube-master:~# apt install python3-pip -y
      root@kube-master:~# pip3 install ansible
      
      (2) Configure passwordless login from the ansible node to the other nodes, including itself;
      root@kube-master:~# ssh-keygen
      root@kube-master:~# apt install sshpass
      root@kube-master:~# cat scp-key.sh
      #!/bin/bash
      IP="
      192.168.2.200
      192.168.2.201
      192.168.2.202
      192.168.2.203
      192.168.2.206
      "
       
      for node in ${IP};do
              sshpass -p 123123 ssh-copy-id ${node} -o StrictHostKeyChecking=no
              if [ $? -eq 0 ];then
                echo "${node} key copied successfully"
              else
                echo "${node} key copy failed"
              fi
      done
       
      root@kube-master:~# bash scp-key.sh
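      
      # verify passwordless login works before running ansible (BatchMode fails instead of prompting for a password)
      root@kube-master:~# for node in 192.168.2.200 192.168.2.201 192.168.2.202 192.168.2.203 192.168.2.206; do ssh -o BatchMode=yes ${node} hostname; done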
      

      3.3 Orchestrate the k8s installation from the deployment node

      Network plugin, coredns, dashboard

      3.3.1 Download and configure the kubeasz script

      (1) Download the easzlab install script and the installation files;
      root@kube-master:~#  export release=3.1.0
      root@kube-master:~#  curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
      root@kube-master:~#  cp ezdown bak.ezdown
      root@kube-master:~#  vi ezdown
      DOCKER_VER=19.03.15  # the docker version to install
      K8S_BIN_VER=v1.21.0  # the k8s version; the script pulls easzlab/kubeasz-k8s-bin:v1.21.0 from hub.docker.com, where available versions can be looked up
      BASE="/etc/kubeasz"  # where the config files and images are downloaded
       
        option: -{DdekSz}
          -C         stop&clean all local containers
          -D         download all into "$BASE"
          -P         download system packages for offline installing
          -R         download Registry(harbor) offline installer
          -S         start kubeasz in a container
          -d <ver>   set docker-ce version, default "$DOCKER_VER"
          -e <ver>   set kubeasz-ext-bin version, default "$EXT_BIN_VER"
          -k <ver>   set kubeasz-k8s-bin version, default "$K8S_BIN_VER"
          -m <str>   set docker registry mirrors, default "CN"(used in Mainland,China)
          -p <ver>   set kubeasz-sys-pkg version, default "$SYS_PKG_VER"
          -z <ver>   set kubeasz version, default "$KUBEASZ_VER"
      root@kube-master:~#  chmod +x ezdown
      root@kube-master:~#  bash ./ezdown -D   # downloads all files, by default into /etc/kubeasz
      root@kube-master:~# cd /etc/kubeasz/
      root@kube-master:/etc/kubeasz# ls
      README.md  ansible.cfg  bin  docs  down  example  ezctl  ezdown  manifests  pics  playbooks  roles  tools
      root@kube-master:/etc/kubeasz# ./ezctl new k8s-01
      2021-09-12 16:36:36 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-01
      2021-09-12 16:36:36 DEBUG set version of common plugins
      2021-09-12 16:36:36 DEBUG cluster k8s-01: files successfully created.
      2021-09-12 16:36:36 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-01/hosts'
      2021-09-12 16:36:36 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-01/config.yml'
      root@kube-master:/etc/kubeasz#
      root@kube-master:/etc/kubeasz# tree clusters/
      clusters/
      └── k8s-01
          ├── config.yml
          └── hosts
           
           
      (2) Edit the hosts configuration file
      root@kube-master:~# cat /etc/kubeasz/clusters/k8s-01/hosts 
      # 'etcd' cluster should have odd member(s) (1,3,5,...)
      [etcd]
      192.168.2.200
      
      # master node(s)
      [kube_master]
      192.168.2.200
      
      # work node(s)
      [kube_node]
      192.168.2.201
      192.168.2.202
      192.168.2.203
      
      # [optional] harbor server, a private docker registry
      # 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
      [harbor]
      #192.168.2.206 NEW_INSTALL=true
      
      # [optional] loadbalance for accessing k8s from outside
      [ex_lb]
      #192.168.1.6 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
      #192.168.1.7 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
      
      # [optional] ntp server for the cluster
      [chrony]
      #192.168.1.1
      
      [all:vars]
      # --------- Main Variables ---------------
      # Secure port for apiservers
      SECURE_PORT="6443"
      
      # Cluster container-runtime supported: docker, containerd
      CONTAINER_RUNTIME="docker"
      
      # Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
      CLUSTER_NETWORK="calico"
      
      # Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
      PROXY_MODE="ipvs"
      
      # K8S Service CIDR, not overlap with node(host) networking
      SERVICE_CIDR="10.68.0.0/16"
      
      # Cluster CIDR (Pod CIDR), not overlap with node(host) networking
      CLUSTER_CIDR="172.20.0.0/16"
      
      # NodePort Range
      NODE_PORT_RANGE="30000-40000"
      
      # Cluster DNS Domain
      CLUSTER_DNS_DOMAIN="cluster.local"
      
      # -------- Additional Variables (don't change the default value right now) ---
      # Binaries Directory
      bin_dir="/opt/kube/bin"
      
      # Deploy Directory (kubeasz workspace)
      base_dir="/etc/kubeasz"
      
      # Directory for a specific cluster
      cluster_dir="{{ base_dir }}/clusters/k8s-01"
      
      # CA and other components cert/key Directory
      ca_dir="/etc/kubernetes/ssl"
       
       
      (3) Pull the image locally and push it to the self-hosted harbor registry; this is used by the configuration in step (4);
      root@kube-master:~# docker pull easzlab/pause-amd64:3.4.1
      root@kube-master:~# docker tag easzlab/pause-amd64:3.4.1 harbor.yourdomain.com/baseimages/pause-amd64:3.4.1
      root@kube-master:~# docker push harbor.yourdomain.com/baseimages/pause-amd64:3.4.1
      The push refers to repository [harbor.yourdomain.com/baseimages/pause-amd64]
      915e8870f7d1: Pushed 
      3.4.1: digest: sha256:9ec1e780f5c0196af7b28f135ffc0533eddcb0a54a0ba8b32943303ce76fe70d size: 526
       
      (4) Edit the cluster's configuration file config.yml
      root@kube-master:/etc/kubeasz# vi clusters/k8s-01/config.yml
       
      # [containerd] base (pause) container image
      SANDBOX_IMAGE: "harbor.yourdomain.com/baseimages/pause-amd64:3.4.1"
       
      # default: certs issued by the ca will expire in 50 years
      CA_EXPIRY: "876000h"
      CERT_EXPIRY: "438000h"
       
      # maximum number of pods per node
      MAX_PODS: 300
       
      # [docker] trusted HTTP (insecure) registries
      INSECURE_REG: '["192.168.2.0/24"]'
       
      # ------------------------------------------- calico
      # [calico] setting CALICO_IPV4POOL_IPIP: "off" can improve network performance; see docs/setup/calico.md for the constraints
      CALICO_IPV4POOL_IPIP: "Always"
       
      # install coredns automatically
      dns_install: "yes"
      corednsVer: "1.8.0"
      ENABLE_LOCAL_DNS_CACHE: false
       
      # install metrics server automatically
      metricsserver_install: "yes"
       
      # install dashboard automatically
      dashboard_install: "yes"
       
      # install ingress automatically
      ingress_install: "no"
       
      # install prometheus automatically
      prom_install: "no"
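      
      Once hosts and config.yml are in place (and after the template tweaks in section 3.3.3 below), the installation itself is driven by ezctl. A sketch of the usual kubeasz 3.1.0 sequence; steps can run in one shot or one at a time:
      
      root@kube-master:/etc/kubeasz# ./ezctl setup k8s-01 all   # runs steps 01-07 in order
      # or step by step:
      root@kube-master:/etc/kubeasz# ./ezctl setup k8s-01 01    # prepare
      root@kube-master:/etc/kubeasz# ./ezctl setup k8s-01 02    # etcd
      root@kube-master:/etc/kubeasz# ./ezctl setup k8s-01 03    # container runtime
      root@kube-master:/etc/kubeasz# ./ezctl setup k8s-01 04    # kube-master
      root@kube-master:/etc/kubeasz# ./ezctl setup k8s-01 05    # kube-node
      root@kube-master:/etc/kubeasz# ./ezctl setup k8s-01 06    # network plugin
      root@kube-master:/etc/kubeasz# ./ezctl setup k8s-01 07    # cluster addons (coredns, dashboard, metrics server)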
      
      
      

      3.3.3 Adjust templates and selected configuration

      # the ansible roles invoked are laid out as follows
      root@kube-master:~# tree /etc/kubeasz/roles/prepare/
      /etc/kubeasz/roles/prepare/
      ├── files
      │   └── sctp.conf
      ├── tasks
      │   ├── centos.yml
      │   ├── common.yml
      │   ├── main.yml
      │   ├── offline.yml
      │   └── ubuntu.yml
      └── templates
          ├── 10-k8s-modules.conf.j2
          ├── 30-k8s-ulimits.conf.j2
          ├── 95-k8s-journald.conf.j2
          └── 95-k8s-sysctl.conf.j2
      
      3 directories, 10 files
      
      root@kube-master:~# ls /etc/kubeasz/roles/
      calico  chrony  cilium  clean  cluster-addon  cluster-restore  containerd  deploy  docker  etcd  ex-lb  flannel  harbor  kube-lb  kube-master  kube-node  kube-ovn  kube-router  os-harden  prepare
      
      root@kube-master:~# ls /etc/kubeasz/roles/deploy/tasks/
      add-custom-kubectl-kubeconfig.yml  create-kube-controller-manager-kubeconfig.yml  create-kube-proxy-kubeconfig.yml  create-kube-scheduler-kubeconfig.yml  create-kubectl-kubeconfig.yml  main.yml
      
      # verify the current etcd status
      root@kube-master:~# ETCDCTL_API=3 etcdctl --endpoints=https://192.168.2.200:2379 --cacert=/etc/kubernetes/ssl/ca.pem  --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health
      https://192.168.2.200:2379 is healthy: successfully committed proposal: took = 11.923478ms
      
      # check which container runtimes are supported
      root@kube-master:~# cat /etc/kubeasz/playbooks/03.runtime.yml
      # to install a container runtime
      - hosts:
        - kube_master
        - kube_node
        roles:
        - { role: docker, when: "CONTAINER_RUNTIME == 'docker'" }
        - { role: containerd, when: "CONTAINER_RUNTIME == 'containerd'" }
        
      # the docker daemon.json template
      root@kube-master:~# cat /etc/kubeasz/roles/docker/templates/daemon.json.j2
      {
        "data-root": "{{ DOCKER_STORAGE_DIR }}",
        "exec-opts": ["native.cgroupdriver={{ CGROUP_DRIVER }}"],
      {% if ENABLE_MIRROR_REGISTRY %}
        "registry-mirrors": [
          "https://docker.mirrors.ustc.edu.cn",
          "http://hub-mirror.c.163.com"
        ], 
      {% endif %}
      {% if ENABLE_REMOTE_API %}
        "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"],
      {% endif %}
        "insecure-registries": {{ INSECURE_REG }},
        "max-concurrent-downloads": 10,
        "live-restore": true,
        "log-driver": "json-file",
        "log-level": "warn",
        "log-opts": {
          "max-size": "50m",
          "max-file": "1"
          },
        "storage-driver": "overlay2"
      }
      
      # path of the docker service unit template
      root@kube-master:~# cat /etc/kubeasz/roles/docker/templates/docker.service.j2 
      [Unit]
      Description=Docker Application Container Engine
      Documentation=http://docs.docker.io
      
      [Service]
      Environment="PATH={{ bin_dir }}:/bin:/sbin:/usr/bin:/usr/sbin"
      ExecStart={{ bin_dir }}/dockerd  # --iptables=false
      ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
      ExecReload=/bin/kill -s HUP $MAINPID
      Restart=always
      RestartSec=5
      LimitNOFILE=infinity
      LimitNPROC=infinity
      LimitCORE=infinity
      Delegate=yes
      KillMode=process
      
      [Install]
      WantedBy=multi-user.target
      
      # by default the master nodes are marked unschedulable (cordoned)
      root@kube-master:~# cat /etc/kubeasz/playbooks/04.kube-master.yml
      # to set up 'kube_master' nodes
      - hosts: kube_master
        roles:
        - kube-lb
        - kube-master
        - kube-node
        tasks:
        - name: Making master nodes SchedulingDisabled
          shell: "{{ bin_dir }}/kubectl cordon {{ inventory_hostname }} "
          when: "inventory_hostname not in groups['kube_node']"
          ignore_errors: true
      
        - name: Setting master role name 
          shell: "{{ bin_dir }}/kubectl label node {{ inventory_hostname }} kubernetes.io/role=master --overwrite"
          ignore_errors: true
          
      # some base images can be swapped to speed up provisioning
      /etc/kubeasz/clusters/k8s-01/config.yml
      # [containerd] base container image
      SANDBOX_IMAGE: "easzlab/pause-amd64:3.4.1"
      
      # before kube-proxy is installed, the ipvs scheduler algorithm can be set to match your needs; leaving it unchanged is also fine
      /etc/kubeasz/roles/kube-node/templates/kube-proxy-config.yaml.j2
      for example:
      mode: "{{ PROXY_MODE }}"
      ipvs:
        scheduler: wrr    # the default is rr
        
      # use network images from your own registry
      cat /etc/kubeasz/playbooks/06.network.yml
      # to install network plugin, only one can be choosen
      - hosts:
        - kube_master
        - kube_node
        roles:
        - { role: calico, when: "CLUSTER_NETWORK == 'calico'" }
        - { role: cilium, when: "CLUSTER_NETWORK == 'cilium'" }
        - { role: flannel, when: "CLUSTER_NETWORK == 'flannel'" }
        - { role: kube-router, when: "CLUSTER_NETWORK == 'kube-router'" }
        - { role: kube-ovn, when: "CLUSTER_NETWORK == 'kube-ovn'" }
      
      ls /etc/kubeasz/roles/calico/templates/
      calico-csr.json.j2  calico-v3.15.yaml.j2  calico-v3.3.yaml.j2  calico-v3.4.yaml.j2  calico-v3.8.yaml.j2  calicoctl.cfg.j2
      
      vim /etc/kubeasz/roles/calico/templates/calico-v3.15.yaml.j2
      image: calico/kube-controllers:v3.15.3   ==>harbor.yourdomain.com/baseimages/calico-kube-controllers:v3.15.3
      image: calico/cni:v3.15.3 ==> harbor.yourdomain.com/baseimages/calico-cni:v3.15.3
      image: calico/pod2daemon-flexvol:v3.15.3 ==>harbor.yourdomain.com/baseimages/calico-pod2daemon-flexvol:v3.15.3
      image: calico/node:v3.15.3 ==>harbor.yourdomain.com/baseimages/calico-node:v3.15.3
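      
      # sketch: mirror the calico images into the local harbor to match the replacements above
      for img in kube-controllers cni pod2daemon-flexvol node; do
          docker pull calico/${img}:v3.15.3
          docker tag calico/${img}:v3.15.3 harbor.yourdomain.com/baseimages/calico-${img}:v3.15.3
          docker push harbor.yourdomain.com/baseimages/calico-${img}:v3.15.3
      done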
      
      # location of the coredns template
      root@kube-master:~# cat /etc/kubeasz/clusters/k8s-01/yml/coredns.yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: coredns
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        labels:
          kubernetes.io/bootstrapping: rbac-defaults
        name: system:coredns
      rules:
      - apiGroups:
        - ""
        resources:
        - endpoints
        - services
        - pods
        - namespaces
        verbs:
        - list
        - watch
      - apiGroups:
        - ""
        resources:
        - nodes
        verbs:
        - get
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        annotations:
          rbac.authorization.kubernetes.io/autoupdate: "true"
        labels:
          kubernetes.io/bootstrapping: rbac-defaults
        name: system:coredns
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: system:coredns
      subjects:
      - kind: ServiceAccount
        name: coredns
        namespace: kube-system
      ---
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: coredns
        namespace: kube-system
      data:
        Corefile: |
          .:53 {
              errors
              health {
                  lameduck 5s
              }
              ready
              kubernetes cluster.local in-addr.arpa ip6.arpa {
                  pods insecure
                  fallthrough in-addr.arpa ip6.arpa
                  ttl 30
              }
              prometheus :9153
              forward . /etc/resolv.conf {
                  max_concurrent 1000
              }
              cache 30
              reload
              loadbalance
          }
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: coredns
        namespace: kube-system
        labels:
          k8s-app: kube-dns
          kubernetes.io/name: "CoreDNS"
      spec:
        replicas: 1
        strategy:
          type: RollingUpdate
          rollingUpdate:
            maxUnavailable: 1
        selector:
          matchLabels:
            k8s-app: kube-dns
        template:
          metadata:
            labels:
              k8s-app: kube-dns
          spec:
            securityContext:
              seccompProfile:
                type: RuntimeDefault
            priorityClassName: system-cluster-critical
            serviceAccountName: coredns
            affinity:
              podAntiAffinity:
                preferredDuringSchedulingIgnoredDuringExecution:
                - weight: 100
                  podAffinityTerm:
                    labelSelector:
                      matchExpressions:
                        - key: k8s-app
                          operator: In
                          values: ["kube-dns"]
                    topologyKey: kubernetes.io/hostname
            tolerations:
              - key: "CriticalAddonsOnly"
                operator: "Exists"
            nodeSelector:
              kubernetes.io/os: linux
            containers:
            - name: coredns
              image: coredns/coredns:1.8.0 
              imagePullPolicy: IfNotPresent
              resources:
                limits:
                  memory: 200Mi
                requests:
                  cpu: 100m
                  memory: 70Mi
              args: [ "-conf", "/etc/coredns/Corefile" ]
              volumeMounts:
              - name: config-volume
                mountPath: /etc/coredns
                readOnly: true
              ports:
              - containerPort: 53
                name: dns
                protocol: UDP
              - containerPort: 53
                name: dns-tcp
                protocol: TCP
              - containerPort: 9153
                name: metrics
                protocol: TCP
              livenessProbe:
                httpGet:
                  path: /health
                  port: 8080
                  scheme: HTTP
                initialDelaySeconds: 60
                timeoutSeconds: 5
                successThreshold: 1
                failureThreshold: 5
              readinessProbe:
                httpGet:
                  path: /ready
                  port: 8181
                  scheme: HTTP
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  add:
                  - NET_BIND_SERVICE
                  drop:
                  - all
                readOnlyRootFilesystem: true
            dnsPolicy: Default
            volumes:
              - name: config-volume
                configMap:
                  name: coredns
                  items:
                  - key: Corefile
                    path: Corefile
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: kube-dns
        namespace: kube-system
        annotations:
          prometheus.io/port: "9153"
          prometheus.io/scrape: "true"
        labels:
          k8s-app: kube-dns
          kubernetes.io/cluster-service: "true"
          kubernetes.io/name: "CoreDNS"
      spec:
        selector:
          k8s-app: kube-dns
        clusterIP: 10.68.0.2
        ports:
        - name: dns
          port: 53
          protocol: UDP
        - name: dns-tcp
          port: 53
          protocol: TCP
        - name: metrics
          port: 9153
          protocol: TCP
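      
      # once the cluster is up, in-cluster DNS can be verified from a throwaway pod
      # (busybox:1.28 has a working nslookup; the answer should come from the kube-dns ClusterIP 10.68.0.2 defined above)
      root@kube-master:~# kubectl run dns-test --rm -it --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local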
          
      # location of the dashboard templates
      root@kube-master:~# ls /etc/kubeasz/roles/cluster-addon/templates/dashboard/
      admin-user-sa-rbac.yaml    kubernetes-dashboard.yaml  read-user-sa-rbac.yaml
      root@kube-master:~# cat /etc/kubeasz/roles/cluster-addon/templates/dashboard/kubernetes-dashboard.yaml
      # Copyright 2017 The Kubernetes Authors.
      #
      # Licensed under the Apache License, Version 2.0 (the "License");
      # you may not use this file except in compliance with the License.
      # You may obtain a copy of the License at
      #
      #     http://www.apache.org/licenses/LICENSE-2.0
      #
      # Unless required by applicable law or agreed to in writing, software
      # distributed under the License is distributed on an "AS IS" BASIS,
      # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      # See the License for the specific language governing permissions and
      # limitations under the License.
      
      ---
      
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        name: kubernetes-dashboard
        namespace: kube-system
      
      ---
      
      kind: Service
      apiVersion: v1
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
          kubernetes.io/cluster-service: "true"
        name: kubernetes-dashboard
        namespace: kube-system
      spec:
        ports:
          - port: 443
            targetPort: 8443
        selector:
          k8s-app: kubernetes-dashboard
        type: NodePort
      
      ---
      
      apiVersion: v1
      kind: Secret
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        name: kubernetes-dashboard-certs
        namespace: kube-system
      type: Opaque
      
      ---
      
      apiVersion: v1
      kind: Secret
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        name: kubernetes-dashboard-csrf
        namespace: kube-system
      type: Opaque
      data:
        csrf: ""
      
      ---
      
      apiVersion: v1
      kind: Secret
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        name: kubernetes-dashboard-key-holder
        namespace: kube-system
      type: Opaque
      
      ---
      
      kind: ConfigMap
      apiVersion: v1
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        name: kubernetes-dashboard-settings
        namespace: kube-system
      
      ---
      
      kind: Role
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        name: kubernetes-dashboard
        namespace: kube-system
      rules:
        # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
        - apiGroups: [""]
          resources: ["secrets"]
          resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
          verbs: ["get", "update", "delete"]
          # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
        - apiGroups: [""]
          resources: ["configmaps"]
          resourceNames: ["kubernetes-dashboard-settings"]
          verbs: ["get", "update"]
          # Allow Dashboard to get metrics.
        - apiGroups: [""]
          resources: ["services"]
          resourceNames: ["heapster", "dashboard-metrics-scraper"]
          verbs: ["proxy"]
        - apiGroups: [""]
          resources: ["services/proxy"]
          resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
          verbs: ["get"]
      
      ---
      
      kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        name: kubernetes-dashboard
      rules:
        # Allow Metrics Scraper to get metrics from the Metrics server
        - apiGroups: ["metrics.k8s.io"]
          resources: ["pods", "nodes"]
          verbs: ["get", "list", "watch"]
      
      ---
      
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        name: kubernetes-dashboard
        namespace: kube-system
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: kubernetes-dashboard
      subjects:
        - kind: ServiceAccount
          name: kubernetes-dashboard
          namespace: kube-system
      
      ---
      
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: kubernetes-dashboard
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: kubernetes-dashboard
      subjects:
        - kind: ServiceAccount
          name: kubernetes-dashboard
          namespace: kube-system
      
      ---
      
      kind: Deployment
      apiVersion: apps/v1
      metadata:
        labels:
          k8s-app: kubernetes-dashboard
        name: kubernetes-dashboard
        namespace: kube-system
      spec:
        replicas: 1
        revisionHistoryLimit: 10
        selector:
          matchLabels:
            k8s-app: kubernetes-dashboard
        template:
          metadata:
            labels:
              k8s-app: kubernetes-dashboard
          spec:
            containers:
              - name: kubernetes-dashboard
                image: kubernetesui/dashboard:v2.1.0
                ports:
                  - containerPort: 8443
                    protocol: TCP
                args:
                  - --auto-generate-certificates
                  - --namespace=kube-system
                  # Uncomment the following line to manually specify Kubernetes API server Host
                  # If not specified, Dashboard will attempt to auto discover the API server and connect
                  # to it. Uncomment only if the default does not work.
                  # - --apiserver-host=http://my-address:port
                volumeMounts:
                  - name: kubernetes-dashboard-certs
                    mountPath: /certs
                    # Create on-disk volume to store exec logs
                  - mountPath: /tmp
                    name: tmp-volume
                livenessProbe:
                  httpGet:
                    scheme: HTTPS
                    path: /
                    port: 8443
                  initialDelaySeconds: 30
                  timeoutSeconds: 30
                securityContext:
                  allowPrivilegeEscalation: false
                  readOnlyRootFilesystem: true
                  runAsUser: 1001
                  runAsGroup: 2001
            volumes:
              - name: kubernetes-dashboard-certs
                secret:
                  secretName: kubernetes-dashboard-certs
              - name: tmp-volume
                emptyDir: {}
            serviceAccountName: kubernetes-dashboard
            # Comment the following tolerations if Dashboard must not be deployed on master
            tolerations:
              - key: node-role.kubernetes.io/master
                effect: NoSchedule
      
      ---
      
      kind: Service
      apiVersion: v1
      metadata:
        labels:
          k8s-app: dashboard-metrics-scraper
        name: dashboard-metrics-scraper
        namespace: kube-system
      spec:
        ports:
          - port: 8000
            targetPort: 8000
        selector:
          k8s-app: dashboard-metrics-scraper
      
      ---
      
      kind: Deployment
      apiVersion: apps/v1
      metadata:
        labels:
          k8s-app: dashboard-metrics-scraper
        name: dashboard-metrics-scraper
        namespace: kube-system
      spec:
        replicas: 1
        revisionHistoryLimit: 10
        selector:
          matchLabels:
            k8s-app: dashboard-metrics-scraper
        template:
          metadata:
            labels:
              k8s-app: dashboard-metrics-scraper
            annotations:
              seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
          spec:
            containers:
              - name: dashboard-metrics-scraper
                image: kubernetesui/metrics-scraper:v1.0.6
                ports:
                  - containerPort: 8000
                    protocol: TCP
                livenessProbe:
                  httpGet:
                    scheme: HTTP
                    path: /
                    port: 8000
                  initialDelaySeconds: 30
                  timeoutSeconds: 30
                volumeMounts:
                - mountPath: /tmp
                  name: tmp-volume
                securityContext:
                  allowPrivilegeEscalation: false
                  readOnlyRootFilesystem: true
                  runAsUser: 1001
                  runAsGroup: 2001
            serviceAccountName: kubernetes-dashboard
            nodeSelector:
              "kubernetes.io/os": linux
            # Comment the following tolerations if Dashboard must not be deployed on master
            tolerations:
              - key: node-role.kubernetes.io/master
                effect: NoSchedule
            volumes:
              - name: tmp-volume
                emptyDir: {}
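      
      After the addons are deployed, the dashboard is exposed as a NodePort over HTTPS. A sketch for finding the port and the login token (the secret's suffix will differ per cluster):

      root@kube-master:~# kubectl get svc -n kube-system kubernetes-dashboard        # note the NodePort mapped to 443
      root@kube-master:~# kubectl get secrets -n kube-system | grep admin-user
      root@kube-master:~# kubectl describe secret admin-user-token-z487q -n kube-system   # token for the web login
      # then browse to https://<any-node-ip>:<nodeport> and sign in with the token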
      