Kubernetes Explained (Pod, Controllers, Service)
Part 1: Pod Introduction
The various configuration options of the Pod resource and the principles behind them.
Most of the YAML manifests you write are derived directly from these configuration fields.
1: Pod structure and definition

Every Pod can contain one or more containers, which fall into two broad categories:
1. User containers: the containers that run your workload; there can be any number of them.
2. The pause container: a root container that every Pod has. It serves two purposes:
1) It acts as a reference point for assessing the health of the Pod as a whole.
2) An IP address is set on this root container and all other containers share it, which provides networking inside the Pod.
Communication between Pods themselves is implemented with layer-2 networking techniques;
within a Pod, every container shares the root container's IP address, so the outside world reaches a container through that IP plus the container's port.
2: Pod definition
The Pod resource manifest:
You can drill down through the attributes level by level, for example:
[root@master /]# kubectl explain pod
# view the second-level attributes
[root@master /]# kubectl explain pod.metadata
Description of the top-level fields:
apiVersion   API version of the resource
# view all available versions
[root@master /]# kubectl api-versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2
batch/v1
certificates.k8s.io/v1
coordination.k8s.io/v1
discovery.k8s.io/v1
events.k8s.io/v1
flowcontrol.apiserver.k8s.io/v1beta2
flowcontrol.apiserver.k8s.io/v1beta3
networking.k8s.io/v1
node.k8s.io/v1
policy/v1
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
storage.k8s.io/v1
v1
kind         Resource type
# view all resource types
[root@master /]# kubectl api-resources
metadata     Metadata: the resource's name, labels, and so on
[root@master /]# kubectl explain pod.metadata
status       Status information; generated automatically, you do not define it yourself
[root@master /]# kubectl get pods -o yaml
spec         Detailed definition of the resource; its main sub-fields are:
  containers: object     container list, defines the details of each container
  nodeName: string       schedules the Pod onto the node with this name (which node the Pod runs on)
  nodeSelector: map      label selector; schedules the Pod onto a node that carries these labels
  hostNetwork: boolean   defaults to false, meaning k8s assigns a Pod IP; if true, the Pod uses the host's network
  volumes: array         storage volumes mounted into the Pod
  restartPolicy: string  restart policy; what the Pod does when a container fails
3: Pod configuration
This section is mainly about the pod.spec.containers attribute.
Some of its fields are arrays (you can supply several values) and some take a single value; check each field to see which applies.
[root@master /]# kubectl explain pod.spec.containers
KIND:     Pod
VERSION:  v1
name              container name
image             image the container needs
imagePullPolicy   image pull policy: use the local image or pull from a registry
command           startup command list; if omitted, the command baked into the image is used (string)
args              argument list for the startup command above (string)
env               environment variables for the container (object)
ports             ports the container exposes (object)
resources         resource limits and requests (object)
1. Basic configuration
[root@master ~]# cat pod-base.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-base
namespace: dev
labels:
user: qqqq
spec:
containers:
- name: nginx
image: nginx:1.17.1
- name: busybox
image: busybox:1.30
A simple Pod configuration containing two containers:
nginx: a lightweight web server
busybox: a small collection of Linux command-line tools
[root@master ~]# kubectl create -f pod-base.yaml
pod/pod-base created
# check the Pod status
# READY: the Pod has 2 containers, but only one is ready; the other has not started
# RESTARTS: restart count; one container keeps failing, so the Pod keeps restarting it trying to recover
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-base 1/2 CrashLoopBackOff 4 (29s ago) 2m36s
# view the Pod details
[root@master ~]# kubectl describe pods pod-base -n dev
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m51s default-scheduler Successfully assigned dev/pod-base to node2
Normal Pulling 4m51s kubelet Pulling image "nginx:1.17.1"
Normal Pulled 4m17s kubelet Successfully pulled image "nginx:1.17.1" in 33.75s (33.75s including waiting)
Normal Created 4m17s kubelet Created container nginx
Normal Started 4m17s kubelet Started container nginx
Normal Pulling 4m17s kubelet Pulling image "busybox:1.30"
Normal Pulled 4m9s kubelet Successfully pulled image "busybox:1.30" in 8.356s (8.356s including waiting)
Normal Created 3m27s (x4 over 4m9s) kubelet Created container busybox
Normal Started 3m27s (x4 over 4m9s) kubelet Started container busybox
Warning BackOff 2m59s (x7 over 4m7s) kubelet Back-off restarting failed container busybox in pod pod-base_dev(2e9aeb3f-2bec-4af5-853e-2d8473e115a7)
Normal Pulled 2m44s (x4 over 4m8s) kubelet Container image "busybox:1.30" already present on machine
We will deal with this problem later.
2. Image pull policy
imagePullPolicy
A Pod may contain one container whose image exists locally and another whose image does not; this field controls whether the local image or the remote registry is used.
Values of imagePullPolicy:
Always: always pull the image from the remote registry
IfNotPresent: use the local image if it exists, otherwise pull from the remote registry
Never: only ever use the local image, never pull from the registry
If the image tag is a specific version number, the default is IfNotPresent;
if the tag is latest, the default is Always.
[root@master ~]# cat pod-policy.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-imagepullpolicy
namespace: dev
labels:
user: qqqq
spec:
containers:
- name: nginx
image: nginx:1.17.2
imagePullPolicy: Never
- name: busybox
image: busybox:1.30
[root@master ~]# kubectl create -f pod-policy.yaml
pod/pod-imagepullpolicy created
# check the Pod status
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-base 1/2 CrashLoopBackOff 9 (3m59s ago) 25m
pod-imagepullpolicy 0/2 CrashLoopBackOff 1 (9s ago) 19s
# view the detailed information
[root@master ~]# kubectl describe pods pod-imagepullpolicy -n dev
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 64s default-scheduler Successfully assigned dev/pod-imagepullpolicy to node1
Normal Pulling 64s kubelet Pulling image "busybox:1.30"
Normal Pulled 56s kubelet Successfully pulled image "busybox:1.30" in 8.097s (8.097s including waiting)
Normal Created 39s (x3 over 56s) kubelet Created container busybox
Normal Started 39s (x3 over 56s) kubelet Started container busybox
Normal Pulled 39s (x2 over 55s) kubelet Container image "busybox:1.30" already present on machine
Warning ErrImageNeverPull 38s (x6 over 64s) kubelet Container image "nginx:1.17.2" is not present with pull policy of Never
Warning Failed 38s (x6 over 64s) kubelet Error: ErrImageNeverPull
Warning BackOff 38s (x3 over 54s) kubelet Back-off restarting failed container busybox in pod pod-imagepullpolicy_dev(38d5d2ff-6155-4ff3-ad7c-8b7f4a370107)
# an error is reported: the image pull failed (nginx:1.17.2 is not present locally and the policy is Never)
# fix: change the policy to IfNotPresent
[root@master ~]# kubectl delete -f pod-policy.yaml
[root@master ~]# kubectl apply -f pod-policy.yaml
[root@master ~]# kubectl get pods -n dev
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-base 1/2 CrashLoopBackOff 11 (2m34s ago) 34m
pod-imagepullpolicy 1/2 CrashLoopBackOff 4 (63s ago) 2m55s
Now the image is pulled successfully.
3. Startup command
command: the container's startup command list; if not specified, the command baked into the image is used
args: the argument list for the startup command
Why did busybox stop running? busybox is not a long-running program but a collection of tools, so it exits right after starting. The fix is to keep a process running, and that is what the command field is for.
[root@master ~]# cat command.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-command
namespace: dev
spec:
containers:
- name: nginx
image: nginx:1.17.1
- name: busybox
image: busybox:1.30
command: ["/bin/sh","-c","touch /tmp/hello.txt;while true;do /bin/echo $(date +%T) >> /tmp/hell0.txt;sleep 3;done;"]
# /bin/sh runs a shell
# -c treats the following string as a command to execute
# the command writes the current time into the file, sleeps 3 seconds, and repeats forever, so there is always a running process
[root@master ~]# kubectl create -f command.yaml
pod/pod-command created
# now both containers are up
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-command 2/2 Running 0 6s
# enter the busybox container
[root@master ~]# kubectl exec pod-command -n dev -it -c busybox /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ #
Now we are inside the container.
/ # cat /tmp/hell0.txt    # the file keeps growing; the loop process keeps the container from exiting
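As a side note, the two containers in this Pod share the root (pause) container's network namespace, so busybox can reach nginx over localhost. A minimal, hedged check, assuming the pod-command Pod defined above is still running (wget and head are busybox built-ins):
[root@master ~]# kubectl exec -it pod-command -n dev -c busybox -- /bin/sh
/ # wget -q -O - http://127.0.0.1:80 | head -n 4     # nginx in the same Pod answers on localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>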
Note: since command can already both start the container and pass parameters, why is there a separate args option? This is related to Docker: these two fields override the ENTRYPOINT/CMD defined in the image's Dockerfile.
The image k8s pulls was built from a Dockerfile, and the Pod's command and args interact with it as follows:
Cases:
1. If neither command nor args is set, the Dockerfile's configuration is used.
2. If command is set but args is not, the Dockerfile's defaults are ignored and the given command is executed.
3. If command is not set but args is, the ENTRYPOINT configured in the Dockerfile is executed with the given args as its arguments.
4. If both are set, the Dockerfile's configuration is ignored and command is executed with args appended (see the sketch below).
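A hedged sketch of case 4 (the Pod name and the echoed text are placeholders, not from the original notes): command replaces the image's ENTRYPOINT/CMD and args supplies its parameters.
apiVersion: v1
kind: Pod
metadata:
  name: pod-command-args          # hypothetical name
  namespace: dev
spec:
  containers:
  - name: busybox
    image: busybox:1.30
    command: ["/bin/echo"]              # overrides the image's ENTRYPOINT/CMD ...
    args: ["hello", "from", "args"]     # ... and these become its arguments
With this spec the container just prints "hello from args" and exits (so with the default restart policy it would be restarted repeatedly); omitting command while keeping args would correspond to case 3.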
4. Environment variables (just know this exists)
env injects environment variables into the container; it is an array of objects.
Each entry is a key/value pair: a name plus a value.
[root@master ~]# cat pod-env.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-command
namespace: dev
spec:
containers:
- name: nginx
image: nginx:1.17.1
- name: busybox
image: busybox:1.30
command: ["/bin/sh","-c","touch /tmp/hello.txt;while true;do /bin/echo $(date +%T) >> /tmp/hell0.txt;sleep 3;done;"]
env:
- name: "username"
value: "admin"
- name: "password"
value: "123456"
# create the Pod
[root@master ~]# kubectl create -f pod-env.yaml
pod/pod-command created
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-command 2/2 Running 0 47s
# enter the container
# the -c option selects the container; it can be omitted when the Pod has only one container
[root@master ~]# kubectl exec -ti pod-command -n dev -c busybox /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ls
bin dev etc home proc root sys tmp usr var
/ # echo $username
admin
/ # echo $password
123456
5. Port settings (ports)
View the options of the ports field:
[root@master ~]# kubectl explain pod.spec.containers.ports
name           port name; must be unique within the Pod
containerPort  port the container listens on
hostPort       port to expose on the host; if set, only one replica of the container can run per host (multiple Pods would conflict on the same port)
hostIP         host IP to bind the external port to (usually omitted)
protocol       port protocol: TCP (default), UDP or SCTP

Example:
[root@master ~]# cat pod-port.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-ports
namespace: dev
spec:
containers:
- name: nginx
image: nginx:1.17.1
ports:
- name: nginx-port
containerPort: 80
protocol: TCP
kubectl create -f pod-port.yaml
[root@master ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-command 2/2 Running 0 27m 10.244.1.2 node2 <none> <none>
pod-ports 1/1 Running 0 2m58s 10.244.2.2 node1 <none> <none>
# to reach the program inside the container, use the Pod's IP plus the container port
[root@master ~]# curl 10.244.2.2:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a >nginx.org</a>.<br/>
Commercial support is available at
<a >nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
6. Resource limits (resources)
Containers consume resources when they run. If one container suddenly eats a huge amount of memory, the other containers cannot work properly, so we put resource limits on certain containers.
For example, we state that container A only needs 600Mi of memory; if it uses more, something is wrong and the container is restarted.
There are two sub-options:
limits: the maximum resources the running container may use; if the container exceeds the limits it is terminated and restarted (upper bound)
requests: the minimum resources the container needs; if the environment cannot provide them, the container will not start (lower bound)
Notes:
1. This applies to CPU and memory only.
Example:
[root@master ~]# cat pod-r.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-resources
namespace: dev
spec:
containers:
- name: nginx
image: nginx:1.17.1
resources:
limits:
cpu: "2"
memory: "10Gi"
requests:
cpu: "1"
memory: "10Mi"
kubectl create -f pod-r.yaml
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-command 2/2 Running 0 41m
pod-ports 1/1 Running 0 16m
pod-resources 1/1 Running 0 113s
# now require at least 10G of memory to start the container; it will not start
[root@master ~]# cat pod-r.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-resources
namespace: dev
spec:
containers:
- name: nginx
image: nginx:1.17.1
resources:
limits:
cpu: "2"
memory: "10Gi"
requests:
cpu: "1"
memory: "10G"
[root@master ~]# kubectl create -f pod-r.yaml
pod/pod-resources created
# check the status
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-command 2/2 Running 0 44m
pod-ports 1/1 Running 0 19m
pod-resources 0/1 Pending 0 89s
# view the detailed information
[root@master ~]# kubectl describe pods pod-resources -n dev
CPU and memory units:
cpu is an integer number of cores (or millicores, e.g. 500m)
memory can be expressed as Gi, Mi, G, M, and so on
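For reference, a small fragment sketching the common unit notations (the values are arbitrary examples, not taken from the manifests above):
resources:
  requests:
    cpu: "500m"      # 500 millicores = 0.5 CPU
    memory: "256Mi"  # 256 mebibytes (1Mi = 1024*1024 bytes)
  limits:
    cpu: "1"         # one full CPU core
    memory: "1Gi"    # 1 gibibyte; plain "G"/"M" are decimal units (10^9 / 10^6 bytes)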
Part 2: Pod lifecycle
1: Concept
The Pod lifecycle is the span of time from the creation of a Pod object until its end. It mainly includes the following phases:
1. Pod creation
2. Running init containers: a special kind of container; there can be any number of them, and they always run before the main containers
3. Running the main containers
   post-start hook and pre-stop hook: commands run right after start and right before termination, two special points in time
   liveness probes and readiness probes
4. Pod termination

During its whole lifecycle a Pod is in one of five phases:
Pending: the API server has created the Pod object, but it has not been scheduled yet or its images are still being downloaded
Running: the Pod has been scheduled to a node and all containers have been created by the kubelet
Succeeded: all containers in the Pod terminated successfully and will not be restarted (for example a container that runs for 30 seconds, prints, and exits)
Failed: all containers have terminated, but at least one terminated in failure, i.e. returned a non-zero exit status
Unknown: the API server cannot obtain the Pod's status, usually because of a network communication failure
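A quick, hedged way to read the phase directly (pod-base and the dev namespace are just the examples used earlier; the output varies with the Pod's actual state):
[root@master ~]# kubectl get pod pod-base -n dev -o jsonpath='{.status.phase}'
Running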
2: Pod creation and termination
Pod creation process:

All components watch the API server.
As soon as the creation request is accepted, the API server returns and the object is stored in etcd.
The scheduler picks a node for the Pod and reports the result back to the API server.
The kubelet on that node sees that a Pod has been scheduled to it, calls the container runtime (e.g. Docker) to start the containers, and reports the result to the API server.
The API server stores the received Pod status information in etcd.
Pod termination process:

A Service acts as a proxy for Pods: you access the Pods through the Service.
A deletion request is sent to the API server, which updates the Pod's state and marks it as terminating; when the kubelet sees the terminating state, it starts the Pod shutdown procedure.
3: Init containers
Init containers do the preparatory work for the main containers (setting up the environment). They have two characteristics:
1. An init container must run to completion; if it fails, k8s keeps restarting it until it finishes successfully.
2. Init containers run in the order they are defined; the next one starts only when the previous one has succeeded.
Typical use cases for init containers:
Provide utilities or custom code that the main container image does not contain.
Because init containers must start and finish before the application containers, they can delay the application container's start until its dependencies are satisfied.
Example: nginx depends on mysql and redis. The first init container keeps trying to reach mysql until it succeeds, then the second does the same for redis; only when both conditions are satisfied does the nginx main container start.
Test:
Assume mysql is at 192.168.109.201 and redis at 192.168.109.202.
[root@master ~]# cat pod-init.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-init
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
ports:
- name: nginx-port
containerPort: 80
initContainers:
- name: test-mysql
image: busybox:1.30
command: ['sh','-c','until ping 192.168.109.201 -c 1; do echo waiting for mysql; sleep 2; done;']
- name: test-redis
image: busybox:1.30
command: ['sh','-c','until ping 192.168.109.202 -c 1; do echo waiting for redis; sleep 2; done;']
# since these addresses do not exist yet, initialization fails
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-init 0/1 Init:CrashLoopBackOff 3 (27s ago) 83s
# add the first address; the first init container can now finish
[root@master ~]# ifconfig ens33:1 192.168.109.201 netmask 255.255.255.0 up
# add the second address; the second init container can finish as well
[root@master ~]# ifconfig ens33:2 192.168.109.202 netmask 255.255.255.0 up
[root@master ~]# kubectl get pods -n dev -w
NAME READY STATUS RESTARTS AGE
pod-init 0/1 Init:0/2 0 6s
pod-init 0/1 Init:1/2 0 13s
pod-init 0/1 Init:1/2 0 14s
pod-init 0/1 PodInitializing 0 27s
pod-init 1/1 Running 0 28s
Now the main container runs successfully.
4: Main container lifecycle hooks
Hooks are specific points in the main container's life where user-supplied code is allowed to run.
There are two such points:

postStart: runs immediately after the container starts; if it fails, the container is restarted.
preStop: runs before the container is terminated (while it is in the terminating state); it blocks the deletion until it completes successfully.
1. Hook handlers (three ways to define the action)
exec: run a command inside the container
This is the most commonly used form:
lifecycle:
postStart:
exec:
command:
- cat
- /tmp/healthy
tcpSocket: try to connect to a given socket in the current container, e.g. access port 8080 inside the container
lifecycle:
postStart:
tcpSocket:
port: 8080 # tries to connect to port 8080
httpGet: send an HTTP request to a URL from within the current container
lifecycle:
poststart:
httpGet:
path: /        # URL path
port: 80
host:          # host address (defaults to the Pod IP if omitted)
scheme: HTTP   # supported protocol, HTTP or HTTPS
Example:
apiVersion: v1
kind: Pod
metadata:
name: pod-exec
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
ports:
- name: nginx-port
containerPort: 80 # container port; normally a Service exposes the Pod's port and maps it to the outside
lifecycle:
postStart:
exec: ### when the container starts, run a command that replaces the default page content
command: ["/bin/sh","-c","echo poststart > /usr/share/nginx/html/index.html"]
preStop:
exec: ### when the container stops, pass -s quit to shut nginx down gracefully
command: ["/usr/sbin/nginx","-s","quit"]
[root@master ~]# kubectl create -f pod-exec.yaml
pod/pod-exec created
[root@master ~]# kubectl get pods -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-exec 1/1 Running 0 53s 10.244.1.7 node1 <none> <none>
pod-init 1/1 Running 0 27m 10.244.1.6 node1 <none> <none>
Now access the service inside the Pod's container;
the format is Pod IP + container port.
[root@master ~]# curl 10.244.1.7:80
poststart
5: Container probes
Container probes check whether the application instance in the main container is working properly; they are a traditional mechanism for guaranteeing service availability. If a probe finds that an instance is not in the expected state, k8s removes the problematic instance from service so that it carries no business traffic. k8s provides two kinds of probes for this:
They are:
liveness probes: check whether the application instance is running normally; if not, k8s restarts the container (they decide whether to restart)
readiness probes: check whether the application instance can accept requests; if not, k8s does not forward traffic to it. For example nginx may still be loading many web files; during that time the Service would otherwise already treat it as ready, and requests sent there could not be served, so the probe keeps traffic away until it really is ready.
Put differently, one Service proxies many Pods; if a request reaches a broken Pod and there are no probes, the user gets an error.
Purpose:
1. Find the Pods that have gone wrong.
2. Determine whether the service is ready. (A hedged readinessProbe sketch follows right after this list.)
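The examples below all use livenessProbe; a readinessProbe is declared the same way, just under a different key. A minimal sketch, assuming the same nginx image (the probe timings are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: pod-readiness            # hypothetical name
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
    readinessProbe:              # the Pod only receives Service traffic once this succeeds
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10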
Three probe methods:
exec: run a command; exit code 0 means healthy
livenessProbe
exec:
command:
- cat
- /tmp/healthy
tcpSocket:
livenessProbe:
tcpSocket:
port: 8080
httpGet:
a response status code between 200 and 399 means the program is healthy, anything else is unhealthy
livenessProbe:
httpGet:
path: /       # URL path
port: 80      # port to probe
host:         # host address (defaults to the Pod IP)
scheme: HTTP
Examples:
exec example:
[root@master ~]# cat pod-live-exec.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-liveness-exec
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
exec:
command: ["/bin/cat","/tmp/hello.txt"] #由于沒有這個文件,所以就會一直進行重啟
#出現了問題,就會處于一直重啟的狀態
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-exec 1/1 Running 0 38m
pod-init 1/1 Running 0 65m
pod-liveness-exec 1/1 Running 2 (27s ago) 97s
# view the Pod details
[root@master ~]# kubectl describe pod -n dev pod-liveness-exec
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m13s default-scheduler Successfully assigned dev/pod-liveness-exec to node2
Normal Pulling 2m12s kubelet Pulling image "nginx:1.17.1"
Normal Pulled 2m kubelet Successfully pulled image "nginx:1.17.1" in 12.606s (12.606s including waiting)
Normal Created 33s (x4 over 2m) kubelet Created container main-container
Normal Started 33s (x4 over 2m) kubelet Started container main-container
Warning Unhealthy 33s (x9 over 113s) kubelet Liveness probe failed: /bin/cat: /tmp/hello.txt: No such file or directory
Normal Killing 33s (x3 over 93s) kubelet Container main-container failed liveness probe, will be restarted
Normal Pulled 33s (x3 over 93s) kubelet Container image "nginx:1.17.1" already present on machine
# it keeps restarting
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-exec 1/1 Running 0 39m
pod-init 1/1 Running 0 66m
pod-liveness-exec 0/1 CrashLoopBackOff 4 (17s ago) 2m57s
# a working example
[root@master ~]# cat pod-live-exec.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-liveness-exec
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
exec:
command: ["/bin/ls","/tmp/"]
[root@master ~]# kubectl create -f pod-live-exec.yaml
pod/pod-liveness-exec created
# it no longer keeps restarting
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-exec 1/1 Running 0 42m
pod-init 1/1 Running 0 69m
pod-liveness-exec 1/1 Running 0 56s
# check the details: no errors this time
tcpSocket:
[root@master ~]# cat tcp.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-liveness-tcp
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
tcpSocket:
port: 8080 # probe port 8080 of the container
kubectl create -f tcp.yaml
# it keeps restarting because nothing listens on port 8080
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-liveness-tcp 1/1 Running 5 (72s ago) 3m43s
# view the details
[root@master ~]# kubectl describe pod -n dev pod-liveness-tcp
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m22s default-scheduler Successfully assigned dev/pod-liveness-tcp to node2
Normal Pulled 112s (x4 over 3m22s) kubelet Container image "nginx:1.17.1" already present on machine
Normal Created 112s (x4 over 3m22s) kubelet Created container main-container
Normal Started 112s (x4 over 3m22s) kubelet Started container main-container
Normal Killing 112s (x3 over 2m52s) kubelet Container main-container failed liveness probe, will be restarted
Warning Unhealthy 102s (x10 over 3m12s) kubelet Liveness probe failed: dial tcp 1
A working example:
[root@master ~]# cat tcp.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-liveness-tcp
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
tcpSocket:
port: 80
# check the result: no problems at all
[root@master ~]# kubectl describe pods -n dev pod-liveness-tcp
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27s default-scheduler Successfully assigned dev/pod-liveness-tcp to node2
Normal Pulled 28s kubelet Container image "nginx:1.17.1" already present on machine
Normal Created 28s kubelet Created container main-container
Normal Started 28s kubelet Started container main-container
httpGet
[root@master ~]# cat tcp.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-liveness-http
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
httpGet:
scheme: HTTP
port: 80
path: /hello # http://127.0.0.1:80/hello
# it keeps restarting (the /hello path returns 404)
[root@master ~]# kubectl describe pod -n dev pod-liveness-http
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-liveness-http 1/1 Running 1 (17s ago) 48s
pod-liveness-tcp 1/1 Running 0 4m21s
# the working configuration
[root@master ~]# cat tcp.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-liveness-http
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
httpGet:
scheme: HTTP
port: 80
path: /
[root@master ~]# kubectl describe pods -n dev pod-liveness-http
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21s default-scheduler Successfully assigned dev/pod-liveness-http to node1
Normal Pulled 22s kubelet Container image "nginx:1.17.1" already present on machine
Normal Created 22s kubelet Created container main-container
Normal Started 22s kubelet Started container main-container
Additional probe settings
[root@master ~]# kubectl explain pod.spec.containers.livenessProbe
initialDelaySeconds <integer>  seconds to wait after the container starts before the first probe
timeoutSeconds      <integer>  probe timeout, default 1 second, minimum 1 second
periodSeconds       <integer>  how often to probe, default 10 seconds, minimum 1 second
failureThreshold    <integer>  how many consecutive failures count as a failure, default 3, minimum 1
successThreshold    <integer>  how many consecutive successes count as a success, default 1
Example:
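The original notes leave this example blank; a hedged sketch of how these tuning fields slot into a livenessProbe (the numbers are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-tuned       # hypothetical name
  namespace: dev
spec:
  containers:
  - name: main-container
    image: nginx:1.17.1
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /
      initialDelaySeconds: 30    # give nginx 30s to start before the first probe
      timeoutSeconds: 5          # each probe must answer within 5s
      periodSeconds: 10          # probe every 10s
      failureThreshold: 3        # restart after 3 consecutive failures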
6: Restart policy
When a probe detects a problem, k8s restarts the container in the Pod according to the Pod's restart policy. There are three policies:
Always: automatically restart the container whenever it fails (the default)
OnFailure: restart only when the container terminates with a non-zero exit code (abnormal termination)
Never: never restart the container, whatever its state
The restart policy applies to all containers in the Pod. The first restart happens immediately when needed; subsequent restarts are delayed by the kubelet with increasing back-off intervals of 10s, 20s, 40s, ... up to a maximum of 300s.
Example:
apiVersion: v1
kind: Pod
metadata:
name: restart-pod
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
ports:
- name: nginx-port
containerPort: 80
livenessProbe:
httpGet:
scheme: HTTP
port: 80
path: /hello # http://127.0.0.1:80/hello
restartPolicy: Always
# with Always it keeps restarting
# change the policy to Never
Now when the probe fails the container is not restarted; it is simply stopped.
The Pod ends up in the Completed state:
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-liveness-http 1/1 Running 1 (16h ago) 16h
pod-liveness-tcp 1/1 Running 1 (22m ago) 16h
restart-pod 0/1 Completed 0 41s
[root@master ~]# kubectl describe pod -n dev restart-pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 84s default-scheduler Successfully assigned dev/restart-pod to node1
Normal Pulled 84s kubelet Container image "nginx:1.17.1" already present on machine
Normal Created 84s kubelet Created container main-container
Normal Started 84s kubelet Started container main-container
Warning Unhealthy 55s (x3 over 75s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 55s kubelet Stopping container main-container
Part 3: Pod scheduling
By default, the node a Pod runs on is computed by the scheduler component using its algorithms, and the process is not under manual control. In practice this is often not enough: we want to control which node a Pod runs on, and that is what scheduling rules are for. There are four broad categories:
Automatic scheduling: the scheduler's algorithm decides
Directed scheduling: the nodeName field (node's name) or nodeSelector (labels)
Affinity scheduling: nodeAffinity (affinity to nodes), podAffinity (affinity to Pods), podAntiAffinity (anti-affinity: schedule away from the given Pods)
Taints and tolerations: taints are set on the node side (a tainted node repels Pods); tolerations are set on the Pod side and allow a Pod onto a node despite its taints
1: Directed scheduling
The Pod declares nodeName or nodeSelector and is scheduled onto the specified node. This is mandatory: even if the node does not exist, the Pod is still "scheduled" there and simply fails to run.
1. nodeName
Forced scheduling: it bypasses the scheduler's logic entirely and places the Pod directly onto the named node.
[root@master ~]# cat pod-nodename.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-nodename
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
nodeName: node1 # schedule this Pod onto node1
[root@master ~]# kubectl create -f pod-nodename.yaml
pod/pod-nodename created
# the Pod runs on node1
[root@master ~]# kubectl get pods -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-liveness-http 1/1 Running 1 (16h ago) 17h 10.244.2.8 node1 <none> <none>
pod-liveness-tcp 1/1 Running 1 (42m ago) 17h 10.244.1.7 node2 <none> <none>
pod-nodename 1/1 Running 0 41s 10.244.2.10 node1 <none> <none>
# change the node name to one that does not exist: the Pod simply stays Pending
[root@master ~]# kubectl get pods -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-liveness-http 1/1 Running 1 (16h ago) 17h 10.244.2.8 node1 <none> <none>
pod-liveness-tcp 1/1 Running 1 (43m ago) 17h 10.244.1.7 node2 <none> <none>
pod-nodename 0/1 Pending 0 9s <none> node3 <none> <none>
2. nodeSelector
This uses the labels on the nodes (a label selector); it is also mandatory.
[root@master ~]# kubectl label nodes node1 nodeenv=pro
node/node1 labeled
[root@master ~]# kubectl label nodes node2 nodeenv=test
node/node2 labeled
[root@master ~]# cat pod-selector.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-select
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
nodeSelector:
nodeenv: pro
[root@master ~]# kubectl get pods -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-liveness-http 1/1 Running 1 (17h ago) 17h 10.244.2.8 node1 <none> <none>
pod-liveness-tcp 1/1 Running 1 (51m ago) 17h 10.244.1.7 node2 <none> <none>
pod-select 1/1 Running 0 2m16s 10.244.2.11 node1 <none> <none>
# a label that does not exist
Change the value to pr1: scheduling fails
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-liveness-http 1/1 Running 1 (17h ago) 17h
pod-liveness-tcp 1/1 Running 1 (51m ago) 17h
pod-select 0/1 Pending 0 5s
2: Affinity scheduling
The problem above is that directed scheduling is mandatory: if no matching node exists, the Pod fails to be scheduled.
Affinity declares a preferred placement: if a matching node is found the Pod is scheduled there, otherwise another node is used. That is what affinity means.
nodeAffinity: affinity to nodes; targets nodes, mainly through their labels
podAffinity: affinity to Pods; targets running Pods. For example, a web Pod needs to sit next to a mysql Pod: label one of them and the other will seek it out.
podAntiAffinity: anti-affinity to Pods; targets running Pods it wants to stay away from, so it chooses other nodes.
When affinity is useful:
If two applications interact frequently, affinity places them as close together as possible, reducing the performance cost of network communication (if they land on the same node, the communication overhead is minimal).
When anti-affinity is useful:
When an application is deployed with multiple replicas, anti-affinity spreads the replicas across different nodes, which improves availability.
If the replicas all do the same job, putting them on different nodes means that when one node dies the other nodes keep providing the service.
Parameters:
[root@master ~]# kubectl explain pod.spec.affinity.nodeAffinity
requiredDuringSchedulingIgnoredDuringExecution   the node must satisfy all of the specified rules; a hard constraint
  nodeSelectorTerms: node selection list
    matchFields: node selector requirements listed by node field
    matchExpressions: node selector requirements listed by node label
      key: label key
      values: label values
      operator: relational operator; supports In, NotIn, Exists, DoesNotExist, Gt, Lt
If a node matches, the Pod is scheduled there; if none matches, scheduling fails.
preferredDuringSchedulingIgnoredDuringExecution <NodeSelector>   soft constraint: prefer nodes that satisfy the rules
  preference: a node selector term associated with the corresponding weight
    matchFields: node selector requirements listed by node field
    matchExpressions: node selector requirements listed by node label
      key: label key
      values: label values
      operator: relational operator
  weight: preference weight, 1-100 (this is what makes it a preference)
If no node matches, the Pod is scheduled onto some other node.
Operator examples:
- key: nodedev            # matches nodes that have a label with key nodedev
  operator: Exists
- key: nodedev            # matches nodes whose label nodedev has value xxx or yyy
  operator: In
  values: ['xxx','yyy']
1. nodeAffinity
Node affinity comes in two flavours, hard constraints and soft constraints, and selects by the labels on the nodes.
[root@master ~]# cat pod-aff-re.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-aff
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
affinity:
nodeAffinity: ## affinity settings
requiredDuringSchedulingIgnoredDuringExecution: # node affinity, hard constraint
nodeSelectorTerms:
- matchExpressions: # match nodes whose label nodeenv has a value in [xxx,yyy]
  - key: nodeenv
    operator: In
    values: ["xxx","yyy"]
[root@master ~]# kubectl create -f pod-aff-re.yaml
pod/pod-aff created
[root@master ~]# kubectl get pod -n dev
NAME READY STATUS RESTARTS AGE
pod-aff 0/1 Pending 0 23s
pod-liveness-http 1/1 Running 1 (17h ago) 18h
pod-liveness-tcp 1/1 Running 1 (94m ago) 18h
pod-select 0/1 Pending 0 43m
# scheduling fails
# change the value to pro and the Pod can be scheduled onto node1
[root@master ~]# kubectl create -f pod-aff-re.yaml
pod/pod-aff created
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-aff 1/1 Running 0 5s
pod-liveness-http 1/1 Running 1 (17h ago) 18h
pod-liveness-tcp 1/1 Running 1 (96m ago) 18h
pod-select 0/1 Pending 0 45m
Soft constraint:
# soft constraint
[root@master ~]# cat pod-aff-re.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-aff
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution: # soft constraint
- weight: 1
preference:
matchExpressions:
- key: nodeenv
operator: In
values: ["xxx","yyy"]
# the Pod is simply scheduled onto node2
[root@master ~]# kubectl get pods -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-aff 1/1 Running 0 41s 10.244.1.9 node2 <none> <none>
pod-liveness-http 1/1 Running 1 (17h ago) 18h 10.244.2.8 node1 <none> <none>
pod-liveness-tcp 1/1 Running 1 (102m ago) 18h 10.244.1.7 node2 <none> <none>
pod-select 0/1 Pending 0 50m <none> <none> <none> <none>
Notes:
If both nodeSelector and nodeAffinity are defined, both must be satisfied for the Pod to run on a node. If nodeAffinity specifies multiple nodeSelectorTerms, matching any one of them is enough. If one nodeSelectorTerms entry contains multiple matchExpressions, a node must satisfy all of them to match. If the labels of the node a Pod runs on change during the Pod's lifetime so that they no longer satisfy the Pod's node affinity, the change is ignored.
Affinity is only evaluated at scheduling time, so once a Pod has been scheduled successfully, later label changes do not affect it.
2. podAffinity
Pod affinity uses running Pods as the reference; it also has hard and soft constraints.
kubectl explain pod.spec.affinity.podAffinity
requiredDuringSchedulingIgnoredDuringExecution   hard constraint
  namespaces: namespace of the reference Pods; if omitted, defaults to the namespace of the Pod being scheduled
  topologyKey: scope of the scheduling decision: node, subnet, operating system, and so on
    ### kubernetes.io/hostname means nodes are the unit of distinction, i.e. schedule onto the same node as the reference Pod
    ### an OS label would mean scheduling onto a node with the same operating system as the reference Pod
  labelSelector: label selector
    matchExpressions: selector requirements listed by key
      key: label key
      values: label values
      operator: operator
    matchLabels: shorthand for several matchExpressions entries
preferredDuringSchedulingIgnoredDuringExecution   soft constraint
  podAffinityTerm: the affinity term
    namespaces: namespace of the reference Pods; if omitted, defaults to the namespace of the Pod being scheduled
    topologyKey: scope of the scheduling decision
    labelSelector: label selector
      matchExpressions: selector requirements listed by key
        key: label key
        values: label values
        operator: operator
      matchLabels: shorthand for several matchExpressions entries
  weight: preference weight, 1-100
Example:
Soft affinity:
apiVersion: v1
kind: Pod
metadata: # metadata
name: pods-1 # Pod name
namespace: dev # namespace
spec:
containers: # containers
- name: my-tomcat # container name
image: tomcat # image to pull
imagePullPolicy: IfNotPresent # use the local image if present, otherwise pull
affinity:
podAffinity: # pod affinity
preferredDuringSchedulingIgnoredDuringExecution: # soft constraint
- weight: 1 # weight of 1
podAffinityTerm: # the concrete pod affinity condition
labelSelector: # label selector
matchExpressions: # one or more label match expressions
- key: user # label key
operator: In
values: # label values
- "qqqq"
topologyKey: kubernetes.io/hostname # distinguish by host
This Pod will preferably be scheduled onto a node that already runs a Pod labelled user=qqqq.
Hard affinity:
apiVersion: v1
kind: Pod
metadata:
name: pod-5
namespace: dev
spec:
containers:
- name: my-tomcat
image: tomcat
imagePullPolicy: IfNotPresent
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution: # hard constraint
- labelSelector: # label selector
matchExpressions: # match list
- key: user
operator: In
values: ["qqqq"]
topologyKey: kubernetes.io/hostname # distinguish by host
3. Anti-affinity
Do not schedule onto the node where the reference Pod runs; schedule onto some other node instead.
Example:
[root@master mnt]# cat podaff.yaml
apiVersion: v1
kind: Pod
metadata:
name: podaff
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: podenv
operator: In
values: ["pro"]
topologyKey: kubernetes.io/hostname
The Pod ends up on node2:
[root@master mnt]# kubectl get pods -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podaff 1/1 Running 0 61m 10.244.2.14 node1 <none> <none>
podaff 1/1 Running 0 2m57s 10.244.1.12 node2 <none> <none>
3: Taints
So far everything was configured from the Pod's point of view. We can also work from the node's point of view and decide whether Pods are allowed to be scheduled there; this information set on the node is called a taint.
A taint is a rejection policy.
What taints do:
They can refuse to let Pods be scheduled onto the node,
and can even evict Pods that are already running there.
Taint format:
key=value:effect
key and value are the taint's label; effect describes what the taint does.
The three effect values are:
PreferNoSchedule: k8s tries to avoid scheduling Pods onto a node with this taint, unless no other node is available
NoSchedule: k8s will not schedule new Pods onto the node, but Pods already running on it are not affected
NoExecute: k8s will not schedule new Pods onto the node and also evicts the Pods already running there, leaving no Pods at all

Setting taints:
# set a taint
[root@master mnt]# kubectl taint nodes node1 key=value:effect
# remove a taint
[root@master mnt]# kubectl taint nodes node1 key:effect-
# remove all taints with this key
[root@master mnt]# kubectl taint nodes node1 key-
Example:
Use only node1 and temporarily shut down node2.
Give node1 the taint tag=heima:PreferNoSchedule, then create pod1.
Change node1's taint to tag=heima:NoSchedule, then create pod2: no new Pods are accepted, but the existing ones stay.
Change node1's taint to tag=heima:NoExecute, then create pod3: pod3 is not created and the existing Pods are evicted, so no Pods are left.
# shut down node2 first
# set the taint on node1
[root@master mnt]# kubectl taint nodes node1 tag=heima:PreferNoSchedule
node/node1 tainted
# check the taint
[root@master mnt]# kubectl describe nodes -n dev node1| grep heima
Taints: tag=heima:PreferNoSchedule
# the first pod can run
[root@master mnt]# kubectl run taint1 --image=nginx:1.17.1 -n dev
pod/taint1 created
[root@master mnt]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-podaff 1/1 Running 0 90m
podaff 1/1 Terminating 0 31m
taint1 1/1 Running 0 6s
# change node1's taint
[root@master mnt]# kubectl taint nodes node1 tag=heima:PreferNoSchedule-
node/node1 untainted
[root@master mnt]# kubectl taint nodes node1 tag=heima:NoSchedule
node/node1 tainted
# the first pod keeps running, the second cannot be scheduled
[root@master mnt]# kubectl run taint2 --image=nginx:1.17.1 -n dev
pod/taint2 created
[root@master mnt]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-podaff 1/1 Running 0 94m
podaff 1/1 Terminating 0 35m
taint1 1/1 Running 0 3m35s
taint2 0/1 Pending 0 3s
# the third taint level
[root@master mnt]# kubectl taint nodes node1 tag=heima:NoSchedule-
node/node1 untainted
# set the new level
[root@master mnt]# kubectl taint nodes node1 tag=heima:NoExecute
node/node1 tainted
# new Pods cannot be created either
[root@master mnt]# kubectl run taint3 --image=nginx:1.17.1 -n dev
pod/taint3 created
[root@master mnt]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
podaff 1/1 Terminating 0 39m
taint3 0/1 Pending 0 4s
This is also why newly created Pods are never scheduled onto the master node: it carries a taint.
4: Tolerations
A toleration means ignoring a taint: the node has a taint, but the Pod has a toleration for it, so the Pod can still be scheduled there.

Example:
apiVersion: v1
kind: Pod
metadata:
name: pod-aff
namespace: dev
spec:
containers:
- name: main-container
image: nginx:1.17.1
tolerations: # add a toleration
- key: "tag" # key of the taint to tolerate
  operator: "Equal" # operator
  value: "heima" # value of the taint to tolerate
  effect: "NoExecute" # effect to tolerate; must match the effect set on the node
# first create a Pod without the toleration and see whether it can be scheduled
# it cannot be scheduled
[root@master mnt]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-aff 0/1 Pending 0 6s
podaff 1/1 Terminating 0 55m
# now create the Pod with the toleration
[root@master mnt]# kubectl create -f to.yaml
pod/pod-aff created
[root@master mnt]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pod-aff 1/1 Running 0 3s
podaff 1/1 Terminating 0 57m
Toleration fields in detail:
key: key of the taint to tolerate; empty means match all keys
value: value of the taint to tolerate
operator: operator relating key and value, Equal (the default) or Exists (Exists matches any value for the key)
effect: the taint effect to tolerate; empty means match all effects
tolerationSeconds: how long the Pod may stay on the node once the taint applies; only effective when effect is NoExecute
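A hedged sketch combining these fields (the values are illustrative): tolerate any taint whose key is tag, but stay on the node for at most 60 seconds after a NoExecute taint appears.
tolerations:
- key: "tag"               # any taint whose key is tag ...
  operator: "Exists"       # ... regardless of its value
  effect: "NoExecute"
  tolerationSeconds: 60    # evict this Pod 60s after the NoExecute taint is added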
Part 4: Pod controllers
1. Introduction to Pod controllers
1: Kinds of Pods
Standalone Pods: Pods created directly by k8s; once deleted they are gone and are not recreated.
Controller-managed Pods: Pods created through a controller; when such a Pod is deleted it is automatically recreated.
Purpose:
A Pod controller is an intermediate layer that manages Pods. With a controller we only need to say how many Pods we want; it creates Pods that satisfy the spec and keeps them in the desired state. If a running Pod fails, the controller restarts or recreates it according to its policy.
2: Controller types

ReplicaSet: keeps the specified number of Pods running and supports changing that number
Deployment: controls Pods by controlling a ReplicaSet; adds rolling updates and version rollback
Horizontal Pod Autoscaler: automatically adjusts the number of Pods according to cluster load
2: The controllers in detail
ReplicaSet (rs)
Keeps the specified number of Pods running correctly and continuously watches their state.
Supports scaling the number of Pods up and down.

Example: replica count

apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: pc-replicaset # name of the Pod controller
namespace: dev
spec:
replicas: 3 # number of Pods to create
selector: # Pod label selector: manages the Pods that carry the label app=nginx-pod
matchLabels: # label selector rules
app: nginx-pod
template: # the Pod template used to create the replicas
metadata: # Pod metadata
labels: # labels put on the Pods; must match the selector above
app: nginx-pod
spec:
containers: # containers
- name: nginx
image: nginx:1.17.1
# view the controller
[root@master ~]# kubectl get rs -n dev
NAME DESIRED CURRENT READY AGE
pc-replicaset 3 3 3 70s
DESIRED: the desired number of Pods
CURRENT: how many currently exist
READY: how many are ready to serve
# view the Pods
[root@master ~]# kubectl get rs,pods -n dev
NAME DESIRED CURRENT READY AGE
replicaset.apps/pc-replicaset 3 3 3 2m31s
NAME READY STATUS RESTARTS AGE
pod/pc-replicaset-448tq 1/1 Running 0 2m31s
pod/pc-replicaset-9tdhd 1/1 Running 0 2m31s
pod/pc-replicaset-9z64w 1/1 Running 0 2m31s
pod/pod-pod-affinity 1/1 Running 1 (47m ago) 12h
Example 2: scaling the Pods
# edit the manifest in place with kubectl edit
[root@master ~]# kubectl edit rs -n dev pc-replicaset
replicaset.apps/pc-replicaset edited
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pc-replicaset-448tq 1/1 Running 0 10m
pc-replicaset-9tdhd 1/1 Running 0 10m
pc-replicaset-9z64w 1/1 Running 0 10m
pc-replicaset-q6ps9 1/1 Running 0 94s
pc-replicaset-w5krn 1/1 Running 0 94s
pc-replicaset-zx8gw 1/1 Running 0 94s
pod-pod-affinity 1/1 Running 1 (55m ago) 12h
[root@master ~]# kubectl get rs -n dev
NAME DESIRED CURRENT READY AGE
pc-replicaset 6 6 6 10m
# the second way: kubectl scale
[root@master ~]# kubectl scale rs -n dev pc-replicaset --replicas=2 -n dev
replicaset.apps/pc-replicaset scaled
[root@master ~]# kubectl get rs,pod -n dev
NAME DESIRED CURRENT READY AGE
replicaset.apps/pc-replicaset 2 2 2 12m
NAME READY STATUS RESTARTS AGE
pod/pc-replicaset-448tq 1/1 Running 0 12m
pod/pc-replicaset-9tdhd 1/1 Running 0 12m
pod/pod-pod-affinity 1/1 Running 1 (57m ago) 12h
Example 3: upgrading the image version
# edit the image version
[root@master ~]# kubectl edit rs -n dev pc-replicaset
replicaset.apps/pc-replicaset edited
[root@master ~]# kubectl get rs -n dev pc-replicaset -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
pc-replicaset 2 2 2 15m nginx nginx:1.17.2 app=nginx-pod
# this can also be done on the command line, but kubectl edit is usually enough
[root@master ~]# kubectl get rs -n dev -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
pc-replicaset 2 2 2 17m nginx nginx:1.17.1 app=nginx-pod
Example 4: deleting the ReplicaSet
Deleting the controller first deletes its Pods and then the controller itself.
# delete via the manifest file
[root@master ~]# kubectl delete -f replicas.yaml
replicaset.apps "pc-replicaset" deleted
[root@master ~]# kubectl get rs -n dev
No resources found in dev namespace.
# delete via the command line
[root@master ~]# kubectl delete rs -n dev pc-replicaset
replicaset.apps "pc-replicaset" deleted
[root@master ~]# kubectl get rs -n dev
No resources found in dev namespace.
Deployment (deploy)

Supports everything a ReplicaSet can do.
Keeps historical versions, so you can roll back.
Rolling update strategy.
Update strategies:
Example: creating a Deployment
[root@master ~]# cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: pc-deployment
namespace: dev
spec:
replicas: 3
selector:
matchLabels:
app: nginx-pod
template:
metadata:
labels:
app: nginx-pod
spec:
containers:
- name: nginx
image: nginx:1.17.1
[root@master ~]# kubectl get deploy -n dev
NAME READY UP-TO-DATE AVAILABLE AGE
pc-deployment 3/3 3 3 53s
UP-TO-DATE: number of Pods running the latest version
AVAILABLE: number of Pods currently available
# a ReplicaSet is created as well
[root@master ~]# kubectl get rs -n dev
NAME DESIRED CURRENT READY AGE
pc-deployment-6cb555c765 3 3 3 2m9s
Scaling:
This works basically the same way as before.
# with the command line
[root@master ~]# kubectl scale deployment -n dev pc-deployment --replicas=5
deployment.apps/pc-deployment scaled
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pc-deployment-6cb555c765-8qc9g 1/1 Running 0 4m52s
pc-deployment-6cb555c765-8xss6 1/1 Running 0 4m52s
pc-deployment-6cb555c765-m7wdf 1/1 Running 0 4s
pc-deployment-6cb555c765-plkbf 1/1 Running 0 4m52s
pc-deployment-6cb555c765-qh6gk 1/1 Running 0 4s
pod-pod-affinity 1/1 Running 1 (81m ago) 13h
# by editing the manifest
[root@master ~]# kubectl edit deployments.apps -n dev pc-deployment
deployment.apps/pc-deployment edited
[root@master ~]# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
pc-deployment-6cb555c765-8qc9g 1/1 Running 0 5m41s
pc-deployment-6cb555c765-8xss6 1/1 Running 0 5m41s
pc-deployment-6cb555c765-plkbf 1/1 Running 0 5m41s
pod-pod-affinity 1/1 Running 1 (82m ago) 13h
Image updates
There are two strategies: recreate and rolling update.
Recreate:
All old-version Pods are deleted at once, then the new-version Pods are created.
RollingUpdate (the default):
A portion of the Pods is replaced at a time, so old-version Pods gradually decrease while new-version Pods increase.

# Recreate strategy
# first create the Pods and watch them in real time
[root@master ~]# cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: pc-deployment
namespace: dev
spec:
strategy:
type: Recreate
replicas: 3
selector:
matchLabels:
app: nginx-pod
template:
metadata:
labels:
app: nginx-pod
spec:
containers:
- name: nginx
image: nginx:1.17.1
[root@master ~]# kubectl get pods -n dev -w
# then update the image version
[root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.2 -n dev
# watch what happens
pc-deployment-6cb555c765-m92t8 0/1 Terminating 0 60s
pc-deployment-6cb555c765-m92t8 0/1 Terminating 0 60s
pc-deployment-6cb555c765-m92t8 0/1 Terminating 0 60s
pc-deployment-5967bb44bb-bbkzz 0/1 Pending 0 0s
pc-deployment-5967bb44bb-bbkzz 0/1 Pending 0 0s
pc-deployment-5967bb44bb-kxrn5 0/1 Pending 0 0s
pc-deployment-5967bb44bb-zxfwl 0/1 Pending 0 0s
pc-deployment-5967bb44bb-kxrn5 0/1 Pending 0 0s
pc-deployment-5967bb44bb-zxfwl 0/1 Pending 0 0s
pc-deployment-5967bb44bb-bbkzz 0/1 ContainerCreating 0 0s
pc-deployment-5967bb44bb-kxrn5 0/1 ContainerCreating 0 0s
pc-deployment-5967bb44bb-zxfwl 0/1 ContainerCreating 0 0s
pc-deployment-5967bb44bb-kxrn5 1/1 Running 0 1s
Rolling update:
[root@master ~]# cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: pc-deployment
namespace: dev
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
replicas: 3
selector:
matchLabels:
app: nginx-pod
template:
metadata:
labels:
app: nginx-pod
spec:
containers:
- name: nginx
image: nginx:1.17.1
# update the image; the rolling update proceeds
[root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.3 -n dev
deployment.apps/pc-deployment image updated

Summary:
When the image version is updated, a new ReplicaSet is created while the old one is kept; Pods are created in the new RS and removed from the old one, one at a time, until the old RS has no Pods and the new RS has them all.
The old RS is kept so that the version can be rolled back.
Version rollback:

kubectl rollout undo rolls back to the previous version.
# record the whole deployment update history
[root@master ~]# kubectl create -f deploy.yaml --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/pc-deployment created
# every version update now leaves a history entry
[root@master ~]# kubectl edit deployments.apps -n dev pc-deployment
deployment.apps/pc-deployment edited
[root@master ~]# kubectl rollout history deployment -n dev pc-deployment
deployment.apps/pc-deployment
REVISION CHANGE-CAUSE
1 kubectl create --filename=deploy.yaml --record=true
2 kubectl create --filename=deploy.yaml --record=true
3 kubectl create --filename=deploy.yaml --record=true
# roll back to a specific revision; without --to-revision it rolls back to the previous one
[root@master ~]# kubectl rollout undo deployment -n dev pc-deployment --to-revision=1
deployment.apps/pc-deployment rolled back
# the ReplicaSets change too: the Pods move back into the old RS
[root@master ~]# kubectl get rs -n dev
NAME DESIRED CURRENT READY AGE
pc-deployment-5967bb44bb 0 0 0 4m11s
pc-deployment-6478867647 0 0 0 3m38s
pc-deployment-6cb555c765 3 3 3 5m28s
[root@master ~]# kubectl rollout history deployment -n dev
deployment.apps/pc-deployment
REVISION CHANGE-CAUSE
2 kubectl create --filename=deploy.yaml --record=true
3 kubectl create --filename=deploy.yaml --record=true
4 kubectl create --filename=deploy.yaml --record=true
# revision 4 is effectively the old revision 1
Canary release:
A Deployment lets you control the update process: you can pause and resume an update.
During the update only a small share of the Pods run the new version while most still run the old one; some requests are routed to the new version. If it cannot handle them you roll back immediately, and if it can you continue the update. This is called a canary release.
# update and pause immediately
[root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.2 -n dev && kubectl rollout pause deployment -n dev pc-deployment
deployment.apps/pc-deployment image updated
deployment.apps/pc-deployment paused
# the ReplicaSets
[root@master ~]# kubectl get rs -n dev
NAME DESIRED CURRENT READY AGE
pc-deployment-5967bb44bb 1 1 1 21m
pc-deployment-6478867647 0 0 0 20m
pc-deployment-6cb555c765 3 3 3 22m
# one Pod has already been updated
[root@master ~]# kubectl rollout status deployment -n dev
Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
# send a request to test the new Pod
# continue the update
[root@master ~]# kubectl rollout resume deployment -n dev pc-deployment
deployment.apps/pc-deployment resumed
# check the status
[root@master ~]# kubectl rollout status deployment -n dev
Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "pc-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "pc-deployment" successfully rolled out
# check the ReplicaSets
[root@master ~]# kubectl get rs -n dev
NAME DESIRED CURRENT READY AGE
pc-deployment-5967bb44bb 3 3 3 24m
pc-deployment-6478867647 0 0 0 24m
pc-deployment-6cb555c765 0 0 0 26m
HPA controller (Horizontal Pod Autoscaler)


In short, the HPA obtains each Pod's resource utilization and compares it with the target defined in the HPA; if utilization is above the target, Pods are added automatically, and when traffic drops again the extra Pods are removed.
It watches the Pods' load and scales the number of Pods up and down accordingly.
A component has to be installed to obtain the Pods' load:
metrics-server collects resource usage in the cluster; both Pods and nodes can be monitored.
# download the latest manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.3/components.yaml
# on every node, pull the corresponding image from the Aliyun mirror
ctr image pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.3
# edit the manifest
containers:
- args:
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --metric-resolution=15s
  - --kubelet-insecure-tls            # skip certificate verification
  image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.3   # use the image pulled from the Aliyun mirror
# apply the manifest
kubectl apply -f components.yaml
# check the result
[root@master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66f779496c-88c5b 1/1 Running 33 (55m ago) 10d
coredns-66f779496c-hcpp5 1/1 Running 33 (55m ago) 10d
etcd-master 1/1 Running 14 (55m ago) 10d
kube-apiserver-master 1/1 Running 14 (55m ago) 10d
kube-controller-manager-master 1/1 Running 14 (55m ago) 10d
kube-proxy-95x52 1/1 Running 14 (55m ago) 10d
kube-proxy-h2qrf 1/1 Running 14 (55m ago) 10d
kube-proxy-lh446 1/1 Running 15 (55m ago) 10d
kube-scheduler-master 1/1 Running 14 (55m ago) 10d
metrics-server-6779c94dff-dflh2 1/1 Running 0 2m6s
View resource usage:
# node usage
[root@master ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master 104m 5% 1099Mi 58%
node1 21m 1% 335Mi 17%
node2 22m 1% 305Mi 16%
# pod usage
[root@master ~]# kubectl top pods -n dev
NAME CPU(cores) MEMORY(bytes)
pod-aff 3m 83Mi
pod-label 0m 1Mi
For the HPA to work, the Pods must have resource requests defined;
after that a single command (or the small HPA manifest below) is enough.
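The figure that belonged here is missing; for reference, a hedged command-line equivalent of the HPA manifest below is kubectl autoscale (the target percentage and limits mirror the example that follows):
[root@master ~]# kubectl autoscale deployment nginx -n dev --cpu-percent=3 --min=1 --max=10
horizontalpodautoscaler.autoscaling/nginx autoscaled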

Test:
[root@master ~]# cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: dev
spec:
replicas: 1 # one replica
selector:
matchLabels:
app: nginx-pod # label selector
template:
metadata:
labels:
app: nginx-pod
spec:
containers:
- name: nginx
image: nginx:1.17.1
resources:
requests:
cpu: 100m # needs at least 100 millicores to start
# create the deployment
kubectl create -f deploy.yaml
# create a service
kubectl expose deployment nginx --type=NodePort --port=80 -n dev
# create an HPA
[root@master ~]# cat hpa.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: pc-hpa
namespace: dev
spec:
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 3 # target CPU utilization of 3%, deliberately low for testing
scaleTargetRef: # the controller to scale
apiVersion: apps/v1
kind: Deployment # a Deployment
name: nginx
# view the HPA
[root@master ~]# kubectl get hpa -n dev
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
pc-hpa Deployment/nginx <unknown>/3% 1 10 0 5s
[root@master ~]# kubectl get hpa -n dev
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
pc-hpa Deployment/nginx 0%/3% 1 10 1 114s
# generate load so that CPU utilization goes above 3%
[root@master ~]# cat f.sh
while true
do
curl 192.168.109.100:30843 &> /dev/null
done
[root@master ~]# kubectl get hpa -n dev -w
pc-hpa Deployment/nginx 1%/3% 1 10 1 22m
pc-hpa Deployment/nginx 0%/3% 1 10 1 22m
pc-hpa Deployment/nginx 42%/3% 1 10 1 25m
pc-hpa Deployment/nginx 92%/3% 1 10 4 25m
pc-hpa Deployment/nginx 23%/3% 1 10 8 25m
pc-hpa Deployment/nginx 0%/3% 1 10 10 26m
[root@master ~]# kubectl get deployment -n dev -w
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 39m
nginx 1/4 1 1 60m
nginx 1/4 1 1 60m
nginx 1/4 1 1 60m
nginx 1/4 4 1 60m
nginx 2/4 4 2 60m
nginx 3/4 4 3 60m
nginx 4/4 4 4 60m
nginx 4/8 4 4 60m
nginx 4/8 4 4 60m
nginx 4/8 4 4 60m
nginx 4/8 8 4 60m
nginx 5/8 8 5 60m
nginx 6/8 8 6 60m
nginx 7/8 8 7 60m
nginx 8/8 8 8 60m
nginx 8/10 8 8 61m
nginx 8/10 8 8 61m
nginx 8/10 8 8 61m
nginx 8/10 10 8 61m
nginx 9/10 10 9 61m
nginx 10/10 10 10 61m
[root@master ~]# kubectl get pod -n dev -w
nginx-7f89875f58-gt67w 0/1 Pending 0 0s
nginx-7f89875f58-gt67w 0/1 Pending 0 0s
nginx-7f89875f58-545rj 0/1 Pending 0 0s
nginx-7f89875f58-gt67w 0/1 ContainerCreating 0 0s
nginx-7f89875f58-545rj 0/1 Pending 0 0s
nginx-7f89875f58-545rj 0/1 ContainerCreating 0 0s
nginx-7f89875f58-545rj 1/1 Running 0 1s
nginx-7f89875f58-gt67w 1/1 Running 0 1s
# when the traffic drops again, the extra Pods are removed automatically; it just takes a little while
DaemonSet (ds) controller

Runs exactly one replica on every node, i.e. it is node-level; typically used for log collection, node monitoring and the like.
When a node is removed, its Pod naturally disappears as well.

Example:
[root@master ~]# cat daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: daemon
namespace: dev
spec:
selector:
matchLabels:
app: nginx-pod
template:
metadata:
labels:
app: nginx-pod
spec:
containers:
- name: nginx
image: nginx:1.17.1
[root@master ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
daemon-g8b4v 1/1 Running 0 2m30s 10.244.1.102 node2 <none> <none>
daemon-t5tmd 1/1 Running 0 2m30s 10.244.2.89 node1 <none> <none>
nginx-7f89875f58-prf9c 1/1 Running 0 79m 10.244.2.84 node1 <none> <none>
# there is one Pod on every node
Job controller

Batch processing (run a specified number of tasks) and one-off tasks (each task runs only once and then finishes).
When a Pod created by a Job finishes successfully, the Job records the number of successfully completed Pods.
When the number of successful Pods reaches the specified count, the Job is complete.
Jobs are meant for one-off workloads.

Restart policy: it cannot be set to Always here, because these are one-off tasks and Always would restart them after they finish.
Only OnFailure and Never are allowed:
OnFailure: when the Pod fails, restart the container (the Pod is not recreated); the failed count does not change
Never: on failure the Pod is neither restarted nor removed, and the failed count increases by 1
Example:
[root@master ~]# cat jod.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: pc-job
namespace: dev
spec:
manualSelector: true
completions: 6 # run 6 Pods in total
parallelism: 3 # run 3 at a time, so it finishes in 2 rounds
selector:
matchLabels:
app: counter-pod
template:
metadata:
labels:
app: counter-pod
spec:
restartPolicy: Never
containers:
- name: busybox
image: busybox:1.30
command: ["/bin/sh","-c","for i in 1 2 3 4 5 6 7 8 9;do echo $i;sleep 3;done"]
[root@master ~]# kubectl get job -n dev -w
NAME COMPLETIONS DURATION AGE
pc-job 0/6 0s
pc-job 0/6 0s 0s
pc-job 0/6 2s 2s
pc-job 0/6 29s 29s
pc-job 0/6 30s 30s
pc-job 3/6 30s 30s
pc-job 3/6 31s 31s
pc-job 3/6 32s 32s
pc-job 3/6 59s 59s
pc-job 3/6 60s 60s
pc-job 6/6 60s 60s
[root@master ~]# kubectl get pod -n dev -w
NAME READY STATUS RESTARTS AGE
daemon-g8b4v 1/1 Running 0 20m
daemon-t5tmd 1/1 Running 0 20m
nginx-7f89875f58-prf9c 1/1 Running 0 97m
pc-job-z2gmb 0/1 Pending 0 0s
pc-job-z2gmb 0/1 Pending 0 0s
pc-job-z2gmb 0/1 ContainerCreating 0 0s
pc-job-z2gmb 1/1 Running 0 1s
pc-job-z2gmb 0/1 Completed 0 28s
pc-job-z2gmb 0/1 Completed 0 29s
pc-job-z2gmb 0/1 Completed 0 30s
pc-job-z2gmb 0/1 Completed 0 30s
CronJob (cj) controller

Runs Job tasks periodically at the specified times.
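The schedule field uses standard cron syntax; a short annotated sketch of the expression used in the example below:
# schedule: "*/1 * * * *"
#   field 1: minute (0-59)       */1 = every minute
#   field 2: hour (0-23)
#   field 3: day of month (1-31)
#   field 4: month (1-12)
#   field 5: day of week (0-6, Sunday = 0)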

Example:
[root@master ~]# cat cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: pc-cronjob
namespace: dev
labels:
controller: cronjob
spec:
schedule: "*/1 * * * *"
jobTemplate:
metadata:
name: pc-cronjob
labels:
controller: cronjob
spec:
template:
spec:
restartPolicy: Never
containers:
- name: counter
image: busybox:1.30
command: ["/bin/sh","-c","for i in 1 2 3 4 5 6 7 8 9;do echo$i;sleep 3;done"]
[root@master ~]# kubectl get job -n dev -w
NAME COMPLETIONS DURATION AGE
pc-cronjob-28604363 0/1 21s 21s
pc-job 6/6 60s 33m
pc-cronjob-28604363 0/1 28s 28s
pc-cronjob-28604363 0/1 29s 29s
pc-cronjob-28604363 1/1 29s 29s
pc-cronjob-28604364 0/1 0s
pc-cronjob-28604364 0/1 0s 0s
pc-cronjob-28604364 0/1 1s 1s
pc-cronjob-28604364 0/1 29s 29s
pc-cronjob-28604364 0/1 30s 30s
pc-cronjob-28604364 1/1 30s 30s
^C[root@master ~]#
[root@master ~]# kubectl get pod -n dev -w
NAME READY STATUS RESTARTS AGE
daemon-g8b4v 1/1 Running 0 57m
daemon-t5tmd 1/1 Running 0 57m
nginx-7f89875f58-prf9c 1/1 Running 0 134m
pc-job-2p6p6 0/1 Completed 0 32m
pc-job-62z2d 0/1 Completed 0 32m
pc-job-6sm97 0/1 Completed 0 32m
pc-job-97j4j 0/1 Completed 0 31m
pc-job-lsjz5 0/1 Completed 0 31m
pc-job-pt28s 0/1 Completed 0 31m
[root@master ~]# kubectl get pod -n dev -w
pc-cronjob-28604363-fcnvr 0/1 Pending 0 0s
pc-cronjob-28604363-fcnvr 0/1 Pending 0 0s
pc-cronjob-28604363-fcnvr 0/1 ContainerCreating 0 0s
pc-cronjob-28604363-fcnvr 1/1 Running 0 0s
pc-cronjob-28604363-fcnvr 0/1 Completed 0 27s
pc-cronjob-28604363-fcnvr 0/1 Completed 0 29s
pc-cronjob-28604363-fcnvr 0/1 Completed 0 29s
# after a job finishes, the next one runs a minute later
Part 5: Service in detail
The traffic-routing components are Service and Ingress:
Service does layer-4 load balancing; Ingress does layer-7 load balancing.
1. Service introduction
A Pod has an IP address, but it is not fixed. A Service therefore acts as a proxy for a set of Pods: it has its own IP address, and you access the Pods through it.
A Service is essentially a label-selector mechanism.

The kube-proxy component

The core of a Service is kube-proxy. When a Service is created, the API server stores the Service information in etcd; kube-proxy watches for such changes and turns the Service definition into forwarding rules.
Viewing the rules:

kube-proxy supports three modes:
userspace mode:

kube-proxy opens a listening port for every Service; requests to the Service IP are redirected by iptables rules to that port, and kube-proxy picks a backend Pod according to the load-balancing algorithm and forwards the request to it.
Here kube-proxy itself acts as the load balancer.
Drawback: it is inefficient, because forwarding crosses between kernel space and user space.
iptables mode:

Requests no longer pass through kube-proxy; the cluster IP is handled purely by iptables rules, which pick a backend Pod (round robin / random) and forward to it.
Drawback: no real load balancing or retry; if a backend is broken, the user simply gets an error page.
ipvs mode:

Enable the ipvs module:
# edit the kube-proxy ConfigMap and set mode to "ipvs"
[root@master /]# kubectl edit cm kube-proxy -n kube-system
# delete the existing kube-proxy Pods (selected by label) so they restart with the new mode
[root@master /]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
[root@master /]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.17.0.1:30203 rr          # rr = round robin: requests to this address are forwarded to the backends below
  -> 10.244.2.103:80 Masq 1 0 0
TCP 192.168.109.100:30203 rr
  -> 10.244.2.103:80 Masq 1 0 0
TCP 10.96.0.1:443 rr
  -> 192.168.109.100:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
  -> 10.244.0.44:53 Masq 1 0 0
  -> 10.244.0.45:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
  -> 10.244.0.44:9153 Masq 1 0 0
  -> 10.244.0.45:9153 Masq 1 0 0
TCP 10.100.248.78:80 rr
  -> 10.244.2.103:80 Masq 1 0 0
TCP 10.110.118.76:443 rr
  -> 10.244.1.108:10250 Masq 1 0 0
  -> 10.244.2.102:10250 Masq 1 0 0
TCP 10.244.0.0:30203 rr
2: Service types

The label selector is only the surface; in essence a Service is a set of forwarding rules, and the labels determine the Pod IPs behind it.
Session affinity: without it, requests are distributed across all the Pods; in the special case where multiple requests from the same client must reach the same Pod, session affinity is needed.
type: the Service type
ClusterIP: the default; a virtual IP assigned by k8s, reachable only from inside the cluster
NodePort: exposes the Service on a port of the nodes, so it can be reached from outside the cluster
LoadBalancer: uses an external load balancer to distribute the traffic; requires a cloud environment
ExternalName: brings a service from outside the cluster inside, so it can be used directly
1. Environment preparation

Three Pods, created by a Deployment controller:
[root@master ~]# cat service-example.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: pc-deployment
namespace: dev
spec:
replicas: 3
selector:
matchLabels:
app: nginx-pod
template:
metadata:
labels:
app: nginx-pod
spec:
containers:
- name: nginx
image: nginx:1.17.1
ports:
- containerPort: 80
[root@master ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pc-deployment-5cb65f68db-959hm 1/1 Running 0 62s 10.244.2.104 node1 <none> <none>
pc-deployment-5cb65f68db-h6v8r 1/1 Running 0 62s 10.244.1.110 node2 <none> <none>
pc-deployment-5cb65f68db-z4k2f 1/1 Running 0 62s 10.244.2.105 node1 <none> <none>
# access the Pod IP plus the container port
[root@master ~]# curl 10.244.2.104:80
Change each Pod's index page so we can see which Pod answers a request; do this for each Pod in turn:
[root@master ~]# kubectl exec -ti -n dev pc-deployment-5cb65f68db-h6v8r /bin/bash
root@pc-deployment-5cb65f68db-z4k2f:/# echo 10.244.2.10 > /usr/share/nginx/html/index.html
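Doing this by hand for every Pod is tedious; a hedged shell loop that writes each Pod's own IP into its index page (it assumes the app=nginx-pod label used by the Deployment above):
for p in $(kubectl get pods -n dev -l app=nginx-pod -o jsonpath='{.items[*].metadata.name}'); do
  ip=$(kubectl get pod $p -n dev -o jsonpath='{.status.podIP}')        # this Pod's IP
  kubectl exec -n dev $p -- /bin/sh -c "echo $ip > /usr/share/nginx/html/index.html"
done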
2. ClusterIP Services

The Service port can be chosen freely.
[root@master ~]# cat ClusterIP.yaml
apiVersion: v1
kind: Service
metadata:
name: service-clusterip
namespace: dev
spec:
selector: # Service label selector
app: nginx-pod
clusterIP: 10.96.0.100 # if omitted, an IP is generated automatically
type: ClusterIP
ports:
- port: 80 # Service port
targetPort: 80 # Pod port
[root@master ~]# kubectl create -f ClusterIP.yaml
service/service-clusterip created
[root@master ~]# kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-clusterip ClusterIP 10.96.0.100 <none> 80/TCP 2m7s
# view the Service details
[root@master ~]# kubectl describe svc service-clusterip -n dev
Name: service-clusterip
Namespace: dev
Labels: <none>
Annotations: <none>
Selector: app=nginx-pod
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.0.100
IPs: 10.96.0.100
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.110:80,10.244.2.104:80,10.244.2.105:80 # the link between the Service and its Pods, built from the label selector; it records the Pods' addresses, i.e. the actual set of endpoints behind the Service
Session Affinity: None
Events: <none>
[root@master ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pc-deployment-5cb65f68db-959hm 1/1 Running 0 25m 10.244.2.104 node1 <none> <none>
pc-deployment-5cb65f68db-h6v8r 1/1 Running 0 25m 10.244.1.110 node2 <none> <none>
pc-deployment-5cb65f68db-z4k2f 1/1 Running 0 25m 10.244.2.105 node1 <none> <none>
[root@master ~]# kubectl get endpoints -n dev
NAME ENDPOINTS AGE
service-clusterip 10.244.1.110:80,10.244.2.104:80,10.244.2.105:80 4m48s
What actually does the work is kube-proxy: when the Service is created, the corresponding rules are created.
[root@master ~]# ipvsadm -Ln
TCP 10.96.0.100:80 rr
-> 10.244.1.110:80 Masq 1 0 0
-> 10.244.2.104:80 Masq 1 0 0
-> 10.244.2.105:80 Masq 1 0 0
# send requests and see who answers; looping shows the round-robin behaviour
[root@master ~]# while true;do curl 10.96.0.100:80; sleep 5;done;
10.244.2.105
10.244.2.104
10.244.1.110
10.244.2.105
10.244.2.104
10.244.1.110
Access the Service IP and its port.
Load distribution policy (session affinity):
By default, requests are distributed round robin or randomly.
With session affinity configured, requests from the same client go to the same Pod instead of being distributed round robin or randomly.
# configure session affinity
[root@master ~]# cat ClusterIP.yaml
apiVersion: v1
kind: Service
metadata:
name: service-clusterip
namespace: dev
spec:
sessionAffinity: ClientIP # requests from the same client IP go to the same Pod
selector:
app: nginx-pod
clusterIP: 10.96.0.100
type: ClusterIP
ports:
- port: 80
targetPort: 80
[root@master ~]# kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-clusterip ClusterIP 10.96.0.100 <none> 80/TCP 78s
[root@master ~]# ipvsadm -Ln
TCP 10.96.0.100:80 rr persistent 10800    # persistent (session affinity)
-> 10.244.1.112:80 Masq 1 0 0
-> 10.244.2.107:80 Masq 1 0 0
-> 10.244.2.108:80 Masq 1 0 0
A ClusterIP Service can only be reached from the cluster's nodes, i.e. from inside the cluster; your own computer cannot reach this IP.
[root@master ~]# curl 10.96.0.100:80
10.244.2.108
[root@master ~]# curl 10.96.0.100:80
10.244.2.108
[root@master ~]# curl 10.96.0.100:80
10.244.2.108
3. Headless Services
A ClusterIP Service load-balances randomly / round robin by default. If you want to control the distribution yourself, use a headless Service: it is not assigned a cluster IP, and the Service can only be reached through its DNS name.
[root@master ~]# cat headliness.yaml
apiVersion: v1
kind: Service
metadata:
name: service-headliness
namespace: dev
spec:
selector:
app: nginx-pod
clusterIP: None # setting None makes this a headless Service
type: ClusterIP
ports:
- port: 80
targetPort: 80
[root@master ~]# kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-headliness ClusterIP None <none> 80/TCP 4s
# check the DNS configuration inside a Pod
[root@master ~]# kubectl exec -ti -n dev pc-deployment-5cb65f68db-959hm /bin/bash
root@pc-deployment-5cb65f68db-959hm:/# cat /etc/resolv.conf
search dev.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
# access the headless Service by DNS name
# format: dig @<dns-server-ip> <service-name>.<namespace>.svc.cluster.local
[root@master ~]# dig @10.96.0.10 service-headliness.dev.svc.cluster.local
;; ANSWER SECTION:
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.2.108
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.1.112
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.2.107
4. NodePort Services

A NodePort Service maps the Service port onto a port of every node, so the Service can be reached via node IP + node port.
When a request arrives at the node port, it is forwarded to the Service port and from there to a Pod port.
This exposes the Service to the outside world.

Test:
[root@master ~]# cat nodeport.yaml
apiVersion: v1
kind: Service
metadata:
name: service-clusterip
namespace: dev
spec:
selector:
app: nginx-pod
type: NodePort # NodePort Service
ports:
- port: 80 # Service port
targetPort: 80 # Pod port
nodePort: 30002 # if omitted, a port is assigned from the default range 30000-32767
[root@master ~]# kubectl create -f nodeport.yaml
service/service-clusterip created
[root@master ~]# kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-clusterip NodePort 10.106.183.217 <none> 80:30002/TCP 4s
# accessing node IP + node port is mapped to the cluster IP + Service port
[root@master ~]# curl 192.168.109.100:30002
10.244.2.108
[root@master ~]# curl 192.168.109.101:30002
10.244.2.108
[root@master ~]# curl 192.168.109.102:30002
10.244.2.108
Now the Service, and through it the Pods, can be reached from outside.
5. LoadBalancer Services

Built on top of NodePort, with an external load-balancing device in front that distributes the traffic.
6. ExternalName Services

This maps the Service to an external service, here www.baidu.com.

[root@master ~]# cat service-external.yaml
apiVersion: v1
kind: Service
metadata:
name: service-externalname
namespace: dev
spec:
type: ExternalName
externalName: www.baidu.com
[root@master ~]# kubectl create -f service-external.yaml
service/service-externalname created
[root@master ~]# kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-clusterip NodePort 10.106.183.217 <none> 80:30002/TCP 17m
service-externalname ExternalName <none> www.baidu.com <none> 7s
# access the Service through DNS
[root@master ~]# dig @10.96.0.10 service-externalname.dev.svc.cluster.local
service-externalname.dev.svc.cluster.local. 30 IN CNAME www.baidu.com.
www.baidu.com. 30 IN CNAME www.a.shifen.com.
www.a.shifen.com. 30 IN A 180.101.50.188
www.a.shifen.com. 30 IN A 180.101.50.242
# the name resolves to the external service
3: Ingress introduction
A Service exposes itself to the outside world mainly through two types: NodePort and LoadBalancer.
Drawbacks:
NodePort occupies a port on the hosts; with many services in the cluster, that means many ports.
With LoadBalancer every Service needs its own LB, which is wasteful.


The user defines the rules for routing requests to Services; the Ingress controller watches them and converts them into an nginx configuration, which it applies to the nginx proxy dynamically.
1. Environment preparation

# download and apply the manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
[root@master ingress-example]# kubectl get pod,svc -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-jv5n5 0/1 Completed 0 77s
pod/ingress-nginx-admission-patch-tpfv6 0/1 Completed 0 77s
pod/ingress-nginx-controller-597dc6d68-rww45 1/1 Running 0 77s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.97.10.122 <none> 80:30395/TCP,443:32541/TCP 78s
service/ingress-nginx-controller-admission ClusterIP 10.96.17.67 <none> 443/TCP
Service and Deployment manifests: create 2 Services and 6 Pods.
[root@master ~]# cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: dev
spec:
replicas: 3
selector:
matchLabels:
app: nginx-pod
template:
metadata:
labels:
app: nginx-pod
spec:
containers:
- name: nginx
image: nginx:1.17.1
ports:
- containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcat-deployment
namespace: dev
spec:
replicas: 3
selector:
matchLabels:
app: tomcat-pod
template:
metadata:
labels:
app: tomcat-pod
spec:
containers:
- name: tomcat
image: tomcat:8.5-jre10-slim
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
namespace: dev
spec:
selector:
app: nginx-pod
clusterIP: None
type: ClusterIP
ports:
- port: 80
targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: tomcat-service
namespace: dev
spec:
selector:
app: tomcat-pod
type: ClusterIP
clusterIP: None
ports:
- port: 8080
targetPort: 8080
[root@master ~]# kubectl get deployments.apps,pod -n dev
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 3 3 86s
deployment.apps/tomcat-deployment 3/3 3 3 86s
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-5cb65f68db-5lzpb 1/1 Running 0 86s
pod/nginx-deployment-5cb65f68db-75h4m 1/1 Running 0 86s
pod/nginx-deployment-5cb65f68db-nc8pj 1/1 Running 0 86s
pod/tomcat-deployment-5dbff496f4-6msb2 1/1 Running 0 86s
pod/tomcat-deployment-5dbff496f4-7wjc9 1/1 Running 0 86s
pod/tomcat-deployment-5dbff496f4-wlgmm 1/1 Running 0 86s
2. HTTP proxying
Create a manifest with the routing rules.
Requests use domain + path: if the path is /xxx, the request must be domain/xxx.
The request is then forwarded to the corresponding Service and port.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-http
namespace: dev
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx # this Ingress is handled by the nginx ingress controller
rules: # a set of rules
- host: nginx.com
http:
paths:
- pathType: Prefix # path matching is prefix-based
path: / # matches every path starting with /
backend: # the backend service requests are forwarded to
service:
name: nginx-service
port:
number: 80 # port the backend service listens on
- host: tomcat.com
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: tomcat-service
port:
number: 8080
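A hedged way to test these rules from the master node without touching DNS: pass a Host header and go through the ingress controller's HTTP NodePort (30395 in the environment output above; confirm it with the first command):
[root@master ~]# kubectl get svc -n ingress-nginx ingress-nginx-controller    # confirm the HTTP NodePort
[root@master ~]# curl -H "Host: nginx.com"  http://192.168.109.100:30395/
[root@master ~]# curl -H "Host: tomcat.com" http://192.168.109.100:30395/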
3. HTTPS proxying
The TLS key and certificate must be generated in advance.
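The original notes stop here; a hedged sketch of the usual steps (a self-signed certificate, a TLS secret, and a tls section added to the Ingress; the file and secret names are placeholders):
# generate a self-signed certificate for the test domain
[root@master ~]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
    -keyout tls.key -out tls.crt -subj "/CN=nginx.com"
# store it in a TLS secret
[root@master ~]# kubectl create secret tls ingress-tls-secret --key tls.key --cert tls.crt -n dev
# an Ingress that terminates TLS (same rules as ingress-http, plus a tls section)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-https
  namespace: dev
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - nginx.com
    - tomcat.com
    secretName: ingress-tls-secret   # the secret created above
  rules:
  - host: nginx.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: nginx-service
            port:
              number: 80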

