Deploying a Highly Available Kubernetes Cluster with kubeadm
Installing Kubernetes with kubeadm
1. Cluster Types
# Kubernetes clusters broadly fall into two categories: single-master and multi-master.
# 1. Single master, multiple nodes:
One Master node and several Node machines. Simple to set up, but the master is a single point of failure, so this layout only suits test environments.
# 2. Multiple masters, multiple nodes:
Several Master nodes and several Node machines. More work to set up, but far more resilient, which makes it the right choice for production.
2. Installation Methods
Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
# Option 1: kubeadm
kubeadm is a Kubernetes deployment tool that provides kubeadm init and kubeadm join for quickly standing up a cluster.
# Option 2: binary packages
Download the release binaries from GitHub and deploy every component by hand to assemble the cluster.
kubeadm lowers the barrier to entry but hides a lot of detail, which makes problems harder to debug. If you want full control, deploy from binary packages: it is more manual work, but you learn how the pieces fit together, which also pays off in later maintenance. A sketch of the kubeadm workflow follows.
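The whole kubeadm flow reduces to two commands (placeholder values only; the exact flags used by this guide appear in the initialization section below):
kubeadm init ... # run once on the first master; it prints the join commands
kubeadm join <endpoint>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> # run on every other machine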
3. High-Availability Architecture Diagram

I. Environment Preparation (host machine with 16 GB+ RAM)
1. Software and System Requirements
| Software | Version |
|---|---|
| CentOS | CentOS Linux release 7.5 or later |
| Docker | 19.03.12 |
| Kubernetes | v1.21.3 |
| Flannel | v0.14.0 |
| kernel-lt | kernel-lt-5.4.137-1.el7.elrepo.x86_64.rpm |
| kernel-lt-devel | kernel-lt-devel-5.4.137-1.el7.elrepo.x86_64.rpm |
2. Node Planning
- Use the 192.168 range for node IPs to avoid clashing with the Kubernetes internal networks.
| Machine | IP | Specs | Kernel version |
|---|---|---|---|
| k8s-master1 | 192.168.15.111 | 2 CPU / 2 GB | 4.4+ |
| k8s-master2 | 192.168.15.112 | 2 CPU / 2 GB | 4.4+ |
| k8s-master3 | 192.168.15.113 | 2 CPU / 2 GB | 4.4+ |
| k8s-node1 | 192.168.15.114 | 2 CPU / 2 GB | 4.4+ |
| k8s-node2 | 192.168.15.115 | 2 CPU / 2 GB | 4.4+ |
II. Installing Kubernetes with kubeadm
Each server needs at least 2 CPUs and 2 GB of RAM. If a machine has less, append --ignore-preflight-errors=NumCPU to the cluster initialization command, for example:
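For example, on an under-sized test box (a sketch; init-config.yaml is the file created later in this guide):
[root@k8s-m-01 ~]# kubeadm init --config init-config.yaml --upload-certs --ignore-preflight-errors=NumCPU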
1. Kernel Tuning Script (all machines)
[root@k8s-m-01 ~]# vim base.sh
#!/bin/bash
# 1. Set the hostname and fix the NIC config
hostnamectl set-hostname $1 &&\
sed -i "s#111#$2#g" /etc/sysconfig/network-scripts/ifcfg-eth[01] &&\
systemctl restart network &&\
# 2. Disable SELinux, the firewall, and sshd DNS lookups
setenforce 0 &&\
sed -i 's#enforcing#disabled#g' /etc/selinux/config &&\
systemctl disable --now firewalld &&\
# Skip this if iptables-services is not installed
# systemctl disable --now iptables &&\
sed -i 's/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config &&\
systemctl restart sshd &&\
# 3. Turn off swap
# Once swap kicks in, system performance drops sharply, so Kubernetes requires swap to be off
# cat /etc/fstab
# Also comment out the swap line in /etc/fstab; skip this if no swap is configured
swapoff -a &&\
# Tell kubelet to tolerate swap, just in case
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet &&\
# 4. Update /etc/hosts (one entry per machine in the node plan above)
cat >>/etc/hosts <<EOF
192.168.15.111 k8s-m-01 m1
192.168.15.112 k8s-m-02 m2
192.168.15.113 k8s-m-03 m3
192.168.15.114 k8s-n-01 n1
192.168.15.115 k8s-n-02 n2
EOF
# 5. Configure yum mirrors (domestic)
# By default CentOS uses the official yum mirrors, which are very slow from inside China; swap in a mature domestic mirror such as Tsinghua or NetEase (Aliyun is used here)
rm -rf /etc/yum.repos.d/* &&\
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo &&\
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo &&\
yum clean all &&\
yum makecache &&\
# 6. Update the system
# Check the kernel version first; if it is already above 4.0 you can drop the --exclude option
yum update -y --exclude=kernel* &&\
# Docker needs fairly recent kernel features (ipvs and friends), so a 4.0+ kernel is required and 4.18+ is recommended; CentOS 8 already ships a new enough kernel and can skip the kernel update
# 7. Install common base tools for day-to-day use
yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables conntrack sysstat libseccomp ntpdate -y &&\
# 8. Download a newer kernel
# CentOS 8 does not need a kernel upgrade
cd /opt/ &&\
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.137-1.el7.elrepo.x86_64.rpm &&\
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.137-1.el7.elrepo.x86_64.rpm &&\
# Kernels below 4.0 have bugs that cause traffic jitter under heavy production load
# Mirror: https://elrepo.org/linux/kernel/el7/x86_64/RPMS/
# 9. Install the new kernel
yum localinstall /opt/kernel-lt* -y &&\
# 10. Make it the default boot kernel
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg &&\
# 11. Show the default kernel, then reboot
grubby --default-kernel &&\
reboot
# After the reboot the machine runs the 5.4 kernel
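Usage sketch (assuming, as the sed above implies, that the template ifcfg files contain the octet 111): run the script on each machine with its hostname and the final octet of its IP.
[root@k8s-m-01 ~]# bash base.sh k8s-m-01 111
[root@k8s-n-01 ~]# bash base.sh k8s-n-01 114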
2. Passwordless SSH and Time Sync (all machines)
# 1. Generate a key pair and copy it to every host
[root@k8s-master-01 ~]# ssh-keygen -t rsa
[root@k8s-master-01 ~]# for i in m1 m2 m3 n1 n2;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i;done
# In a cluster, time is critical: if one machine's clock drifts from the rest, the cluster can run into all kinds of problems. Synchronize every machine's clock before deploying.
Option 1: time sync with ntpdate
# 2. Add the sync to a cron job (crontab -e)
# refresh every 5 minutes
*/5 * * * * /usr/sbin/ntpdate ntp.aliyun.com &> /dev/null
Option 2: time sync with chrony
[root@k8s-m-01 ~]# yum -y install chrony
[root@k8s-m-01 ~]# systemctl enable --now chronyd
[root@k8s-m-01 ~]# date # check that all machines show the same time
Mon Aug 2 10:44:18 CST 2021
3. Install IPVS and Tune the Kernel (all machines)
Kubernetes Services support two kube-proxy modes: iptables and ipvs.
Of the two, ipvs performs better, but to use it the ipvs kernel modules must be loaded by hand and kube-proxy must be switched into ipvs mode, as sketched below.
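A minimal sketch of switching kube-proxy into ipvs mode via kubeadm (an assumption about wiring: this KubeProxyConfiguration document can be appended, separated by ---, to the init-config.yaml generated later in this guide):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"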
# 1. Install IPVS and load the modules (all nodes)
[root@k8s-m-01 ~]# yum install -y ipset ipvsadm # install these if the two commands are missing
ipvs is a kernel module with very high forwarding performance; it is generally the first choice.
[root@k8s-n-01 ~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
    /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe ${kernel_module}
    fi
done
# 2. Make the script executable and run it (all nodes)
[root@k8s-n-01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
# 3. Kernel parameter tuning (all nodes)
Loads the IPVS-related settings and applies the configuration.
The point of the tuning is to make the host better suited to running Kubernetes.
[root@k8s-n-01 ~]# vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1 # pass bridged IPv4 traffic through iptables
net.bridge.bridge-nf-call-ip6tables = 1 # same for IPv6
fs.may_detach_mounts = 1
vm.overcommit_memory=1 # do not check whether enough physical memory is available
vm.swappiness=0 # avoid swap; it is only used when the system is out of memory
vm.panic_on_oom=0 # do not panic on OOM, let the OOM killer handle it
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
# Load br_netfilter first (without it the net.bridge.* keys do not exist), then apply everything at once
modprobe br_netfilter
sysctl --system
4. Install Docker (all machines)
1. Docker installation script
Option 1: Huawei Cloud mirror
[root@k8s-m-01 ~]# vim docker.sh
# 1. Remove any previously installed Docker
sudo yum remove docker docker-common docker-selinux docker-engine &&\
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 &&\
# 2. Install the Docker repo
wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo &&\
# 3. Point the repo at the mirror
sudo sed -i 's+download.docker.com+repo.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# 4. Rebuild the yum cache
yum clean all &&\
yum makecache &&\
# 5. Install Docker
sudo yum makecache fast &&\
sudo yum install docker-ce -y &&\
# 6. Enable and start Docker
systemctl enable --now docker.service
# 7. Create the Docker config directory and set a registry mirror (all nodes) ------ run separately; it speeds up image pulls
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF
Option 2: Alibaba Cloud mirror
[root@k8s-n-01 ~]# vim docker.sh
# Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 &&\
# Step 2: add the repo
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo &&\
# Step 3: point the repo at the mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# Step 4: refresh the cache and install Docker CE
sudo yum makecache fast &&\
sudo yum -y install docker-ce &&\
# Step 5: enable and start the Docker service
systemctl enable --now docker.service &&\
# Step 6: registry mirror tuning
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF
2. Uninstalling Docker
# 1. Remove old versions
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
# 2. Remove the dependencies
yum remove docker-ce docker-ce-cli containerd.io -y
# 3. Delete the data directory
rm -rf /var/lib/docker # Docker's default data root
# 4. Registry mirror (Docker tuning)
- Log in to Alibaba Cloud and open the Container Registry console
- Find your personal mirror accelerator address
- Configure Docker to use it, then verify as shown below
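A quick check that the mirror from daemon.json is in effect (plain docker info usage, nothing mirror-specific assumed):
[root@k8s-m-01 ~]# systemctl restart docker
[root@k8s-m-01 ~]# docker info | grep -A1 'Registry Mirrors'
 Registry Mirrors:
  https://k7eoap03.mirror.aliyuncs.com/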
5. Install the Kubernetes Components (all machines)
# 1. Alibaba Cloud Kubernetes repo
[root@k8s-n-02 yum.repos.d]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
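Before pinning a version, you can list what the repo actually provides (standard yum usage):
[root@k8s-n-02 yum.repos.d]# yum list kubeadm --showduplicates | sort -r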
# 2. Install the components. A plain `yum install -y kubelet kubeadm kubectl` would pull the latest version;
# this guide pins 1.21.3
yum install kubectl-1.21.3 kubeadm-1.21.3 kubelet-1.21.3 -y
# 3. Enable at boot only; no need to start it now, the cluster is not yet initialized
systemctl enable --now kubelet.service
# 4. Check the version
[root@k8s-m-01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3",
6. Making the Control Plane Highly Available
1. Install the HA software (all master nodes)
# Any load balancer will do, as long as it makes the api-server highly available
# A common recommendation: keepalived + haproxy
[root@k8s-m-01 ~]# yum install -y keepalived haproxy
2. Keepalived configuration (all master nodes)
# 1. The config differs slightly per node; see the inline comments
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
cd /etc/keepalived
KUBE_APISERVER_IP=`hostname -i`
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_kubernetes {
    script "/etc/keepalived/check_kubernetes.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER                     # change to BACKUP on m2 and m3
    interface eth1
    mcast_src_ip ${KUBE_APISERVER_IP}
    virtual_router_id 51
    priority 100                     # priority: 90 on m2, 80 on m3
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        172.16.1.116
    }
}
EOF
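The vrrp_script above runs /etc/keepalived/check_kubernetes.sh every 2 seconds, but the guide never creates that file. A minimal sketch, assuming the check should track haproxy (which fronts the api-servers on every master):
[root@k8s-m-01 keepalived]# cat > /etc/keepalived/check_kubernetes.sh <<'SH'
#!/bin/bash
# Exit non-zero when haproxy is down, so keepalived applies weight -5
# to this node and the VIP fails over to a healthier master.
systemctl is-active --quiet haproxy || exit 1
SH
[root@k8s-m-01 keepalived]# chmod +x /etc/keepalived/check_kubernetes.sh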
# 2. Reload systemd and start keepalived
[root@k8s-m-01 keepalived]# systemctl daemon-reload
[root@k8s-m-01 /etc/keepalived]# systemctl enable --now keepalived
# 3. Verify that keepalived is running and holds the VIP
[root@k8s-m-01 keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2021-08-01 14:48:23 CST; 27s ago
[root@k8s-m-01 keepalived]# ip a |grep 116
inet 172.16.1.116/32 scope global eth1
3. HAProxy configuration (all master nodes)
# 1. HAProxy load-balances traffic across the api-servers; on a cloud platform you would use an SLB instead
[root@k8s-m-01 keepalived]# vim /etc/haproxy/haproxy.cfg
global
    maxconn     2000
    ulimit-n    16384
    log         127.0.0.1 local0 err
    stats timeout 30s
defaults
    log     global
    mode    http
    option  httplog
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    timeout http-request 15s
    timeout http-keep-alive 15s
frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor
listen stats
    bind *:8006
    mode http
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 30s
    stats realm Haproxy\ Statistics
    stats auth admin:admin
frontend k8s-master
    bind 0.0.0.0:8443
    bind 127.0.0.1:8443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master
backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-m-01 172.16.1.111:6443 check inter 2000 fall 2 rise 2 weight 100
    server k8s-m-02 172.16.1.112:6443 check inter 2000 fall 2 rise 2 weight 100
    server k8s-m-03 172.16.1.113:6443 check inter 2000 fall 2 rise 2 weight 100
# 2. Start haproxy
[root@k8s-m-01 keepalived]# systemctl daemon-reload
[root@k8s-m-01 /etc/keepalived]# systemctl enable --now haproxy.service
# 3. Check the service status
[root@k8s-m-01 keepalived]# systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-07-16 21:12:00 CST; 27s ago
Main PID: 4997 (haproxy-systemd)
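The monitor frontend defined above also gives a one-line liveness check; the stats page from the same config is at http://<master-ip>:8006/stats (credentials admin:admin):
[root@k8s-m-01 ~]# curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:33305/monitor
200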
7. Initializing the First Master (m01)
1. List the images Kubernetes needs
# 1. Default image list
[root@k8s-m-01 ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
quay.io/coreos/flannel:v0.14.0
# 2. The same list, served from an Alibaba Cloud mirror repository
[root@k8s-m-01 ~]# kubeadm config images list --image-repository=registry.cn-shanghai.aliyuncs.com/mmk8s
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-apiserver:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-controller-manager:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-scheduler:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-proxy:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/pause:3.4.1
registry.cn-shanghai.aliyuncs.com/mmk8s/etcd:3.4.13-0
registry.cn-shanghai.aliyuncs.com/mmk8s/coredns:v1.8.0
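The images can be pre-pulled on each master before initialization; this makes kubeadm init faster and surfaces registry problems early (kubeadm supports this directly):
[root@k8s-m-01 ~]# kubeadm config images pull --image-repository=registry.cn-shanghai.aliyuncs.com/mmk8s --kubernetes-version=v1.21.3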
2. Deploy the m01 master
# 1. Generate an initialization config file
[root@k8s-m-01 ~]# kubeadm config print init-defaults >init-config.yaml
# 2. Edit init-config.yaml
[root@k8s-m-01 ~]# vim init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef # your token will differ
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.1.111 # this host's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-m-01 # this host's name
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.16.1.116 # the HA virtual IP
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: 172.16.1.116:8443 # the HA virtual IP and haproxy port
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-shanghai.aliyuncs.com/baim0os # your own mirror repository works here too
kind: ClusterConfiguration
kubernetesVersion: 1.21.3 # version number
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # pod network CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# 3. Initialize the cluster
[root@k8s-m-01 ~]# kubeadm init --config init-config.yaml --upload-certs
You can now join any number of the control-plane node running the following command on each as root:
# copy this command to join the other masters
kubeadm join 172.16.1.116:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3c24cf3218a243148f20c6804d3766d2b6cd5dadc620313d0cf2dcbfd1626c5d \
--control-plane --certificate-key 1e852aa82be85e8b1b4776cce3a0519b1d0b1f76e5633e5262e2436e8f165993
# copy this command to join the worker nodes
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.1.116:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3c24cf3218a243148f20c6804d3766d2b6cd5dadc620313d0cf2dcbfd1626c5d
# 4. Regenerate the join command for worker nodes
To get a join command for the nodes, run this on the master; each run of kubeadm token create issues a fresh token, while older unexpired tokens remain valid
[root@k8s-m-01 ~]# kubeadm token create --print-join-command
kubeadm join 172.16.1.116:8443 --token pfu0ek.ndis39t916v9clq1 --discovery-token-ca-cert-hash sha256:3c24cf3218a243148f20c6804d3766d2b6cd5dadc620313d0cf2dcbfd1626c5d
# 5. After initialization completes, restart kubelet
[root@k8s-m-01 ~]# systemctl restart kubelet.service
# 6. After the remaining masters and workers have joined with the commands above, label the worker nodes (run on master01)
[root@k8s-m-01 ~]# kubectl label nodes k8s-n-01 node-role.kubernetes.io/node=n01
node/k8s-n-01 labeled
[root@k8s-m-01 ~]# kubectl label nodes k8s-n-02 node-role.kubernetes.io/node=n02
node/k8s-n-02 labeled
[root@k8s-m-01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-m-01 Ready control-plane,master 73m v1.21.3
k8s-m-02 Ready control-plane,master 63m v1.21.3
k8s-m-03 Ready control-plane,master 63m v1.21.3
k8s-n-01 Ready node 2m40s v1.21.3
k8s-n-02 Ready node 62m v1.21.3
# 7. Set up kubectl access for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# 8. If you work as root, you can instead point KUBECONFIG at admin.conf (optional)
# takes effect for the current session only
[root@k8s-m-01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
# permanent: set it in a profile script
[root@k8s-m-01 ~]# vim /etc/profile.d/kubernetes.sh
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-m-01 ~]# source /etc/profile
# 9. Enable kubectl command completion (run on all nodes)
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
3. Troubleshooting
# 1. Joining a node to the cluster may fail with:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
PS: make sure Docker is installed and running first, then retry the join!
# 1. Cause:
The br_netfilter module is not loaded, so /proc/sys/net/bridge/bridge-nf-call-iptables is missing or set to 0.
# 2. Fix:
1> Run the following three commands, then retry the join (and persist them as shown after the commands):
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
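Those echo commands do not survive a reboot. To persist them, reuse the modules-load.d and /etc/sysctl.d mechanisms from the kernel-tuning step (the k8s.conf written earlier already contains both keys, so this only matters on a node that skipped that step):
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
cat > /etc/sysctl.d/k8s-bridge.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system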
# 2. kubectl get cs shows scheduler and controller-manager as Unhealthy
[root@k8s-m-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
1. Fix: comment out the --port=0 line in both static pod manifests
[root@k8s-m-01 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
#- --port=0
[root@k8s-m-01 ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
#- --port=0
[root@k8s-m-01 ~]# systemctl restart kubelet.service
2. Check the status again
[root@k8s-m-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
8. The Calico Network Plugin
Calico is a pure layer-3 networking solution that provides multi-host communication for OpenStack VMs and Docker containers. Unlike overlay networks such as flannel or libnetwork's overlay driver, it does not encapsulate traffic: it replaces virtual switching with virtual routing, and every virtual router advertises reachability information (routes) to the rest of the data center over BGP.
1. Install the cluster network plugin (master node)


2. Apply the Calico manifest
# 1. Download the manifest and apply it
[root@k8s-m-01 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
[root@k8s-m-01 ~]# kubectl apply -f calico.yaml
3. Verify the cluster
# Option 1: check the nodes
[root@k8s-m-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-m-01 Ready control-plane,master 14m v1.21.3
k8s-m-02 Ready control-plane,master 4m43s v1.21.3
k8s-m-03 Ready control-plane,master 4m36s v1.21.3
k8s-n-01 Ready node 3m2s v1.21.3
k8s-n-02 Ready node 3m2s v1.21.3
# Option 2: DNS test
[root@k8s-m-01 ~]# kubectl run test -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes # run this; success looks like the output below
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ #
# the output above means the cluster DNS is working
III. Installing the Cluster Web UI (Dashboard)
Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage the cluster itself along with its resources. It gives you an overview of the applications running in the cluster and lets you create or modify individual Kubernetes resources (Deployments, Jobs, DaemonSets, and so on).
1. Install the web UI
You can also scale a Deployment, start a rolling update, restart a Pod, or use the wizard to deploy a new application.
# 1. Download and apply the manifest
Option 1: download from GitHub
[root@k8s-m-01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
Option 2: download from the author's mirror site and apply
[root@k8s-m-01 ~]# wget http://www.mmin.xyz:81/package/k8s/recommended.yaml
[root@k8s-m-01 ~]# kubectl apply -f recommended.yaml
Option 3: apply straight from the URL in one step
[root@k8s-m-01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
# 2. Check the Service ports
[root@k8s-m-01 ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.109.68.74 <none> 8000/TCP 30s
kubernetes-dashboard ClusterIP 10.105.125.10 <none> 443/TCP 34s
# 3. Expose a port for access
[root@k8s-m-01 ~]# kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
type: ClusterIP => type: NodePort # change the type to NodePort
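If you prefer a non-interactive change, the same edit can be made with kubectl patch (equivalent effect):
[root@k8s-m-01 ~]# kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'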
# 4. Check the ports again
[root@k8s-m-01 ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.96.44.119 <none> 8000/TCP 12m
kubernetes-dashboard NodePort 10.96.42.127 <none> 443:40927/TCP 12m
# 5. Create the admin ServiceAccount manifest
[root@k8s-m-01 ~]# vim token.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
# 6. Apply it to the cluster
[root@k8s-m-01 ~]# kubectl apply -f token.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# 7. Retrieve the login token
[root@k8s-m-01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token: | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1NeTJxSDZmaFc1a00zWVRXTHdQSlZlQnNjWUdQMW1zMjg5OTBZQ1JxNVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWpxMm56Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyN2Q4MjIzYi1jYmY1LTQ5ZTUtYjAxMS1hZTAzMzM2MzVhYzQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Q4gC_Kr_Ltl_zG0xkhSri7FQrXxdA5Zjb4ELd7-bVbc_9kAe292w0VM_fVJky5FtldsY0XOp6zbiDVCPkmJi9NXT-P09WvPc9g-ISbbQB_QRIWrEWF544TmRSTZJW5rvafhbfONtqZ_3vWtMkCiDsf7EAwDWLLqA5T46bAn-fncehiV0pf0x_X16t72Qqa-aizHBrVcMsXQU0wnYC7jt373pnhnFHYdcJXx_LgHaC1LgCzx5BfkuphiYOaj_dVB6tAlRkQo3QkFP9GIBW3LcVfhOQBmMQl8KeHvBW4QC67PQRv55IUaUDJ_lRC2QKbeJzaUto-ER4YxFwr4tncBwZQ
# 8. Verify the cluster once more
[root@k8s-m-01 kubernetes]# kubectl run test01 -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Address 1: 10.96.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ #
# 9. Log in with the token
https://192.168.15.111:40927 # the NodePort found in step 4; the dashboard is served over HTTPS