Installing a Kubernetes Cluster with kubeadm
Kubernetes uses a master/node architecture:
    the master runs: kube-apiserver, kube-scheduler, kube-controller-manager, etcd
    the nodes run: kubelet, a container engine (e.g. Docker), kube-proxy, flannel (networking)
Pods come in two kinds:
    self-managed Pods
    controller-managed Pods
Ways to install Kubernetes:
1) The traditional way: run the Kubernetes components as system-level daemons. This is complicated: many CA certificates and configuration files have to be handled by hand.
2) The simple way: use kubeadm, the installation tool provided upstream.
Note: every node, including the master, must have docker and kubelet installed and running.
kubeadm makes the cluster self-managing by deploying the Kubernetes components themselves as Pods; the master also needs flannel in order to reach the nodes.
Installing Kubernetes with kubeadm
Environment:
    master (also runs etcd): 192.168.44.165
    node01: 192.168.44.166
    node02: 192.168.44.167
Prerequisites:
    1. hostname-based communication: /etc/hosts
    2. time synchronization
    3. firewalld/iptables and SELinux disabled
OS: CentOS 7
Manual installation steps (for reference):
1. etcd cluster: master node only
2. flannel: all nodes, including the master
3. configure the Kubernetes master (master only):
    package: kubernetes-master
    services started: kube-apiserver, kube-scheduler, kube-controller-manager
4. configure each Kubernetes node:
    package: kubernetes-node
    start the docker service first, then the Kubernetes services:
    kube-proxy, kubelet
kubeadm installation steps:
1. master and nodes: install kubelet, kubeadm, docker
2. master: kubeadm init (generates the CA certificates, etc.)
3. each node: kubeadm join (joins the cluster)
Start the installation:
Set the hostnames:
192.168.44.177 ---> master
192.168.44.178 ---> node01
192.168.44.176 ---> node02
Add them to the hosts file on every node:
vim /etc/hosts
192.168.44.177 master
192.168.44.178 node01
192.168.44.176 node02
Note: the firewall and SELinux must be disabled and time synchronized.
Disable SELinux temporarily with setenforce 0; to disable it permanently: sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Set up the Aliyun mirror repositories on every node.
Docker yum repo:
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Kubernetes yum repo, also from the Aliyun mirror:
vim kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1          # verify package signatures
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg   # the signing key
enabled=1
Save, exit, and run:
yum repolist
Set up SSH key login. On the master:
ssh-keygen -t rsa        # press Enter through all prompts
ssh-copy-id node01       # copy the key to each node (repeat for node02)
Install the packages:
yum install docker-ce kubelet kubeadm kubectl -y
If the Docker registry cannot be reached, a proxy can be configured in the docker unit file (the example uses the proxy www.ik8s.io, which works):
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"     # the proxy
Environment="NO_PROXY=127.0.0.1/8,192.168.44.0/24"     # addresses that bypass the proxy
ExecStart=/usr/bin/dockerd -H fd://
ExecReload=/bin/kill -s HUP $MAINPID
Then reload systemd and start Docker:
systemctl daemon-reload
systemctl start docker
systemctl status docker
Make sure the two files below both contain 1 (cat them first to check). Docker generates a large number of iptables rules, which can interfere with the bridged iptables calls for IPv4 and IPv6, so bridge filtering must be enabled:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
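The two echo commands above do not survive a reboot. A persistent alternative on CentOS 7 is a sysctl drop-in file (a sketch; the file name k8s.conf is just a convention):

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

Apply it without rebooting by running sysctl --system.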
Inspect what was installed:
rpm -ql kubelet
/etc/kubernetes/manifests            # manifests directory
/etc/sysconfig/kubelet               # configuration file
/etc/systemd/system/kubelet.service  # unit file
/usr/bin/kubelet                     # main binary
Note: early versions of Kubernetes required the swap partition to be disabled (disable: swapoff -a; re-enable: swapon -a; to disable it permanently, comment out the swap line in /etc/fstab with #).
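The permanent swap-off step can be scripted with sed. The sketch below tries the pattern on a sample line first (the same sed command, run as root against /etc/fstab, does the real edit):

```shell
# Demonstrate the sed pattern on a sample fstab line before touching /etc/fstab.
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.sample
# Comment out any uncommented line that mounts a swap partition.
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /tmp/fstab.sample
cat /tmp/fstab.sample
```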
Initialize the master (run on the master):
Docker and kubelet only need to be enabled at boot; they do not have to be started separately here:
systemctl enable docker
systemctl enable kubelet
If swap is not disabled, kubelet can be told to tolerate it, and the ipvs (LVS) proxy mode can be selected:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"   # tolerate enabled swap
KUBE_PROXY_MODE=ipvs                        # use the ipvs proxy mode
Then make sure the ipvs kernel modules are loaded:
ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4
If the ipvs modules are not present, load them with a module script:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
kube-proxy will then support ipvs from the initial installation.
Initialization:
kubeadm version    # check the installed version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
--kubernetes-version             # the Kubernetes version to install
--pod-network-cidr               # the Pod network
--service-cidr                   # the Service network
--ignore-preflight-errors=Swap   # ignore the enabled-swap preflight check
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
The errors occur because k8s.gcr.io cannot be reached. The images can be pulled from a mirror, re-tagged, and used locally; the following docker pull script does exactly that:
vim image.sh
#!/bin/bash
echo ""
echo "Pulling Docker Images from mirrorgooglecontainers..."
echo "==>kube-apiserver:"
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.3
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
echo "==>kube-controller-manager:"
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.3
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
echo "==>kube-scheduler:"
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.3
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
echo "==>kube-proxy:"
docker pull mirrorgooglecontainers/kube-proxy:v1.13.3
docker tag mirrorgooglecontainers/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
echo "==>etcd:"
docker pull mirrorgooglecontainers/etcd:3.2.24
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
echo "==>pause:"
docker pull mirrorgooglecontainers/pause:3.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
echo "==>coredns:"
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
echo "==>docker rmi mirrorgooglecontainers..."
docker rmi coredns/coredns:1.2.6
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.3
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.3
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.3
docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.3
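The repetitive pull/tag/rmi triples above can also be generated by a loop. The sketch below only prints the docker commands so the list can be reviewed; piping the output to sh would execute them (coredns lives in its own repo and is handled separately, as in the script above):

```shell
#!/bin/bash
# Build the docker pull/tag/rmi command list for the kubeadm images.
mirror=mirrorgooglecontainers
images="kube-apiserver:v1.13.3 kube-controller-manager:v1.13.3 \
kube-scheduler:v1.13.3 kube-proxy:v1.13.3 etcd:3.2.24 pause:3.1"
cmds=""
for img in $images; do
  cmds="$cmds
docker pull $mirror/$img
docker tag $mirror/$img k8s.gcr.io/$img
docker rmi $mirror/$img"
done
echo "$cmds"
```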
Check the pulled images:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.13.3 fe242e556a99 3 weeks ago 181MB
k8s.gcr.io/kube-controller-manager v1.13.3 0482f6400933 3 weeks ago 146MB
k8s.gcr.io/kube-proxy v1.13.3 98db19758ad4 3 weeks ago 80.3MB
k8s.gcr.io/kube-scheduler v1.13.3 3a6f709e97a0 3 weeks ago 79.6MB
coredns/coredns latest eb516548c180 6 weeks ago 40.3MB
k8s.gcr.io/coredns 1.2.6 eb516548c180 6 weeks ago 40.3MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 5 months ago 220MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 14 months ago 742kB
Run the initialization command again:
kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
The option --apiserver-advertise-address=<master-ip> can also be added to specify the master's IP address explicitly.
Output like the following means the initialization succeeded:
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:   # preferably run these as a regular user; root is used here for convenience
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:   # run the command below as root on each node to join it to the cluster (note: record the --token and --discovery-token-ca-cert-hash values so more nodes can be added later)
kubeadm join 192.168.44.177:6443 --token p50b8j.9ot6yuxc11zvrwcs --discovery-token-ca-cert-hash sha256:a2ceffc9a67763cb98ca7fd23fc2d93ea5370a9007620214d5e098b1874ba75b
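The token in the join command expires after a limited time (24 hours by default in this era of kubeadm). If a node needs to join after that, a fresh token and the full join command can be printed on the master:

```
kubeadm token create --print-join-command
```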
Run the commands from the initialization output:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster status with kubectl:
[root@master ~]# kubectl get componentstatus    (or: kubectl get cs)
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
List the nodes: kubectl get nodes
List the namespaces:
[root@master ~]# kubectl get ns    (three namespaces exist; the system-level Pods live in kube-system)
NAME STATUS AGE
default Active 38m
kube-public Active 38m
kube-system Active 38m
Install flannel (project page: https://github.com/coreos/flannel) by running:
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Check that flannel is running (docker images also shows whether the flannel image was pulled successfully):
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-hq6db 1/1 Running 0 40m
coredns-86c58d9df4-p7xgr 1/1 Running 0 40m
etcd-master 1/1 Running 0 39m
kube-apiserver-master 1/1 Running 0 39m
kube-controller-manager-master 1/1 Running 0 39m
kube-flannel-ds-amd64-6fqw6 1/1 Running 0 5m6s
kube-proxy-mwqsz 1/1 Running 0 40m
kube-scheduler-master 1/1 Running 0 39m
Operations on the nodes:
First disable the firewall and SELinux.
Set up the Docker yum repo and the Kubernetes yum repo (the Aliyun mirrors), as on the master. The repo files and the image-pull script can simply be copied from the master to each node:
scp docker_image.sh node02:/root
scp docker_image.sh node01:/root
scp docker-ce.repo kubernetes.repo node01:/etc/yum.repos.d/
scp docker-ce.repo kubernetes.repo node02:/etc/yum.repos.d/
Then run: yum repolist
Install the packages:
yum install docker-ce kubelet kubeadm kubectl -y    (kubectl is optional on the nodes; kubelet is required)
Start Docker and enable both services at boot; kubelet itself does not need to be started by hand:
systemctl start docker
systemctl enable docker
systemctl enable kubelet
Run the image-pull script to fetch the images:
sh docker_image.sh
If swap stays enabled, configure kubelet to ignore it:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
Join the node to the cluster by running the command printed by kubeadm init on the master (if swap has not been disabled, append --ignore-preflight-errors=Swap):
kubeadm join 192.168.44.177:6443 --token p50b8j.9ot6yuxc11zvrwcs --discovery-token-ca-cert-hash sha256:a2ceffc9a67763cb98ca7fd23fc2d93ea5370a9007620214d5e098b1874ba75b --ignore-preflight-errors=Swap
Note: if the flannel image cannot be downloaded on a node, the image on the master can be exported and imported on that node.
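A sketch of that export/import, assuming the flannel image tag v0.11.0-amd64 (the tag seen in the docker images listing below) and working SSH from the master to the node:

```
docker save quay.io/coreos/flannel:v0.11.0-amd64 | ssh node01 'docker load'
```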
Once every node has run the commands above and joined the cluster,
list the nodes from the master:
[root@master yum.repos.d]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 79m v1.13.3
node01 Ready <none> 15m v1.13.3
node02 Ready <none> 106s v1.13.3
Check kube-proxy and flannel from the master:
[root@master yum.repos.d]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-86c58d9df4-hq6db 1/1 Running 0 78m 10.244.0.3 master <none> <none>
coredns-86c58d9df4-p7xgr 1/1 Running 0 78m 10.244.0.2 master <none> <none>
etcd-master 1/1 Running 0 77m 192.168.44.177 master <none> <none>
kube-apiserver-master 1/1 Running 0 77m 192.168.44.177 master <none> <none>
kube-controller-manager-master 1/1 Running 0 77m 192.168.44.177 master <none> <none>
kube-flannel-ds-amd64-6fqw6 1/1 Running 0 43m 192.168.44.177 master <none> <none>
kube-flannel-ds-amd64-hn29s 1/1 Running 0 14m 192.168.44.178 node01 <none> <none>
kube-flannel-ds-amd64-tds62 1/1 Running 0 56s 192.168.44.176 node02 <none> <none>
kube-proxy-4ppd9 1/1 Running 0 56s 192.168.44.176 node02 <none> <none>
kube-proxy-bpjm8 1/1 Running 0 14m 192.168.44.178 node01 <none> <none>
kube-proxy-mwqsz 1/1 Running 0 78m 192.168.44.177 master <none> <none>
kube-scheduler-master 1/1 Running 0 77m 192.168.44.177 master <none> <none>
On a node, check the pulled images:
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.13.3 fe242e556a99 4 weeks ago 181MB
k8s.gcr.io/kube-controller-manager v1.13.3 0482f6400933 4 weeks ago 146MB
k8s.gcr.io/kube-proxy v1.13.3 98db19758ad4 4 weeks ago 80.3MB
k8s.gcr.io/kube-scheduler v1.13.3 3a6f709e97a0 4 weeks ago 79.6MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 4 weeks ago 52.6MB
k8s.gcr.io/coredns 1.2.6 eb516548c180 6 weeks ago 40.3MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 5 months ago 220MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 14 months ago 742kB
The Kubernetes cluster installation is now complete.
