
      k8s learning notes, part 2

      Setting up a k8s cluster with kubeadm

      Cluster layout

      master (VM 1): 192.168.190.101
      node1 (VM 3): 192.168.190.103
      node2 (VM 4): 192.168.190.104

      All machines in the cluster can reach each other over the network
      and have outbound internet access

      System initialization

      1. Disable the firewall
      systemctl stop firewalld
      systemctl disable firewalld

      2. Disable SELinux
      sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
      setenforce 0 # temporary

      3. Disable swap
      swapoff -a # temporary
      sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
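The intended sed expression here is `.*swap.*` (markdown rendering often swallows the asterisks). A quick dry run on a sample fstab line — the line itself is illustrative — shows the effect of commenting out the swap entry:

```shell
# Dry run of the swap comment-out on a sample fstab line (illustrative);
# on the real host the same expression is applied to /etc/fstab.
echo "/dev/mapper/centos-swap swap swap defaults 0 0" | sed -r 's/.*swap.*/#&/'
# prints: #/dev/mapper/centos-swap swap swap defaults 0 0
```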

      4. Set each hostname according to the plan
      hostnamectl set-hostname <hostname>

      5. Add hosts entries on the master

      cat >> /etc/hosts <<EOF
      192.168.190.101 master
      192.168.190.103 node1
      192.168.190.104 node2
      EOF
      

      6. Pass bridged IPv4 traffic to the iptables chains

      cat > /etc/sysctl.d/k8s.conf <<EOF
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      EOF
      modprobe br_netfilter # these bridge sysctls only exist once this module is loaded
      sysctl --system      # apply
      

      7. Time synchronization

      yum install ntpdate -y
      ntpdate time.windows.com
      

      Install Docker on all nodes

      wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
      
      yum -y install docker-ce
      
      systemctl enable docker && systemctl start docker
      
      docker --version
      
      cat > /etc/docker/daemon.json << EOF
      {
        "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
        "exec-opts": ["native.cgroupdriver=systemd"]
      }
      EOF
      
      systemctl restart docker
      docker info
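With the exec-opts setting above, docker should report the systemd cgroup driver, which is what the kubelet expects. A minimal self-contained sanity check of the daemon.json content (shown on a temp copy for illustration; on the real host, grep /etc/docker/daemon.json directly, or confirm with `docker info | grep -i cgroup`):

```shell
# Check that the daemon.json sets the systemd cgroup driver
# (a temp copy is used here so the check is self-contained).
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
grep -o 'native.cgroupdriver=systemd' "$tmp"
rm -f "$tmp"
```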
      
      
      

      Add the Aliyun YUM repository

      cat > /etc/yum.repos.d/kubernetes.repo << EOF
      [kubernetes]
      name=Kubernetes
      baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=0
      repo_gpgcheck=0
      gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      
      
      

      Install kubeadm, kubelet and kubectl

      Versions change frequently, so pin a specific version here. Note that the version installed must match the --kubernetes-version later passed to kubeadm init; a mismatch triggers the version-skew warning seen below.

      yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
      systemctl enable kubelet
      
      

      Deploy the Kubernetes master

      Run on 192.168.190.101 (master)

      kubeadm init \
        --image-repository=registry.aliyuncs.com/google_containers \
        --apiserver-advertise-address=192.168.190.101 \
        --kubernetes-version v1.27.0 \
        --service-cidr=10.96.0.0/12 \
        --pod-network-cidr=10.244.0.0/16 \
        --ignore-preflight-errors=all
      
      kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.5
      
      It failed with the following errors:
      [init] Using Kubernetes version: v1.27.0
      [preflight] Running pre-flight checks
      	[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
      	[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
      	[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
      	[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
      	[WARNING CRI]: container runtime is not running: output: time="2024-04-18T17:10:41+08:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
      , error: exit status 1
      	[WARNING KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.28.2" Control plane version: "1.27.0"
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      	[WARNING ImagePull]: failed to pull image registry.k8s.io/kube-apiserver:v1.27.0: output: time="2024-04-18T17:10:41+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
      , error: exit status 1
      	[WARNING ImagePull]: failed to pull image registry.k8s.io/kube-controller-manager:v1.27.0: output: time="2024-04-18T17:10:41+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
      , error: exit status 1
      	[WARNING ImagePull]: failed to pull image registry.k8s.io/kube-scheduler:v1.27.0: output: time="2024-04-18T17:10:41+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
      , error: exit status 1
      	[WARNING ImagePull]: failed to pull image registry.k8s.io/kube-proxy:v1.27.0: output: time="2024-04-18T17:10:41+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
      , error: exit status 1
      	[WARNING ImagePull]: failed to pull image registry.k8s.io/pause:3.9: output: time="2024-04-18T17:10:41+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
      , error: exit status 1
      	[WARNING ImagePull]: failed to pull image registry.k8s.io/etcd:3.5.9-0: output: time="2024-04-18T17:10:41+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
      , error: exit status 1
      	[WARNING ImagePull]: failed to pull image registry.k8s.io/coredns/coredns:v1.10.1: output: time="2024-04-18T17:10:41+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
      , error: exit status 1
      [certs] Using certificateDir folder "/etc/kubernetes/pki"
      [certs] Using existing ca certificate authority
      [certs] Using existing apiserver certificate and key on disk
      [certs] Using existing apiserver-kubelet-client certificate and key on disk
      [certs] Using existing front-proxy-ca certificate authority
      [certs] Using existing front-proxy-client certificate and key on disk
      [certs] Using existing etcd/ca certificate authority
      [certs] Using existing etcd/server certificate and key on disk
      [certs] Using existing etcd/peer certificate and key on disk
      [certs] Using existing etcd/healthcheck-client certificate and key on disk
      [certs] Using existing apiserver-etcd-client certificate and key on disk
      [certs] Using the existing "sa" key
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
      [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
      [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
      [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Starting the kubelet
      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
      [kubelet-check] Initial timeout of 40s passed.
      [kubelet-check] It seems like the kubelet isn't running or healthy.
      [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
      [kubelet-check] It seems like the kubelet isn't running or healthy.
      [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
      [kubelet-check] It seems like the kubelet isn't running or healthy.
      [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
      [kubelet-check] It seems like the kubelet isn't running or healthy.
      [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
      [kubelet-check] It seems like the kubelet isn't running or healthy.
      [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
      
      Unfortunately, an error has occurred:
      	timed out waiting for the condition
      
      This error is likely caused by:
      	- The kubelet is not running
      	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
      
      If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
      	- 'systemctl status kubelet'
      	- 'journalctl -xeu kubelet'
      
      Additionally, a control plane component may have crashed or exited when started by the container runtime.
      To troubleshoot, list all containers using your preferred container runtimes CLI.
      Here is one example how you may list all running Kubernetes containers by using crictl:
      	- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
      	Once you have found the failing container, you can inspect its logs with:
      	- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
      error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
      To see the stack trace of this error execute with --v=5 or higher
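The repeated "CRI v1 runtime API is not implemented" messages above are the typical symptom of containerd shipping with its cri plugin disabled in /etc/containerd/config.toml. A sketch of the likely fix, demonstrated on a temp copy of the two relevant config lines — on the real host, apply the same edits to /etc/containerd/config.toml (or regenerate it with `containerd config default`) and then `systemctl restart containerd`:

```shell
# Demonstrate the containerd config change on sample lines (illustrative);
# apply the same sed to /etc/containerd/config.toml on the real host.
cfg=$(mktemp)
printf 'disabled_plugins = ["cri"]\nSystemdCgroup = false\n' > "$cfg"
# Re-enable the cri plugin and use systemd cgroups to match the kubelet:
sed -i -e 's/disabled_plugins = \["cri"\]/disabled_plugins = []/' \
       -e 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"
cat "$cfg"
rm -f "$cfg"
```

The KubeletVersion warning is a separate problem: the installed kubelet (1.28.2) does not match --kubernetes-version v1.27.0, so either install matching 1.27.x kubelet/kubeadm/kubectl packages or pass the version that is actually installed.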
      
      
      
      Tried initializing like this instead:
      kubeadm init \
      --kubernetes-version v1.27.0 \
      --image-repository registry.aliyuncs.com/google_containers \
      --pod-network-cidr=10.244.0.0/16
      
      
      Error again — this time the preflight checks fail outright because the manifest and certificate files from the earlier partial init still exist (kubeadm reset would clear them before a retry):
      [init] Using Kubernetes version: v1.27.0
      [preflight] Running pre-flight checks
      error execution phase preflight: [preflight] Some fatal errors occurred:
      	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
      	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
      	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
      	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
      	[ERROR CRI]: container runtime is not running: output: time="2024-04-18T16:31:40+08:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
      , error: exit status 1
      	[ERROR KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.28.2" Control plane version: "1.27.0"
      [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
      To see the stack trace of this error execute with --v=5 or higher
      
      
      
      
      Pod CIDR: 172.20.20.0/20

      Service CIDR: 172.21.20.0/20
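If these alternative ranges were used, they would map onto the init flags like this (dry-run sketch: the command is printed, not executed):

```shell
# Print the kubeadm init invocation for the noted ranges (dry run).
POD_CIDR=172.20.20.0/20
SVC_CIDR=172.21.20.0/20
echo kubeadm init \
  --pod-network-cidr="$POD_CIDR" \
  --service-cidr="$SVC_CIDR" \
  --image-repository registry.aliyuncs.com/google_containers
```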
      
      
      
      
      posted @ 2025-08-04 19:54  BJliberty  Views(11)  Comments(0)