Deploying a Kubernetes Cluster with kubeadm


Network, hosts-file, and other basic host configuration is omitted here.

I. Host security configuration

1. Disable firewalld (on every machine)

[root@XXX ~]# systemctl stop firewalld 
[root@XXX ~]# systemctl disable firewalld 

# Confirm it is no longer running
[root@XXX ~]# firewall-cmd --state 
not running
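
As an optional double-check, the standard systemctl queries confirm the service state as well:

[root@XXX ~]# systemctl is-active firewalld     # should report "inactive"
[root@XXX ~]# systemctl is-enabled firewalld    # should report "disabled"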

2. SELinux configuration (on every machine)

Make the following change; the system must be rebooted for it to take full effect.

[root@XXX ~]# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

[root@master local]# cat /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 
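
If you would rather not continue with SELinux still enforcing before the reboot, it can also be switched to permissive mode for the current boot; the config change above is still what makes the change permanent.

[root@XXX ~]# setenforce 0
[root@XXX ~]# getenforce     # Permissive now, Disabled after the reboot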

3. Host time synchronization (on every machine)

Because the system was installed with the minimal profile, ntpdate has to be installed separately.

[root@XXX ~]# yum -y install ntpdate 
[root@XXX ~]# crontab -e
0 */1 * * * ntpdate time1.aliyun.com

Press Esc, type :wq, then press Enter to save.

[root@master local]# ntpdate time1.aliyun.com
 4 Nov 14:54:54 ntpdate[1637]: adjust time server 203.107.6.88 offset 0.238380 sec
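
Optionally confirm that the cron entry was saved and that the system clock looks sane:

[root@XXX ~]# crontab -l
0 */1 * * * ntpdate time1.aliyun.com
[root@XXX ~]# timedatectl    # shows local time, universal time and the configured time zone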

4. Permanently disable the swap partition (on every machine)

kubeadm requires the swap partition to be disabled. After editing the configuration file, the operating system must be rebooted. If you chose automatic partitioning when installing CentOS 7, a swap partition will certainly have been created.

# Open the file and comment out the swap entry
[root@node2 local]# vim /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed Sep 16 18:50:24 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=71a3a2c7-1e60-4bc6-b641-8e82b3d1e79b /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

# Save and quit

# Check: swap is still active at this point because we have not rebooted yet
[root@node2 local]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3770         138        3456          11         175        3421
Swap:          2047           0        2047

# Reboot
[root@node2 local]# reboot

# Check again after the reboot
[root@node1 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3770         134        3448          11         187        3419
Swap:             0           0           0
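
Swap can also be turned off immediately instead of waiting for a reboot; the /etc/fstab edit above is still what keeps it off after the next boot.

[root@XXX ~]# swapoff -a
[root@XXX ~]# free -m        # the Swap line should now read 0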

5. Enable bridge filtering (on every machine)

The goal is to have the kernel pass traffic crossing Linux bridges to iptables/ip6tables for filtering.

# Add bridge filtering and IP forwarding settings
[root@master ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
net.ipv4.ip_forward = 1 
vm.swappiness = 0

# Load the br_netfilter module
[root@master ~]# modprobe br_netfilter
[root@master ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter

# Apply the bridge-filtering sysctl configuration
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
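
Note that the modprobe above does not survive a reboot. A minimal way to make it persistent on a systemd-based CentOS 7 host (an addition not shown in the original steps) is a modules-load.d entry:

[root@master ~]# cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF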

6. Enable IPVS (on every machine)

IPVS forwards traffic more efficiently than iptables, so we set up IPVS for kube-proxy directly here.

# Install ipset and ipvsadm
[root@master ~]# yum -y install ipset ipvsadm

# Add the modules to load (copy everything below and paste it into the shell)
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash 
modprobe -- ip_vs 
modprobe -- ip_vs_rr 
modprobe -- ip_vs_wrr 
modprobe -- ip_vs_sh 
modprobe -- nf_conntrack_ipv4 
EOF

# Verify the file was created
[root@master ~]# ll /etc/sysconfig/modules/
total 4
-rw-r--r-- 1 root root 130 Nov  4 15:22 ipvs.modules

# Make it executable
[root@master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules 

# Run it
[root@master ~]# sh /etc/sysconfig/modules/ipvs.modules

# Verify one of the modules
[root@master ~]# lsmod | grep ip_vs_rr
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
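
Optionally confirm that every module listed in the script is loaded:

[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4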

7. Install a specific docker-ce version on the master and worker nodes (on every machine)

Kubernetes does not manage containers directly; its smallest unit of management is the pod, and a pod manages its containers. K8s therefore relies on a container runtime such as Docker to manage the containers themselves.

Get the yum repo. The Tsinghua mirror is recommended, since the official repo downloads slowly due to network speed.

[root@master ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
--2020-11-06 11:35:06--  https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)... 101.6.8.193, 2402:f000:1:408:8100::1
Connecting to mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)|101.6.8.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1919 (1.9K) [application/octet-stream]
Saving to: '/etc/yum.repos.d/docker-ce.repo'

100%[======================================================================>] 1,919       --.-K/s   in 0s      

2020-11-06 11:35:07 (583 MB/s) - '/etc/yum.repos.d/docker-ce.repo' saved [1919/1919]


# List the yum repos
[root@master ~]# ls /etc/yum.repos.d/
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo          docker-ce.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo  CentOS-x86_64-kernel.repo

# List and sort the available docker-ce versions; we will use 18.06.3.ce-3.el7
[root@master ~]# yum list docker-ce.x86_64 --showduplicates | sort -r

# Install the specified docker version
[root@master ~]# yum -y install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7


# Verify the docker version
[root@master ~]# docker version
Client:
 Version:           18.06.3-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        d7080c1
 Built:             Wed Feb 20 02:26:51 2019
 OS/Arch:           linux/amd64
 Experimental:      false
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?


# Enable docker to start on boot
[root@master ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

# Start docker
[root@master ~]# systemctl start docker

# Checking again now also shows the Server section
[root@master ~]# docker version
Client:
 Version:           18.06.3-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        d7080c1
 Built:             Wed Feb 20 02:26:51 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.3-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       d7080c1
  Built:            Wed Feb 20 02:28:17 2019
  OS/Arch:          linux/amd64
  Experimental:     false

8. Modify the docker-ce service configuration file

The purpose of this change is so that /etc/docker/daemon.json can be used for further configuration later.

# Modify as follows. Note: some versions do not need this change, so check the file first.
[root@XXX ~]# cat /usr/lib/systemd/system/docker.service 
[Unit] 
... 

[Service] 
... 
ExecStart=/usr/bin/dockerd    # If this line in the original file has an -H option, delete everything from -H (inclusive) to the end of the line.
... 

[Install] 
...

# Add the daemon.json file
[root@node1 ~]# vim /etc/docker/daemon.json
{
        "exec-opts": ["native.cgroupdriver=systemd"]
}

# Restart docker
[root@master ~]# systemctl restart docker

# Check that it started
[root@master ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-11-06 15:13:47 CST; 53s ago
     Docs: https://docs.docker.com
 Main PID: 10633 (dockerd)
    Tasks: 22
   Memory: 46.8M
   CGroup: /system.slice/docker.service
           ├─10633 /usr/bin/dockerd
           └─10640 docker-containerd --config 
...
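
Optionally confirm that docker really picked up the systemd cgroup driver from daemon.json:

[root@master ~]# docker info | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd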

9. Install and configure the Kubernetes packages

These packages must be installed on every node of the k8s cluster. The default yum repo is Google's; the Aliyun mirror can be used instead.

Required packages: kubeadm, kubelet, kubectl, docker-ce
- kubeadm: initializes and manages the cluster, listed as version 1.17.2 (the install below actually pulls 1.19.3)
- kubelet: receives instructions from the api-server and manages the pod lifecycle, version 1.17.2
- kubectl: command-line tool for managing the cluster, version 1.17.2
- docker-ce: 18.06.3
# Google yum repo
[kubernetes] 
name=Kubernetes 
baseurl=https://packages.cloud.google.com/yum /repos/kubernetes-el7-x86_64 
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

# Aliyun yum repo
[kubernetes]
name=Kubernetes 
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/  
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg  
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# Create the repo file (on every machine)
[root@master ~]# vim /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes 
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/  
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg  
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Save and quit with :wq


# Check that the repo is usable (on every machine)
[root@master ~]# yum list | grep kubeadm
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
y
kubeadm.x86_64                              1.19.3-0                   kubernetes

# Install the packages
[root@master ~]# yum -y install kubeadm kubelet kubectl
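
If you prefer to pin an exact version instead of installing whatever is newest in the mirror, the packages can be versioned explicitly. The version string below is an example matching what this walkthrough ends up with; check what your mirror offers with yum list --showduplicates kubeadm.

[root@master ~]# yum -y install kubeadm-1.19.3-0 kubelet-1.19.3-0 kubectl-1.19.3-0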

9.1 Adjust the kubelet configuration

This configures the kubelet; without it, the k8s cluster may fail to start.

# To keep the cgroup driver used by the kubelet consistent with the one used by docker, modify the following file.
[root@XXX ~]# vim /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

# Enable it at boot. Note: do NOT start it manually here; kubeadm starts it during initialization.
[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.


9.2 Preparing the cluster container images

Because the cluster is deployed with kubeadm, all core components run as Pods, so the container images must be prepared on the hosts in advance; hosts in different roles need different images.

Master node images
# Run on the master node
# List the container images the cluster will use
[root@master ~]# kubeadm config images list
W1108 17:10:38.408422   11402 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.3
k8s.gcr.io/kube-controller-manager:v1.19.3
k8s.gcr.io/kube-scheduler:v1.19.3
k8s.gcr.io/kube-proxy:v1.19.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
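
As an alternative to the re-tagging script below, kubeadm itself can pull all of these images from a mirror repository. This is a hedged sketch; registry.aliyuncs.com/google_containers is a commonly used mirror and is not something the original steps rely on.

[root@master ~]# kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.19.3
# Images pulled this way keep the mirror's repository name, so kubeadm init would then also
# need the same --image-repository flag. The script below instead re-tags everything back to
# k8s.gcr.io, which is the approach this walkthrough follows.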


# Create a download script (Python here, a shell script works just as well). It pulls each image from a domestic mirror, renames it, and removes the original tag (pulling directly from k8s.gcr.io usually fails in China without a proxy).
[root@master ~]# vim kubeadm_images.py
#! /usr/bin/python3
# Pull each image from the Aliyun mirror registry, re-tag it to the k8s.gcr.io name
# that kubeadm expects, then remove the mirror-tagged copy.

import os

images=[
    "kube-apiserver:v1.19.3",
    "kube-controller-manager:v1.19.3",
    "kube-scheduler:v1.19.3",
    "kube-proxy:v1.19.3",
    "pause:3.2",
    "etcd:3.4.13-0",
    "coredns:1.7.0",
]
 
for i in images:
    pullCMD = "docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/{}".format(i)
    print("run cmd '{}', please wait ...".format(pullCMD))
    os.system(pullCMD)
 
    tagCMD = "docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/{} k8s.gcr.io/{}".format(i, i)
    print("run cmd '{}', please wait ...".format(tagCMD ))
    os.system(tagCMD)
 
    rmiCMD = "docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/{}".format(i)
    print("run cmd '{}', please wait ...".format(rmiCMD ))
    os.system(rmiCMD)

# Run the script
[root@master ~]# python kubeadm_images.py

# Check the downloaded images
[root@master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.19.3             cdef7632a242        3 weeks ago         118MB
k8s.gcr.io/kube-apiserver            v1.19.3             a301be0cd44b        3 weeks ago         119MB
k8s.gcr.io/kube-controller-manager   v1.19.3             9b60aca1d818        3 weeks ago         111MB
k8s.gcr.io/kube-scheduler            v1.19.3             aaefbfa906bd        3 weeks ago         45.7MB
k8s.gcr.io/etcd                      3.4.13-0            0369cf4303ff        2 months ago        253MB
k8s.gcr.io/coredns                   1.7.0               bfe3a36ebd25        4 months ago        45.2MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        8 months ago        683kB

Worker node images (this uses the docker basics of saving and loading images)

Only two images are needed on the worker nodes.

# Run on the master node
# Export the first image to a tar file
[root@master ~]# docker save -o kube-p.tar k8s.gcr.io/kube-proxy:v1.19.3
[root@master ~]# ls
anaconda-ks.cfg  kubeadm_images.py  kube-p.tar

# Export the second image to a tar file
[root@master ~]# docker save -o p.tar k8s.gcr.io/pause:3.2
[root@master ~]# ls
anaconda-ks.cfg  kubeadm_images.py  kube-p.tar  p.tar

# Copy both tar files to worker nodes node1 and node2
[root@master ~]# scp kube-p.tar p.tar node1:/root
kube-p.tar                                                                         100%  114MB  28.5MB/s   00:04    
p.tar                                                                              100%  677KB  24.2MB/s   00:00    
[root@master ~]# scp kube-p.tar p.tar node2:/root
kube-p.tar                                                                         100%  114MB  16.3MB/s   00:07    
p.tar                                                                              100%  677KB  23.1MB/s   00:00  

# On each of the two worker nodes, load the tar files that were just copied over (these are our images)
[root@node2 ~]# ls
anaconda-ks.cfg  kube-p.tar  p.tar
You have new mail in /var/spool/mail/root
[root@node2 ~]# docker load -i kube-p.tar 
91e3a07063b3: Loading layer [==================================================>]  53.89MB/53.89MB
b4e54f331697: Loading layer [==================================================>]  21.78MB/21.78MB
b9b82a97c787: Loading layer [==================================================>]  5.168MB/5.168MB
1b55846906e8: Loading layer [==================================================>]  4.608kB/4.608kB
061bfb5cb861: Loading layer [==================================================>]  8.192kB/8.192kB
78dd6c0504a7: Loading layer [==================================================>]  8.704kB/8.704kB
f1b0b899d419: Loading layer [==================================================>]  38.81MB/38.81MB
Loaded image: k8s.gcr.io/kube-proxy:v1.19.3
[root@node2 ~]# docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy   v1.19.3             cdef7632a242        3 weeks ago         118MB
[root@node2 ~]# docker load -i p.tar 
ba0dae6243cc: Loading layer [==================================================>]  684.5kB/684.5kB
Loaded image: k8s.gcr.io/pause:3.2
[root@node2 ~]# docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy   v1.19.3             cdef7632a242        3 weeks ago         118MB
k8s.gcr.io/pause        3.2                 80d28bedfe5d        8 months ago        683kB

9.3 Initializing the k8s cluster

Run on the master node.

# Run kubeadm init (generates the certificates; note: copy the entire output, it will be needed later)
[root@master ~]# kubeadm init --kubernetes-version=v1.19.3 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.177.135
W1108 17:48:12.509898   14299 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.177.135]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.177.135 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.177.135 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.002852 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ttd325.fkw9ksxtbnfbd5kx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.177.135:6443 --token ttd325.fkw9ksxtbnfbd5kx \
    --discovery-token-ca-cert-hash sha256:0e273db3742cf2f7d981e550fa0e7b830004b3f41e8712af5aa975ce2823da63 
    
# Copy the output above and save it (e.g. on your desktop) before continuing
# Prepare the cluster admin kubeconfig
[root@master ~]# mkdir .kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf .kube/config
[root@master ~]# ll .kube/config 
-rw------- 1 root root 5567 Nov  8 17:55 .kube/config
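
For a quick one-off test as root, exporting the kubeconfig path also works; this only affects the current shell, while copying admin.conf as above is the persistent approach.

[root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@master ~]# kubectl get nodes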


# Network configuration (deploying the calico network plugin); copy the prepared calico-39 directory (calico v3.9 images and manifest) to the worker nodes
[root@master ~]# scp -r calico-39 node1:/root
calico-cni.tar                                                                     100%  156MB  22.2MB/s   00:07    
calico-node.tar                                                                    100%  186MB  18.6MB/s   00:10    
calico.yml                                                                         100%   21KB   4.2MB/s   00:00    
kube-controllers.tar                                                               100%   48MB  24.1MB/s   00:02    
pod2daemon-flexvol.tar                                                             100% 9821KB  37.3MB/s   00:00    
[root@master ~]# scp -r calico-39 node2:/root
calico-cni.tar                                                                     100%  156MB  25.9MB/s   00:06    
calico-node.tar                                                                    100%  186MB  20.6MB/s   00:09    
calico.yml                                                                         100%   21KB   1.9MB/s   00:00    
kube-controllers.tar                                                               100%   48MB  24.1MB/s   00:02    
pod2daemon-flexvol.tar                                                             100% 9821KB  49.6MB/s   00:00    
[root@master ~]# ll
total 117580
-rw-------. 1 root root      1271 Sep 16 18:54 anaconda-ks.cfg
drwxr-xr-x  2 root root       127 Nov  8 17:59 calico-39
-rw-r--r--  1 root root       786 Nov  8 17:15 kubeadm_images.py
-rw-------  1 root root 119695360 Nov  8 17:22 kube-p.tar
-rw-------  1 root root    692736 Nov  8 17:24 p.tar

# Prepare the calico images (run on every machine)
[root@master calico-39]# ll
total 408720
-rw-r--r-- 1 root root 163265024 Nov  8 17:59 calico-cni.tar
-rw-r--r-- 1 root root 194709504 Nov  8 17:59 calico-node.tar
-rw-r--r-- 1 root root     21430 Nov  8 17:59 calico.yml
-rw-r--r-- 1 root root  50465280 Nov  8 17:59 kube-controllers.tar
-rw-r--r-- 1 root root  10056192 Nov  8 17:59 pod2daemon-flexvol.tar
[root@master calico-39]# docker load -i calico-cni.tar 
1c95c77433e8: Loading layer [==================================================>]  72.47MB/72.47MB
f919277f01fb: Loading layer [==================================================>]  90.76MB/90.76MB
0094c919faf3: Loading layer [==================================================>]  10.24kB/10.24kB
9e1263ee4198: Loading layer [==================================================>]   2.56kB/2.56kB
Loaded image: calico/cni:v3.9.0
[root@master calico-39]# docker load -i calico-node.tar 
538afb24c98b: Loading layer [==================================================>]  33.76MB/33.76MB
85b8bbfa3535: Loading layer [==================================================>]  3.584kB/3.584kB
7a653a5cb14b: Loading layer [==================================================>]  3.584kB/3.584kB
97cc86557fed: Loading layer [==================================================>]  21.86MB/21.86MB
3abae82a71aa: Loading layer [==================================================>]  11.26kB/11.26kB
7c85b99e7c27: Loading layer [==================================================>]  11.26kB/11.26kB
0e20735d7144: Loading layer [==================================================>]   6.55MB/6.55MB
2e3dede6195a: Loading layer [==================================================>]  2.975MB/2.975MB
f85ff1d9077d: Loading layer [==================================================>]  55.87MB/55.87MB
9d55754fd45b: Loading layer [==================================================>]   1.14MB/1.14MB
Loaded image: calico/node:v3.9.0
[root@master calico-39]# docker load -i kube-controllers.tar 
fd6ffbcdb09f: Loading layer [==================================================>]  47.35MB/47.35MB
9c4005f3e0bc: Loading layer [==================================================>]  3.104MB/3.104MB
Loaded image: calico/kube-controllers:v3.9.0
[root@master calico-39]# docker load -i pod2daemon-flexvol.tar 
3fc64803ca2d: Loading layer [==================================================>]  4.463MB/4.463MB
3aff8caf48a7: Loading layer [==================================================>]   5.12kB/5.12kB
89effeea5ce5: Loading layer [==================================================>]  5.572MB/5.572MB
Loaded image: calico/pod2daemon-flexvol:v3.9.0
[root@master calico-39]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.19.3             cdef7632a242        3 weeks ago         118MB
k8s.gcr.io/kube-scheduler            v1.19.3             aaefbfa906bd        3 weeks ago         45.7MB
k8s.gcr.io/kube-apiserver            v1.19.3             a301be0cd44b        3 weeks ago         119MB
k8s.gcr.io/kube-controller-manager   v1.19.3             9b60aca1d818        3 weeks ago         111MB
k8s.gcr.io/etcd                      3.4.13-0            0369cf4303ff        2 months ago        253MB
k8s.gcr.io/coredns                   1.7.0               bfe3a36ebd25        4 months ago        45.2MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        8 months ago        683kB
calico/node                          v3.9.0              f9d62fb5edb1        14 months ago       190MB
calico/pod2daemon-flexvol            v3.9.0              aa79ce3237eb        14 months ago       9.78MB
calico/cni                           v3.9.0              56c7969ed8e6        14 months ago       160MB
calico/kube-controllers              v3.9.0              f5cc48269a09        14 months ago       50.4MB


# Modify calico's yml file on the master node only
# Calico's automatic interface detection can pick the wrong NIC, so tell it which physical
# interface to use: add lines 607 and 608, and set the pod CIDR value at lines 621-622
[root@master calico-39]# vim calico.yml
604             # Auto-detect the BGP IP address.
605             - name: IP
606               value: "autodetect"
607             - name: IP_AUTODETECTION_METHOD
608               value: "interface=ens.*"
609             # Enable IPIP
610             - name: CALICO_IPV4POOL_IPIP
611               value: "Always"
612             # Set MTU for tunnel device used if ipip is enabled
613             - name: FELIX_IPINIPMTU
614               valueFrom:
615                 configMapKeyRef:
616                   name: calico-config
617                   key: veth_mtu
618             # The default IPv4 pool to create on startup if none exists. Pod IPs will be
619             # chosen from this range. Changing this value after installation will have
620             # no effect. This should fall within `--cluster-cidr`.
621             - name: CALICO_IPV4POOL_CIDR
622               value: "172.16.0.0/16"
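
Before applying the manifest, a quick grep can confirm that both edits are in place (line numbers vary between calico releases):

[root@master calico-39]# grep -n -e "IP_AUTODETECTION_METHOD" -e "interface=ens" -e "CALICO_IPV4POOL_CIDR" -A1 calico.yml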

# Apply the calico resource manifest
[root@master calico-39]# kubectl apply -f calico.yml
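
It can take a minute or two for the calico and coredns pods to become Ready; optionally watch them before joining the workers:

[root@master calico-39]# kubectl get pods -n kube-system -o wide -w
# press Ctrl-C once the calico-node, calico-kube-controllers and coredns pods show Running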

# Join the worker nodes to the cluster. Note: run this only on the worker nodes, from the home
# directory; the command is the last line of the kubeadm init output copied earlier. The output
# below is what success looks like.
[root@node1 ~]# kubeadm join 192.168.177.135:6443 --token ttd325.fkw9ksxtbnfbd5kx \
>     --discovery-token-ca-cert-hash sha256:0e273db3742cf2f7d981e550fa0e7b830004b3f41e8712af5aa975ce2823da63
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
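
If a node is joined later and the token from the saved output has already expired (bootstrap tokens are valid for 24 hours by default), a fresh join command can be printed on the master:

[root@master ~]# kubeadm token create --print-join-command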

Verifying that the k8s cluster works (must be run on the master node)

[root@master calico-39]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   29m     v1.19.3
node1    Ready    <none>   2m30s   v1.19.3
node2    Ready    <none>   2m25s   v1.19.3

# Check the health of the cluster components
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}  
[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.177.135:6443
KubeDNS is running at https://192.168.177.135:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
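
As an extra smoke test (a hedged example; the deployment name is arbitrary and not part of the original article), schedule a simple workload and expose it:

[root@master ~]# kubectl create deployment nginx-test --image=nginx
[root@master ~]# kubectl expose deployment nginx-test --port=80 --type=NodePort
[root@master ~]# kubectl get pods,svc -o wide
# once the pod is Running, curl any node IP on the NodePort shown for the nginx-test service;
# clean up afterwards with: kubectl delete deployment,svc nginx-test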


Troubleshooting:

After a normal installation of kubernetes 1.18.6 (and likewise the 1.19.x used above), the following error may appear:

[root@k8s-master manifests]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
This happens because kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests set the default port to 0 (--port=0); commenting that line out in each file fixes it.

Change kube-controller-manager.yaml: comment out line 27

  1 apiVersion: v1
  2 kind: Pod
  3 metadata:
  4   creationTimestamp: null
  5   labels:
  6     component: kube-controller-manager
  7     tier: control-plane
  8   name: kube-controller-manager
  9   namespace: kube-system
 10 spec:
 11   containers:
 12   - command:
 13     - kube-controller-manager
 14     - --allocate-node-cidrs=true
 15     - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
 16     - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
 17     - --bind-address=127.0.0.1
 18     - --client-ca-file=/etc/kubernetes/pki/ca.crt
 19     - --cluster-cidr=10.244.0.0/16
 20     - --cluster-name=kubernetes
 21     - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
 22     - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
 23     - --controllers=*,bootstrapsigner,tokencleaner
 24     - --kubeconfig=/etc/kubernetes/controller-manager.conf
 25     - --leader-elect=true
 26     - --node-cidr-mask-size=24
 27   #  - --port=0
 28     - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
 29     - --root-ca-file=/etc/kubernetes/pki/ca.crt
 30     - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
 31     - --service-cluster-ip-range=10.1.0.0/16
 32     - --use-service-account-credentials=true

Change kube-scheduler.yaml: comment out line 19

  1 apiVersion: v1
  2 kind: Pod
  3 metadata:
  4   creationTimestamp: null
  5   labels:
  6     component: kube-scheduler
  7     tier: control-plane
  8   name: kube-scheduler
  9   namespace: kube-system
 10 spec:
 11   containers:
 12   - command:
 13     - kube-scheduler
 14     - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
 15     - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
 16     - --bind-address=127.0.0.1
 17     - --kubeconfig=/etc/kubernetes/scheduler.conf
 18     - --leader-elect=true
 19   #  - --port=0

Then restart the kubelet on all three machines:

[root@k8s-master ]# systemctl restart kubelet.service

Check again and everything is healthy:

[root@k8s-master manifests]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
Original article: https://www.cnblogs.com/wyh-study/p/13947064.html