Deploying Kubernetes 1.23.1 with kubeadm on CentOS 7.8

I. Machines

Host     IP              Spec        OS
master   192.168.0.160   2C/4G/50G   CentOS 7.8
node01   192.168.0.6     2C/4G/50G   CentOS 7.8
node02   192.168.0.167   2C/4G/50G   CentOS 7.8

II. Host configuration

The following steps must be run on every node.

1. Set the hostname

Set each node's hostname on that node (one command per machine):

hostnamectl set-hostname  master

hostnamectl set-hostname  node01

hostnamectl set-hostname  node02

2. Configure /etc/hosts

Append the cluster entries (appending with >> keeps the existing localhost entries intact):

cat <<EOF >>/etc/hosts
192.168.0.160  master
192.168.0.6    node01
192.168.0.167  node02
EOF

3. Disable the firewall and SELinux

Stop the firewall
systemctl stop firewalld
Disable it at boot
systemctl disable firewalld
Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
Reboot the system
reboot
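
If you prefer not to edit the file in vi, the same SELinux change can be made non-interactively (setenforce 0 switches SELinux to permissive for the current boot, and the sed edit makes it permanent across reboots):

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config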

4. Disable the swap partition

Edit /etc/fstab and comment out the swap entry:

vi  /etc/fstab
 #
 # /etc/fstab
 # Created by anaconda on Mon Jan 21 19:19:41 2019
 #
 # Accessible filesystems, by reference, are maintained under '/dev/disk'
 # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
 #
 /dev/mapper/centos-root /                       xfs     defaults        0 0
 UUID=214b916c-ad23-4762-b916-65b53fce1920 /boot                   xfs     defaults        0     0
 #/dev/mapper/centos-swap swap                    swap    defaults        0 0
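
Commenting out the fstab entry only takes effect after a reboot; the same result can also be reached without opening an editor (the sed expression below is just one way to comment out an active swap line):

# Turn swap off for the current boot
swapoff -a
# Comment out any uncommented swap entry in /etc/fstab
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab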

5. Create /etc/sysctl.d/k8s.conf with the following content

cat <<EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF

# Run these commands to apply the changes

modprobe br_netfilter

sysctl -p /etc/sysctl.d/k8s.conf
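
modprobe only loads br_netfilter for the currently running kernel; to make sure it is loaded again after a reboot, you can additionally drop a modules-load.d file and re-check the values (an optional extra step):

cat <<EOF >/etc/modules-load.d/k8s.conf
br_netfilter
EOF
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward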

6. Prerequisites for running kube-proxy in IPVS mode

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Load the modules:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install the ipset package:
yum install ipset -y

Install the ipvsadm management tool:
yum install ipvsadm -y
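
After the cluster is up and kube-proxy is running in IPVS mode, ipvsadm can be used to confirm that virtual servers are actually being created (a verification sketch for later, once the cluster is installed):

# List the IPVS virtual servers created by kube-proxy
ipvsadm -Ln
# Confirm the kernel modules are still loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4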

III. Install Docker

The following steps must be run on every node.

1. Add the Aliyun Docker yum repository

yum-config-manager  --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

If yum-config-manager is not available, install yum-utils first:
yum -y install yum-utils

List the Docker versions available for installation:
yum list docker-ce.x86_64  --showduplicates |sort -r

2. Install Docker

yum installs the latest version by default; for compatibility, a specific version is pinned here:

yum install -y --setopt=obsoletes=0 docker-ce-18.06.1.ce-3.el7

3. Set Docker's cgroup driver

mkdir -p /etc/docker
cat <<EOF >/etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

If daemon.json already contains other entries, remember to add a comma after the preceding line so the JSON stays valid.
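
For example, if the file already contained a registry mirror entry (the mirror URL below is only a placeholder), it would need to look like this, with the comma after the first entry:

{
"registry-mirrors": ["https://docker-mirror.example.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}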

4. Start Docker and enable it at boot

systemctl start docker && systemctl enable docker
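
Before moving on, you can quickly confirm that Docker picked up the systemd cgroup driver; if the output still shows cgroupfs, check daemon.json and restart Docker:

docker info | grep -i "cgroup driver"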

IV. Install Kubernetes with kubeadm

Steps 1-4 must be run on all nodes; steps 5-6 are run on the master only.

1. Configure the yum repository

vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
enabled=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
	https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

2. Install kubelet, kubeadm, and kubectl

yum makecache fast && yum install -y kubelet  kubeadm kubectl
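
Installing without a version pulls whatever is newest in the repository at the time; to be sure the packages match the v1.23.1 cluster built below, the versions can be pinned explicitly and the kubelet service enabled (an optional variant of the command above; the package names assume the Aliyun repository carries 1.23.1):

yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1
systemctl enable kubelet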

3. Set the kubelet cgroup driver

cat <<EOF >/etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF

4. Pull the required images

[root@master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.1
k8s.gcr.io/kube-controller-manager:v1.23.1
k8s.gcr.io/kube-scheduler:v1.23.1
k8s.gcr.io/kube-proxy:v1.23.1
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6  ## note the extra coredns path level here

vi k8s.sh
docker pull mirrorgooglecontainers/kube-apiserver:v1.23.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.23.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.23.1
docker pull mirrorgooglecontainers/kube-proxy:v1.23.1
docker pull mirrorgooglecontainers/pause:3.6
docker pull mirrorgooglecontainers/etcd:3.5.1-0
docker pull coredns/coredns:v1.8.6


docker tag mirrorgooglecontainers/kube-apiserver:v1.23.1 k8s.gcr.io/kube-apiserver:v1.23.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.23.1 k8s.gcr.io/kube-controller-manager:v1.23.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.23.1 k8s.gcr.io/kube-scheduler:v1.23.1
docker tag mirrorgooglecontainers/kube-proxy:v1.23.1  k8s.gcr.io/kube-proxy:v1.23.1
docker tag mirrorgooglecontainers/pause:3.6  k8s.gcr.io/pause:3.6
docker tag mirrorgooglecontainers/etcd:3.5.1-0  k8s.gcr.io/etcd:3.5.1-0
docker tag coredns/coredns:v1.8.6  k8s.gcr.io/coredns/coredns:v1.8.6  ## the tag also needs the extra coredns path level



docker rmi mirrorgooglecontainers/kube-apiserver:v1.23.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.23.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.23.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.23.1
docker rmi mirrorgooglecontainers/pause:3.6
docker rmi mirrorgooglecontainers/etcd:3.5.1-0
docker rmi coredns/coredns:v1.8.6
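
Once saved, the script is run once on each node and the result compared against the list from kubeadm config images list (a simple usage sketch; whether the mirror namespace still publishes these exact tags needs to be checked against the mirror itself):

bash k8s.sh
docker images | grep k8s.gcr.io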

5. Initialize the master

kubeadm init \
--kubernetes-version=v1.23.1 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.0.160


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If any of the parameters above were filled in incorrectly, run a reset before initializing again:
kubeadm reset
(kubernetes-version: the Kubernetes version to install; apiserver-advertise-address: the IP address of the master node)
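
For reference, a slightly fuller reset sketch (the -f flag skips the confirmation prompt; the rm line removes leftovers that kubeadm reset itself reminds you to clean up by hand):

kubeadm reset -f
rm -rf $HOME/.kube/config /etc/cni/net.d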

When output like the following appears, the initialization succeeded:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.160:6443 --token 57jle4.zbccddfk8d2su6pe \
	--discovery-token-ca-cert-hash sha256:556eeec7a4d742155a785b90a6efaebd95c466ad939047d4ad90ccb55dc35418

  

6. Deploy Flannel

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f  kube-flannel.yml


To reinstall, first delete the network configuration that was created:
kubectl delete -f  kube-flannel.yml
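
After applying the manifest, you can watch the Flannel pods come up before joining the workers (depending on the manifest version the DaemonSet lands in either the kube-system or the kube-flannel namespace):

kubectl get pods --all-namespaces -o wide | grep flannel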

7. Join the worker nodes to the cluster

Run the following command on node01 and node02 to join them to the cluster:

kubeadm join 192.168.0.160:6443 --token 57jle4.zbccddfk8d2su6pe \
	--discovery-token-ca-cert-hash sha256:556eeec7a4d742155a785b90a6efaebd95c466ad939047d4ad90ccb55dc35418
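
The token printed by kubeadm init is valid for 24 hours by default; if it has expired before a node joins, a fresh join command can be generated on the master:

kubeadm token create --print-join-command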

8. Check the cluster status

Run this on the master node; if every node shows Ready, the cluster is healthy:
kubectl  get node

[root@master ~]# kubectl  get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   27m   v1.23.1
node01   Ready    <none>                 17m   v1.23.1
node02   Ready    <none>                 17m   v1.23.1
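
Besides the node status, it is also worth confirming that the control-plane pods (apiserver, controller-manager, scheduler, etcd), coredns, and kube-proxy are all Running before deploying workloads:

kubectl get pods -n kube-system -o wide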

  

V. Image archive

I have packaged the relevant images and uploaded them to a cloud drive; feel free to grab them if needed.

链接: https://pan.baidu.com/s/1XKN32WXiXmp6XKlsgw-xGw 提取码: q8hs 

 

Author: 凉生墨客. The copyright of this article belongs to the author. Reposting is welcome, but without the author's consent this notice must be retained and a link to the original must be given in a prominent place on the page; otherwise the author reserves the right to pursue legal liability.
Original article: https://www.cnblogs.com/heruiguo/p/15719859.html