k8s cluster deployment

Basic configuration

# Disable the firewall
systemctl disable firewalld --now

# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# Disable swap
swapoff -a
vi /etc/fstab  # To disable swap permanently, comment out the swap line in this file
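
Instead of editing /etc/fstab by hand, the swap line can also be commented out non-interactively (a minimal sketch, assuming the fstab entry contains the word "swap"):
sed -ri '/\sswap\s/s/^/#/' /etc/fstab
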
System parameters and kernel modules

# Adjust kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
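
The two bridge sysctls above only exist once the br_netfilter kernel module is loaded, so it is worth loading it explicitly and making that persistent across reboots (a minimal sketch):

# Load br_netfilter now and on every boot
modprobe br_netfilter
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF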

Install Docker

# Install Docker
$ yum install -y docker-ce
# Enable at boot && start the service
$ systemctl enable docker && systemctl start docker
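
Note that docker-ce is not in the stock CentOS repositories; if the yum install above reports the package as missing, the Docker CE repo has to be added first (a sketch assuming the Aliyun mirror, consistent with the mirrors used below):

yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce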

Install the k8s command-line tools

# Create k8s.repo under /etc/yum.repos.d and add the following content

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# Install
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet  && systemctl start kubelet
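
The command above installs whatever version is newest in the repo; since the images pulled later are tagged v1.16.2, it may be safer to pin the packages to the same version (a sketch, assuming the Aliyun repo still carries these versions):

yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2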

Open the ports Kubernetes needs (you can check them with netstat -ntlp | grep <port>). If you disabled firewalld above, this step can be skipped.
# 6443 (API server)
firewall-cmd --zone=public --add-port=6443/tcp --permanent && firewall-cmd --reload
# 10250 (kubelet)
firewall-cmd --zone=public --add-port=10250/tcp --permanent && firewall-cmd --reload
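
A single-node control plane also listens on a few more ports (etcd, scheduler, controller-manager) that can be opened the same way if firewalld stays on; the numbers below are the Kubernetes defaults for this release, adjust them if you changed the defaults:

firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent   # etcd client/peer
firewall-cmd --zone=public --add-port=10251/tcp --permanent       # kube-scheduler
firewall-cmd --zone=public --add-port=10252/tcp --permanent       # kube-controller-manager
firewall-cmd --reload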

Pull the images

# List the images kubeadm needs
$ kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2

Most people cannot reach k8s.gcr.io directly, so below are the commands I put together (alternatively, replace k8s.gcr.io with registry.aliyuncs.com/google_containers). Afterwards you can docker rmi any images you no longer need.

Pull the images:
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.16.2
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.16.2
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.16.2
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.16.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.3.15-0
docker pull mirrorgooglecontainers/coredns:1.6.2
docker pull mirrorgooglecontainers/coredns-amd64:1.6.2
docker pull coredns/coredns:1.6.2

Tag the images:
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.16.2 k8s.gcr.io/kube-proxy:v1.16.2
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2
docker tag mirrorgooglecontainers/etcd-amd64:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0
docker tag coredns/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
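
As an alternative to pulling and re-tagging by hand, kubeadm itself can be pointed at the Aliyun mirror mentioned above via its --image-repository flag (a sketch):

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
# or pass the same flag directly to init:
kubeadm init --image-repository registry.aliyuncs.com/google_containers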

Initialize k8s

This step takes a few minutes:
kubeadm init
The following output indicates success:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.28:6443 --token 80giyd.frc8n2a1xpmly9or \
    --discovery-token-ca-cert-hash sha256:2511ccbdf3a1bb20a4acb35fc38e917c03c962210ff09bcb9691e703132dbd70
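
The bootstrap token in the join command above expires after a limited time (24 hours by default); if a worker needs to join later, a fresh join command can be printed on the master:

kubeadm token create --print-join-command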

After the installation succeeds, configure kubectl as the output above suggests.

# Configure kubectl access as instructed in the output above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Let the master schedule workloads (required for a single-node setup). Since my k8s runs on a single machine, I remove the master taint here.
kubectl taint nodes --all node-role.kubernetes.io/master-
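
To confirm the taint is actually gone, you can check the node's Taints field (testserver is the node name shown by kubectl get nodes below; use your own):

kubectl describe node testserver | grep -i taints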

Check the node with kubectl get nodes:
[root@testserver lvph]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
testserver   NotReady   master   5m52s   v1.16.2
The node shows NotReady because no network plugin has been installed yet.

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Here, however, we use Weave Net.
Run the following command: kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
[root@testserver lvph]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
[root@testserver lvph]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
testserver   Ready    master   9m43s   v1.16.2
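
As a quick smoke test of the single-node cluster, you can deploy something small and check that it gets scheduled (nginx is just an arbitrary example image, not part of the original setup):

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide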

Original article: https://www.cnblogs.com/lph970417/p/11793288.html