Deploying Kubernetes (k8s) with kubeadm

All of the following steps are performed on all three nodes unless noted otherwise.

1. Disable firewalld and SELinux

~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  && setenforce 0

2. Disable swap

~]# swapoff -a
~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
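
A quick sanity check that swap is really off (the Swap line should read all zeros):

~]# free -h | grep -i swap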

3. Set hostnames and configure the hosts file

##Run the matching command on its own node
~]# hostnamectl set-hostname k8s-master   # on the master
~]# hostnamectl set-hostname k8s-node01   # on node01
~]# hostnamectl set-hostname k8s-node02   # on node02
~]# echo -e "192.168.53.6 k8s-master
192.168.53.7 k8s-node01
192.168.53.8 k8s-node02" >>/etc/hosts
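
A quick check that name resolution works, e.g. from the master:

~]# ping -c 1 k8s-node01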

4. Tune kernel parameters

cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# avoid using swap; it is only used when the system is close to OOM
vm.swappiness=0
# do not check whether enough physical memory is available
vm.overcommit_memory=1
# do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
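
Load the br_netfilter module (required for the bridge-nf-call settings above) and apply the configuration:

~]# modprobe br_netfilter
~]# sysctl -p /etc/sysctl.d/kubernetes.conf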

5. Set the time zone and configure time synchronization

~]# timedatectl set-timezone "Asia/Shanghai"
##Time synchronization via ntpd is omitted here
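
One possible approach, using chrony instead of ntpd (a minimal sketch; the original omits the details):

~]# yum install -y chrony
~]# systemctl enable chronyd && systemctl start chronyd
~]# chronyc sources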

6. Install Docker

##Remove old versions (if any were previously installed)
~]# yum remove docker  docker-common docker-selinux docker-engine
##Install required packages: yum-utils provides yum-config-manager; the other two are needed by the devicemapper storage driver
~]# yum install -y yum-utils device-mapper-persistent-data lvm2
##Set up the yum repository
~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
##List all Docker versions available in the repositories and pick a specific one to install
~]# yum list docker-ce --showduplicates | sort -r

~]# yum -y install docker-ce-18.06.1.ce-3.el7
~]# systemctl enable docker && systemctl start docker
~]# docker version

7. Add the Kubernetes YUM repository

~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

8. Install kubeadm, kubelet, and kubectl

##List the available versions
~]# yum list kubelet --showduplicates | sort -r
~]# yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
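
Enable kubelet so it starts on boot; kubeadm will start the service itself during init:

~]# systemctl enable kubelet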

9. Add the Aliyun registry mirror (image accelerator)

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://9dxgnq38.mirror.aliyuncs.com"]
}
EOF

systemctl daemon-reload && systemctl restart docker
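
Optional: the kubeadm preflight check below warns that Docker uses the "cgroupfs" cgroup driver while "systemd" is recommended. To switch drivers, daemon.json can be extended as follows before running kubeadm init (a variant of the file above; kubeadm will configure the kubelet to match the detected driver):

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://9dxgnq38.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload && systemctl restart docker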

10. Deploy the Kubernetes master

[root@k8s-master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.53.6 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.17.0 \
> --service-cidr=10.1.0.0/16 \
> --pod-network-cidr=10.244.0.0/16

##Output##

W0412 10:12:53.420583   21203 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0412 10:12:53.420625   21203 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.53.6]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.53.6 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.53.6 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0412 10:13:27.370448   21203 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0412 10:13:27.372031   21203 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.513820 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1e814k.iqrbdc1n8riq9s6g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.53.6:6443 --token 1e814k.iqrbdc1n8riq9s6g \
    --discovery-token-ca-cert-hash sha256:20466ca6824d169e6c7c5a1f253702145e79829485f088eb3f30d9668dcb7c28

##Follow the steps from the output above##

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

##Get the sha256 hash of the CA certificate##

~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
20466ca6824d169e6c7c5a1f253702145e79829485f088eb3f30d9668dcb7c28
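
If the bootstrap token has expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

[root@k8s-master ~]# kubeadm token create --print-join-command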

11. Join the worker nodes to the cluster

##Join the nodes using the command printed by kubeadm init##

kubeadm join 192.168.53.6:6443 --token 1e814k.iqrbdc1n8riq9s6g \
    --discovery-token-ca-cert-hash sha256:20466ca6824d169e6c7c5a1f253702145e79829485f088eb3f30d9668dcb7c28
##Check the cluster
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   9m35s   v1.17.0
k8s-node01   NotReady   <none>   61s     v1.17.0
k8s-node02   NotReady   <none>   26s     v1.17.0

To run kubectl commands on other nodes, just copy $HOME/.kube/config from the master to those nodes.
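
For example, from the master (assumes SSH access to the node):

[root@k8s-master ~]# ssh k8s-node01 "mkdir -p ~/.kube"
[root@k8s-master ~]# scp $HOME/.kube/config k8s-node01:~/.kube/config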

12. Install the network plugin

1) Pull the image manually

Run this on every machine in the cluster.

# Pull the flannel Docker image manually
docker pull easzlab/flannel:v0.11.0-amd64
# Retag it to the image name referenced in the flannel manifest
docker tag easzlab/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

2) Download and apply the flannel manifest (on the master node)

wget  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml 
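
Once the flannel pods are up, the nodes should move from NotReady to Ready:

[root@k8s-master ~]# kubectl get pods -n kube-system -l app=flannel
[root@k8s-master ~]# kubectl get nodes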

13. Deploy the Dashboard

Run on k8s-master.

Fetch the dashboard's recommended.yaml:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml

Modify recommended.yaml as follows:

[root@k8s-master k8s_install]# pwd
/root/k8s_install
[root@k8s-master k8s_install]# vim recommended.yaml
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  # added: expose the Service as NodePort
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # added: pin the node port
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
......
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-rc6
          # changed from Always to IfNotPresent
          #imagePullPolicy: Always
          imagePullPolicy: IfNotPresent
......
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.3
          # added the following line
          imagePullPolicy: IfNotPresent

Start the dashboard:

kubectl apply -f recommended.yaml

Check that the dashboard pods are running:

[root@k8s-master ~]# kubectl get po -A -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
kube-system            coredns-9d85f5447-mqqp2                      1/1     Running   1          4h6m    10.244.0.3     k8s-master   <none>           <none>
kube-system            coredns-9d85f5447-n4wxb                      1/1     Running   1          4h6m    10.244.1.3     k8s-node01   <none>           <none>
kube-system            etcd-k8s-master                              1/1     Running   1          4h7m    192.168.53.6   k8s-master   <none>           <none>
kube-system            kube-apiserver-k8s-master                    1/1     Running   1          4h7m    192.168.53.6   k8s-master   <none>           <none>
kube-system            kube-controller-manager-k8s-master           1/1     Running   1          4h7m    192.168.53.6   k8s-master   <none>           <none>
kube-system            kube-flannel-ds-6588w                        1/1     Running   1          3h45m   192.168.53.7   k8s-node01   <none>           <none>
kube-system            kube-flannel-ds-tbsjr                        1/1     Running   1          3h45m   192.168.53.8   k8s-node02   <none>           <none>
kube-system            kube-flannel-ds-xmntx                        1/1     Running   1          3h45m   192.168.53.6   k8s-master   <none>           <none>
kube-system            kube-proxy-9tzfp                             1/1     Running   1          4h6m    192.168.53.6   k8s-master   <none>           <none>
kube-system            kube-proxy-mtrcg                             1/1     Running   1          3h58m   192.168.53.8   k8s-node02   <none>           <none>
kube-system            kube-proxy-nfsph                             1/1     Running   1          3h58m   192.168.53.7   k8s-node01   <none>           <none>
kube-system            kube-scheduler-k8s-master                    1/1     Running   1          4h7m    192.168.53.6   k8s-master   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-7b8b58dc8b-rrps4   1/1     Running   0          115s    10.244.2.4     k8s-node02   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-755dcb9575-dxb8h        1/1     Running   0          115s    10.244.1.4     k8s-node01   <none>           <none>

[root@k8s-master ~]# kubectl get pods -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-7b8b58dc8b-rrps4   1/1     Running   0          2m18s   10.244.2.4   k8s-node02   <none>           <none>
kubernetes-dashboard-755dcb9575-dxb8h        1/1     Running   0          2m18s   10.244.1.4   k8s-node01   <none>           <none>

Check the Service information:

[root@k8s-master ~]# kubectl get services --all-namespaces
NAMESPACE              NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.1.0.1       <none>        443/TCP                  4h8m
kube-system            kube-dns                    ClusterIP   10.1.0.10      <none>        53/UDP,53/TCP,9153/TCP   4h8m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.1.141.130   <none>        8000/TCP                 3h25m
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.1.81.27     <none>        443:30001/TCP            3h25m

Access in a browser: https://192.168.53.6:30001 (a NodePort is reachable via any node's IP)

The Dashboard login page appears (screenshot not reproduced here).

Log in with a token (a user with access to the Dashboard must be created first):

[root@k8s-master ~]# cat account.yaml
# Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
# Create ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

[root@k8s-master ~]# kubectl apply -f account.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

View the cluster role bindings:

[root@k8s-master ~]# kubectl get clusterrolebinding
NAME                                                   AGE
admin-user                                             33s
cluster-admin                                          4h15m
dashboard-admin                                        3h31m
flannel                                                3h58m
kubeadm:kubelet-bootstrap                              4h15m
kubeadm:node-autoapprove-bootstrap                     4h15m
kubeadm:node-autoapprove-certificate-rotation          4h15m
kubeadm:node-proxier                                   4h15m
kubernetes-dashboard                                   3h31m
system:basic-user                                      4h15m
system:controller:attachdetach-controller              4h15m
system:controller:certificate-controller               4h15m
system:controller:clusterrole-aggregation-controller   4h15m
......

Get the token:

[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-jqfkj
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 1e5bc391-b62f-4238-81ae-146aa1cbf434

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjFVMmVVWERvbm1ZN1Y2NWYxRHFGeF9PWTljSFI5ZGRoUDNoUVM0UE5tR00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWpxZmtqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxZTViYzM5MS1iNjJmLTQyMzgtODFhZS0xNDZhYTFjYmY0MzQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.ylZtHEs_UhBsTHGCmXib0GpZ1sJSWkbHyycXVUn_Ny_foffPGrDeGCwr6Hs8wuDXJu7rMXvqYM06-XohrAebRFp162DuIht4K7LLUmUtCued9BfKKqVqkjVhP-RN_A3iHuACpkDmvqi09zthMYRYNuXEzjkGWKiLeAMSEyygisch8JW1_1s0FhKlQL9kMD_C1mx0tHUdmHoH_sheCb6Hwib1l7eREocqi8UaZRU3QCWgcx_uX-_bXCAulNqO_UhRdG28lky7wxvf4-QgkjLaw-b3-eKFjxdDgVIS11Yuag5SErmOepVcQNB6XHn04rnYZ6k0ecUob-9CiXPRlGs-ig
ca.crt:     1025 bytes
namespace:  11 bytes

Use this token to log in; the Dashboard overview page is displayed (screenshot not reproduced here).

Original source: https://www.cnblogs.com/goujinyang/p/14648417.html