1. The five Kubernetes components
The three components on the master nodes
kube-apiserver
The single entry point to the whole cluster; it provides authentication, authorization, admission control, API registration, and discovery.
kube-controller-manager (controller manager)
Maintains cluster state (failure detection, automatic scaling, rolling updates, and so on) and drives every resource toward its desired value.
kube-scheduler
The scheduler.
Places Pods onto suitable nodes according to policy, using a predicate (filtering) phase followed by a priority (scoring) phase.
The two components on the worker nodes
kubelet
The agent that runs on every cluster node. The kubelet uses a variety of mechanisms to ensure containers are running and healthy; it does not manage containers that were not created by Kubernetes. It receives a Pod's desired state (replica count, image, network, etc.) and drives the container runtime to realize that state.
The kubelet periodically reports node status to the apiserver, which the scheduler uses as scheduling input. It also garbage-collects unused images and containers to avoid wasting disk space.
kube-proxy
kube-proxy is the network proxy that runs on every node and is one of the components that implement the Service resource; it connects the Pod network to the cluster (Service) network. The Service forwarding rules on each node are kept current by kube-proxy, which goes through the apiserver (backed by etcd) for rule updates.
Service traffic can be forwarded in three modes: userspace (abandoned; very poor performance), iptables (poor performance, convoluted rules, being phased out), and ipvs (good performance, clear forwarding rules).
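Once the cluster built below is running in ipvs mode, the rules can be inspected directly; a quick sketch, assuming the ipvsadm tool installed in section 3.1.7:
ipvsadm -Ln            # list IPVS virtual servers (Service IPs) and their real-server (Pod) backends
ipvsadm -Ln --stats    # same view with packet/byte counters, useful while generating traffic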
2. Cluster architecture
Role | IP (VIP 10.252.4.10) | Components
km1  | 10.252.4.11          | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, nginx, keepalived
km2  | 10.252.4.12          | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, nginx, keepalived
kn1  | 10.252.4.13          | kubelet, kube-proxy, docker, etcd
kn2  | 10.252.4.14          | kubelet, kube-proxy, docker
dev  | 10.252.4.2           | nfs, dns, docker
3. Building the cluster
3.1 Basic machine configuration
Perform the following steps on all five machines.
3.1.1 Set the hostnames
Set each machine's hostname: km1, km2, kn1, kn2 (and dev), for example as sketched below.
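A minimal sketch, assuming systemd-based hosts; run the matching line on each machine:
hostnamectl set-hostname km1    # on 10.252.4.11; likewise km2, kn1, kn2, dev on theirs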
3.1.2 Configure the hosts file
Edit /etc/hosts on every machine:
cat >> /etc/hosts << EOF
10.252.4.11 km1
10.252.4.12 km2
10.252.4.13 kn1
10.252.4.14 kn2
EOF
3.1.3 Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
3.1.4 Disable swap
swapoff -a
To disable it permanently, comment out the swap line in /etc/fstab, e.g. as sketched below.
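A one-liner that comments out every swap entry (a sketch; review /etc/fstab afterwards):
sed -ri 's/.*swap.*/#&/' /etc/fstab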
3.1.5 Time synchronization
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources
3.1.6 Tune kernel parameters
modprobe br_netfilter    # required for the bridge-nf-call keys below to exist
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
3.1.7 Load the ipvs modules
Note: on kernels 4.19 and later the conntrack module is nf_conntrack rather than nf_conntrack_ipv4.
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
lsmod | grep ip_vs
lsmod | grep nf_conntrack_ipv4
yum install -y ipvsadm
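modprobe does not persist across reboots; a sketch using systemd's modules-load mechanism (adjust nf_conntrack_ipv4 per the kernel note above):
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF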
3.2 Set up the working directory
Every machine needs certificate files, component configuration files, and component unit files. We generate all of them on km1 and then distribute them to the other machines. The following is done on km1.
[root@km1 ~]# mkdir -p /data/work
Note: this directory holds all generated configs and certificates; every file-generation step below runs here.
[root@km1 ~]# ssh-keygen -t rsa -b 2048
[root@km1 ~]# ssh-copy-id -i .ssh/id_rsa.pub km2
Distribute the key to the other four machines in the same way, so that km1 can log in to each of them without a password; a loop is sketched below.
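A minimal sketch over the host names from section 3.1.2 (add dev by IP if needed):
for host in km2 kn1 kn2; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub $host
done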
3.3 Build the etcd cluster
3.3.1 Set up the etcd working directories
[root@km1 ~]# mkdir -p /etc/etcd        # configuration files
[root@km1 ~]# mkdir -p /etc/etcd/ssl    # certificate files
3.3.2 Create the etcd certificates
Download the tools:
[root@km1 ~]# cd /data/work/
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*
Write the CA signing request:
[root@km1 work]# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "beijing",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
    "expiry": "175200h"
  }
}
Note:
CN: Common Name. kube-apiserver extracts this field from a client certificate as the request's user name; browsers use it to verify that a web site is legitimate.
O: Organization. kube-apiserver extracts this field as the group the requesting user belongs to.
Create the CA certificate:
[root@km1 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
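To confirm the subject and validity period of the new CA, either tool below works (a quick check, not required by the procedure):
cfssl-certinfo -cert ca.pem
openssl x509 -in ca.pem -noout -subject -dates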
Write the CA signing policy:
[root@km1 work]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "175200h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "175200h"
      }
    }
  }
}
Write the etcd csr file:
[root@km1 work]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.252.4.11",
    "10.252.4.12",
    "10.252.4.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Beijing",
    "L": "Beijing",
    "O": "k8s",
    "OU": "system"
  }]
}
Generate the certificate:
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@km1 work]# ls etcd*.pem
3.3.3 Deploy the etcd cluster
[root@km1 work]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
[root@km1 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@km1 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
[root@km1 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* km2:/usr/local/bin/
[root@km1 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* kn1:/usr/local/bin/
Create the configuration file:
[root@km1 work]# vim etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.252.4.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.252.4.11:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.252.4.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.252.4.11:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.252.4.11:2380,etcd2=https://10.252.4.12:2380,etcd3=https://10.252.4.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Note:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; "new" for a new cluster, "existing" to join one that already exists
Create the systemd unit file:
[root@km1 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Sync the files to the other etcd nodes:
[root@km1 work]# cp ca*.pem etcd*.pem /etc/etcd/ssl/
[root@km1 work]# cp etcd.conf /etc/etcd/
[root@km1 work]# cp etcd.service /usr/lib/systemd/system/
[root@km1 work]# scp ca*.pem etcd*.pem km2:/etc/etcd/ssl/
[root@km1 work]# scp etcd.conf km2:/etc/etcd/
[root@km1 work]# scp etcd.service km2:/usr/lib/systemd/system/
[root@km1 work]# scp ca*.pem etcd*.pem kn1:/etc/etcd/ssl/
[root@km1 work]# scp etcd.conf kn1:/etc/etcd/
[root@km1 work]# scp etcd.service kn1:/usr/lib/systemd/system/
Note: on km2 and kn1, change the etcd name and IPs in etcd.conf accordingly and create the directory /var/lib/etcd/default.etcd, e.g. as sketched below.
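A sketch for the km2 variant (kn1 is analogous with etcd3 and 10.252.4.13); the second sed expression only touches the *_URLS lines, so ETCD_INITIAL_CLUSTER stays intact:
ssh km2 "sed -i -e 's/etcd1\"/etcd2\"/' -e '/_URLS=/s/10.252.4.11/10.252.4.12/' /etc/etcd/etcd.conf"
ssh km2 "mkdir -p /var/lib/etcd/default.etcd"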
Start the etcd cluster (run the following on km1, km2, and kn1):
[root@km1 work]# mkdir -p /var/lib/etcd/default.etcd
[root@km1 work]# systemctl daemon-reload
[root@km1 work]# systemctl start etcd.service
Note: start all three nodes at roughly the same time; each member blocks until a quorum is reachable.
[root@km1 work]# systemctl status etcd.service
[root@km1 work]# systemctl enable etcd.service
Check the cluster state:
[root@km1 work]# etcdctl member list
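member list only shows membership; a sketch that also checks quorum health over TLS (etcd v3.4 speaks the v3 API by default):
etcdctl --endpoints=https://10.252.4.11:2379,https://10.252.4.12:2379,https://10.252.4.13:2379 \
  --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health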
3.4 Deploy the Kubernetes components
3.4.1 Download the release
[root@km1 work]# wget https://dl.k8s.io/v1.20.1/kubernetes-server-linux-amd64.tar.gz
[root@km1 work]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@km1 work]# cd kubernetes/server/bin/
[root@km1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@km1 bin]# scp kube-apiserver kube-controller-manager kube-scheduler kubectl km2:/usr/local/bin/
[root@km1 bin]# scp kubelet kube-proxy kn1:/usr/local/bin/
[root@km1 bin]# scp kubelet kube-proxy kn2:/usr/local/bin/
[root@km1 bin]# cd /data/work/
3.4.2 Create the working directories (on km1 and km2)
[root@km1 work]# mkdir -p /etc/kubernetes/       # component configuration files
[root@km1 work]# mkdir -p /etc/kubernetes/ssl    # component certificate files
[root@km1 work]# mkdir /var/log/kubernetes       # component log files
3.4.3 Deploy kube-apiserver
Write the csr request file:
[root@km1 work]# vim kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.252.4.11",
    "10.252.4.12",
    "10.252.4.13",
    "10.252.4.10",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Note:
If the hosts field is non-empty, it must list every IP or domain name authorized to use the certificate.
Because this certificate serves the whole Kubernetes master cluster, it must include all master IPs (plus the VIP) and the first IP of the service network, i.e. the first IP of the --service-cluster-ip-range passed to kube-apiserver, here 10.255.0.1.
Generate the certificate and the token file:
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
[root@km1 work]# cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
Create the configuration file:
[root@km1 work]# vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=10.252.4.11 \
  --secure-port=6443 \
  --advertise-address=10.252.4.11 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://10.252.4.11:2379,https://10.252.4.12:2379,https://10.252.4.13:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=2 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
Note:
--logtostderr: log to stderr
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: virtual IP range for Services
--enable-admission-plugins: admission-control plugins
--authorization-mode: authorization; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach the kubelets
--tls-xxx-file: apiserver https certificate
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
--service-account-signing-key-file, --service-account-issuer: mandatory on Kubernetes 1.20 and later
Create the unit file:
[root@km1 work]# vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Sync the files to both masters:
[root@km1 work]# cp ca*.pem kube-apiserver*.pem /etc/kubernetes/ssl/
[root@km1 work]# cp token.csv kube-apiserver.conf /etc/kubernetes/
[root@km1 work]# cp kube-apiserver.service /usr/lib/systemd/system/
[root@km1 work]# scp ca*.pem kube-apiserver*.pem km2:/etc/kubernetes/ssl/
[root@km1 work]# scp token.csv kube-apiserver.conf km2:/etc/kubernetes/
[root@km1 work]# scp kube-apiserver.service km2:/usr/lib/systemd/system/
Note: on km1 and km2, set the IP addresses in kube-apiserver.conf (--bind-address, --advertise-address) to the machine's own IP.
Start the service:
[root@km1 work]# systemctl daemon-reload
[root@km1 work]# systemctl start kube-apiserver
[root@km1 work]# systemctl status kube-apiserver
[root@km1 work]# systemctl enable kube-apiserver
[root@km1 work]# netstat -nltup | grep kube-api
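Beyond the port check, the health endpoint can be queried directly; a sketch that reuses the apiserver's own certificate as the client certificate (any CA-signed client cert may query /healthz):
curl --cacert /etc/kubernetes/ssl/ca.pem \
  --cert /etc/kubernetes/ssl/kube-apiserver.pem \
  --key /etc/kubernetes/ssl/kube-apiserver-key.pem \
  https://10.252.4.11:6443/healthz    # expect: ok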
3.4.4 Deploy the layer-4 reverse proxy
Install NGINX and keepalived on both km nodes:
[root@km1 work]# yum install nginx keepalived -y
[root@km1 work]# vi /etc/nginx/nginx.conf
stream {
    upstream kube-apiserver {
        server 10.252.4.11:6443 max_fails=3 fail_timeout=30s;
        server 10.252.4.12:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
Note: the stream block goes at the top level of nginx.conf, alongside (not inside) the http block.
[root@km1 work]# nginx -t
Port-check script:
[root@km1 work]# vi /etc/keepalived/check_port.sh
#!/bin/bash
CHK_PORT=$1
if [ -n "$CHK_PORT" ]; then
    PORT_PROCESS=`ss -lnt | grep $CHK_PORT | wc -l`
    if [ $PORT_PROCESS -eq 0 ]; then
        echo "Port $CHK_PORT Is Not Used, End."
        exit 1
    fi
else
    echo "Check Port Cant Be Empty!"
fi
[root@km1 work]# chmod +x /etc/keepalived/check_port.sh

keepalived configuration, master (km1):
[root@km1 work]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 10.252.4.11
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.252.4.11
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.252.4.10
    }
}

keepalived configuration, backup (km2):
! Configuration File for keepalived
global_defs {
    router_id 10.252.4.12
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 251
    mcast_src_ip 10.252.4.12
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.252.4.10
    }
}
nopreempt: non-preemptive mode (a recovered master does not take the VIP back)

Start the proxy and verify:
systemctl start nginx keepalived
systemctl enable nginx keepalived
netstat -lntup | grep nginx
ip add
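Once both keepalived instances are up, the VIP should sit on the active master and 7443 should forward to an apiserver; a sketch to verify (same client-cert trick as in 3.4.3):
ip addr show eth0 | grep 10.252.4.10    # appears on whichever node currently holds the VIP
curl --cacert /etc/kubernetes/ssl/ca.pem \
  --cert /etc/kubernetes/ssl/kube-apiserver.pem \
  --key /etc/kubernetes/ssl/kube-apiserver-key.pem \
  https://10.252.4.10:7443/healthz     # expect: ok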
3.4.5 Deploy kubectl
Write the csr request file:
[root@km1 work]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
Explanation:
kube-apiserver authorizes client requests (from kubectl, kubelet, kube-proxy, Pods, and so on) using RBAC.
kube-apiserver ships with predefined RBAC RoleBindings; for example, cluster-admin binds the group system:masters to the role cluster-admin, which grants access to every kube-apiserver API. Here O sets the certificate's group to system:masters: when kubectl presents this certificate to kube-apiserver, authentication succeeds because the certificate is CA-signed, and because the certificate's group is the pre-authorized system:masters, the client is granted access to all APIs.
Note:
This admin certificate is later used to generate the administrator's kubeconfig file. RBAC is the recommended way to control roles and permissions in Kubernetes; Kubernetes takes the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding step fails.
Generate the certificate:
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@km1 work]# cp admin*.pem /etc/kubernetes/ssl/
Create the kubeconfig file
A kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate.
Set the cluster parameters:
[root@km1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.252.4.10:7443 --kubeconfig=kube.config
Set the client credentials:
[root@km1 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
Set the context:
[root@km1 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Set the default context:
[root@km1 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
[root@km1 work]# mkdir ~/.kube
[root@km1 work]# cp kube.config ~/.kube/config
Grant the kubernetes certificate user access to the kubelet API:
[root@km1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
Check the cluster component status
With the steps above complete, kubectl can talk to kube-apiserver:
[root@km1 work]# kubectl cluster-info
[root@km1 work]# kubectl get componentstatuses
[root@km1 work]# kubectl get all --all-namespaces
Sync the kubectl configuration to the other master:
[root@km1 work]# scp -rp /root/.kube/ km2:/root/
Configure kubectl command completion:
[root@km1 work]# yum install -y bash-completion
[root@km1 work]# source /usr/share/bash-completion/bash_completion
[root@km1 work]# source <(kubectl completion bash)
[root@km1 work]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@km1 work]# source '/root/.kube/completion.bash.inc'
[root@km1 work]# source $HOME/.bash_profile
3.4.6 Deploy kube-controller-manager
Write the csr request file:
[root@km1 work]# vim kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "10.252.4.11",
    "10.252.4.12"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
Note:
The hosts list contains every kube-controller-manager node IP.
CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants exactly the permissions the component needs.
Generate the certificate:
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@km1 work]# ls kube-controller-manager*.pem
Create the kube-controller-manager kubeconfig
Set the cluster parameters:
[root@km1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.252.4.10:7443 --kubeconfig=kube-controller-manager.kubeconfig
Set the client credentials:
[root@km1 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
Set the context:
[root@km1 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Set the default context:
[root@km1 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Create the configuration file:
[root@km1 work]# vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=175200h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
Create the unit file:
[root@km1 work]# vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
同步相关文件到各个节点
[root@km1 work]# cp kube-controller-manager.pem /etc/kubernetes/ssl/
[root@km1 work]# cp kube-controller-manager.conf kube-controller-manager.kubeconfig /etc/kubernetes/
[root@km1 work]# cp kube-controller-manager.service /usr/lib/systemd/system/
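Start the service, mirroring the pattern used for the other components (run on km1 and km2):
[root@km1 work]# systemctl daemon-reload
[root@km1 work]# systemctl start kube-controller-manager
[root@km1 work]# systemctl status kube-controller-manager
[root@km1 work]# systemctl enable kube-controller-manager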
3.4.7 Deploy kube-scheduler
Write the csr request file:
[root@km1 work]# vim kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "10.252.4.11",
    "10.252.4.12"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
Note:
The hosts list contains every kube-scheduler node IP.
CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants the permissions the component needs.
Generate the certificate:
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@km1 work]# ls kube-scheduler*.pem
Create the kube-scheduler kubeconfig
Set the cluster parameters:
[root@km1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.252.4.10:7443 --kubeconfig=kube-scheduler.kubeconfig
Set the client credentials:
[root@km1 work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
Set the context:
[root@km1 work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Set the default context:
[root@km1 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Create the configuration file:
[root@km1 work]# vim kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
Create the unit file:
[root@km1 work]# vim kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Sync the files to each master:
[root@km1 work]# cp kube-scheduler.service /usr/lib/systemd/system/
[root@km1 work]# cp kube-scheduler.conf kube-scheduler.kubeconfig /etc/kubernetes/
[root@km1 work]# cp kube-scheduler*.pem /etc/kubernetes/ssl/
[root@km1 work]# scp kube-scheduler*.pem km2:/etc/kubernetes/ssl/
[root@km1 work]# scp kube-scheduler.kubeconfig kube-scheduler.conf km2:/etc/kubernetes/
[root@km1 work]# scp kube-scheduler.service km2:/usr/lib/systemd/system/
Start the service (on km1 and km2):
[root@km2 ~]# systemctl daemon-reload
[root@km2 ~]# systemctl start kube-scheduler
[root@km2 ~]# systemctl status kube-scheduler
[root@km2 ~]# systemctl enable kube-scheduler
3.4.8 Deploy kubelet (the following is done on km1)
Create kubelet-bootstrap.kubeconfig:
[root@km1 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
Set the cluster parameters:
[root@km1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.252.4.10:7443 --kubeconfig=kubelet-bootstrap.kubeconfig
Set the client credentials:
[root@km1 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
Set the context:
[root@km1 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Set the default context:
[root@km1 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Create the role binding:
[root@km1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Create the configuration file:
[root@km1 work]# vim kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "10.252.4.11",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
Create the unit file:
[root@km1 work]# vim kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=10.252.4.11:5000/maxzhu/pause:v1 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Note:
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: an empty path; the file is generated automatically on first bootstrap and later used to talk to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the infra container that holds each Pod's network
Sync the files to the worker nodes (create the target directories on kn1 and kn2 first):
[root@km1 work]# ssh kn1 "mkdir -p /etc/kubernetes/ssl"
[root@km1 work]# ssh kn2 "mkdir -p /etc/kubernetes/ssl"
[root@km1 work]# scp kubelet-bootstrap.kubeconfig kubelet.json kn1:/etc/kubernetes/
[root@km1 work]# scp kubelet.service kn1:/usr/lib/systemd/system/
[root@km1 work]# scp ca.pem kn1:/etc/kubernetes/ssl/
[root@km1 work]# scp kubelet-bootstrap.kubeconfig kubelet.json kn2:/etc/kubernetes/
[root@km1 work]# scp kubelet.service kn2:/usr/lib/systemd/system/
[root@km1 work]# scp ca.pem kn2:/etc/kubernetes/ssl/
Note: in each node's kubelet.json, change address to that node's own IP.
Start the service
On each worker node:
[root@kn1 ~]# mkdir /var/lib/kubelet
[root@kn1 ~]# mkdir /var/log/kubernetes
[root@kn1 ~]# systemctl daemon-reload
[root@kn1 ~]# systemctl enable kubelet
[root@kn1 ~]# systemctl start kubelet
[root@kn1 ~]# systemctl status kubelet
Once the kubelet service is confirmed running, go back to the master and approve the bootstrap requests. The first command lists the pending CSRs that the worker nodes have sent:
[root@km1 work]# kubectl get csr
[root@km1 work]# kubectl certificate approve node-csr-HlX3cExsZohWsu8Dd6Rp_ztFejmMdpzvti_qgxo4SAQ
[root@km1 work]# kubectl certificate approve node-csr-oykYfnH_coRF2PLJH4fOHlGznOZUBPDg5BPZXDo2wgk
[root@km1 work]# kubectl certificate approve node-csr-ytRB2fikhL6dykcekGg4BdD87o-zw9WPU44SZ1nFT50
[root@km1 work]# kubectl get csr
[root@km1 work]# kubectl get nodes
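The CSR names differ per cluster; a sketch that approves all pending requests without copying names by hand:
kubectl get csr --no-headers | awk '{print $1}' | xargs -r kubectl certificate approve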
3.4.9 Deploy kube-proxy
Write the csr request file:
[root@km1 work]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Generate the certificate:
[root@km1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@km1 work]# ls kube-proxy*.pem
Create the kubeconfig file:
[root@km1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.252.4.10:7443 --kubeconfig=kube-proxy.kubeconfig
[root@km1 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
[root@km1 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
[root@km1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Create the kube-proxy configuration file:
[root@km1 work]# vim kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.252.4.13
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.0.0/16
healthzBindAddress: 10.252.4.13:10256
kind: KubeProxyConfiguration
metricsBindAddress: 10.252.4.13:10249
mode: "ipvs"
Create the unit file:
[root@km1 work]# vim kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Sync the files to the worker nodes:
[root@km1 work]# scp kube-proxy.kubeconfig kube-proxy.yaml kn1:/etc/kubernetes/
[root@km1 work]# scp kube-proxy.service kn1:/usr/lib/systemd/system/
[root@km1 work]# scp kube-proxy.kubeconfig kube-proxy.yaml kn2:/etc/kubernetes/
[root@km1 work]# scp kube-proxy.service kn2:/usr/lib/systemd/system/
Note: in each node's kube-proxy.yaml, change the addresses to that node's actual IP; clusterCIDR must match the Pod network CIDR used by the CNI plugin.
Start the service (on each worker node):
[root@kn ~]# mkdir -p /var/lib/kube-proxy
[root@kn ~]# systemctl daemon-reload
[root@kn ~]# systemctl enable kube-proxy
[root@kn ~]# systemctl restart kube-proxy
[root@kn ~]# systemctl status kube-proxy
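To confirm that kube-proxy really came up in ipvs mode, a sketch (use each node's own metricsBindAddress from kube-proxy.yaml; the /proxyMode path is assumed to be served on the metrics port):
ipvsadm -Ln                                # Service VIPs should appear as virtual servers
curl http://10.252.4.13:10249/proxyMode    # expect: ipvs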
3.4.10 Deploy the network plugin
[root@km1 work]# wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
[root@km1 work]# kubectl apply -f calico.yaml
Once the calico Pods are running, check the nodes again; all should now be Ready:
[root@km1 work]# kubectl get pods -A
[root@km1 work]# kubectl get nodes
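As a final smoke test, a sketch that schedules a workload end to end (the deployment name and image are illustrative):
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods -o wide      # the Pod should be Running on a worker with a calico-assigned IP
kubectl get svc nginx-test    # note the NodePort, then curl <node-ip>:<nodeport> to verify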