Adding a Node to a Kubernetes Cluster

The join process

  1. kubeadm downloads the cluster information it needs from the API server. By default, the bootstrap token and the CA key hash are used to verify the authenticity of that data. The root CA can also be discovered directly via a file or URL.
  2. Once the cluster information checks out, the kubelet enters the TLS bootstrapping process: TLS bootstrap uses the shared token to authenticate temporarily with the Kubernetes API server and submit a certificate signing request (CSR); by default, the control plane signs this CSR automatically.
  3. Finally, kubeadm configures the local kubelet to connect to the API server with the definitive identity assigned to the node.
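
As a minimal sketch of what steps 1 and 2 rely on (assuming a kubeadm-built cluster with default settings), the ConfigMap that token-based discovery downloads and the CSRs submitted during TLS bootstrap can both be inspected directly:

    # The cluster-info ConfigMap in kube-public is what kubeadm fetches during discovery (step 1)
    kubectl get configmap cluster-info -n kube-public -o yaml
    # CSRs submitted by bootstrapping kubelets (step 2); auto-approved ones show Approved,Issued
    kubectl get csr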

How to do it

By default, a bootstrap token is kept for 24 hours; once that period has elapsed, the token is deleted automatically, and you have to create a new token and its hash value by hand.
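
Before minting a new token, it can be worth checking whether a valid one still exists (a quick sketch; run on the control-plane node):

    # Lists bootstrap tokens along with their TTL/expiry
    kubeadm token list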

  • Create a token

    <root@HK-K8S-CP ~># kubeadm token create
    W0815 14:54:38.564119   10867 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    hnno8a.rnijnbrhejz72t7w
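    The new token again expires after 24 hours by default. If a different lifetime is needed, kubeadm token create accepts a --ttl flag (a sketch; --ttl 0 produces a non-expiring token, which is convenient but weaker security-wise):

    # Create a token valid for two hours instead of the default 24
    kubeadm token create --ttl 2h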
  • Generate the hash value (the SHA-256 of the CA certificate's public key, passed to kubeadm join as --discovery-token-ca-cert-hash)
    <root@HK-K8S-CP ~># openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | 
          openssl dgst -sha256 -hex | sed 's/^.* //'
    bcf878114948a608e3f47f2a3824bae94b1f3f9ce9bd529d21a19f0d8af4c6cb
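    Alternatively (a shortcut sketch, assuming kubeadm v1.18 as used here), a single command can mint a fresh token and print the complete join command, hash included:

    # Prints a ready-to-run "kubeadm join <host>:<port> --token ... --discovery-token-ca-cert-hash sha256:..."
    kubeadm token create --print-join-command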
  • Get the control-plane-host:port. Before a node can join the cluster, you need the cluster's control-plane-host:port, which can be looked up as follows
    <root@HK-K8S-CP ~># kubectl describe configmaps -n kube-system  kubeadm-config
    Name:         kubeadm-config
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  
    Data
    ====
    ClusterConfiguration:
    ----
    apiServer:
      certSANs:
      - 47.57.234.123
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: 172.19.0.203:6443
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: k8s.gcr.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.18.5
    networking:
      dnsDomain: nflow.so
      podSubnet: 172.20.0.0/20
      serviceSubnet: 10.10.0.0/24
    scheduler: {}
    
    ClusterStatus:
    ----
    apiEndpoints:
      hk-k8s-cp:
        advertiseAddress: 172.19.1.119
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterStatus
    
    Events:  <none>
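    If only the endpoint itself is needed, the ClusterConfiguration can also be extracted non-interactively (a sketch using standard kubectl jsonpath; the field names match the output above):

    kubectl get cm kubeadm-config -n kube-system -o jsonpath='{.data.ClusterConfiguration}' | grep controlPlaneEndpoint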
  • Join the cluster. (Note that the control plane address below differs from the controlPlaneEndpoint retrieved above. This is a quirk of Alibaba Cloud SLB: if the CP node is registered as a backend server of the SLB when the cluster is created, you have to create a public address and list it in the certSANs parameter to work around it; when merely adding a node to an existing cluster, the CP node's internal address works fine.)
    <root@HK-K8S-WN4 ~># kubeadm join 172.19.1.119:6443 --token hnno8a.rnijnbrhejz72t7w --discovery-token-ca-cert-hash sha256:bcf878114948a608e3f47f2a3824bae94b1f3f9ce9bd529d21a19f0d8af4c6cb --v=5
    W0815 15:03:45.687465    3589 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
    I0815 15:03:45.687523    3589 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
    I0815 15:03:45.687568    3589 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
    [preflight] Running pre-flight checks
    I0815 15:03:45.687648    3589 preflight.go:90] [preflight] Running general checks
    I0815 15:03:45.687704    3589 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
    I0815 15:03:45.687757    3589 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
    I0815 15:03:45.687773    3589 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
    I0815 15:03:45.687783    3589 checks.go:102] validating the container runtime
    I0815 15:03:45.770908    3589 checks.go:128] validating if the service is enabled and active
    I0815 15:03:45.862041    3589 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
    I0815 15:03:45.862110    3589 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
    I0815 15:03:45.862136    3589 checks.go:649] validating whether swap is enabled or not
    I0815 15:03:45.862166    3589 checks.go:376] validating the presence of executable conntrack
    I0815 15:03:45.862191    3589 checks.go:376] validating the presence of executable ip
    I0815 15:03:45.862217    3589 checks.go:376] validating the presence of executable iptables
    I0815 15:03:45.862237    3589 checks.go:376] validating the presence of executable mount
    I0815 15:03:45.862260    3589 checks.go:376] validating the presence of executable nsenter
    I0815 15:03:45.862280    3589 checks.go:376] validating the presence of executable ebtables
    I0815 15:03:45.862298    3589 checks.go:376] validating the presence of executable ethtool
    I0815 15:03:45.862318    3589 checks.go:376] validating the presence of executable socat
    I0815 15:03:45.862337    3589 checks.go:376] validating the presence of executable tc
    I0815 15:03:45.862361    3589 checks.go:376] validating the presence of executable touch
    I0815 15:03:45.862381    3589 checks.go:520] running all checks
    I0815 15:03:45.955690    3589 checks.go:406] checking whether the given node name is reachable using net.LookupHost
    I0815 15:03:45.955912    3589 checks.go:618] validating kubelet version
    I0815 15:03:46.009037    3589 checks.go:128] validating if the service is enabled and active
    I0815 15:03:46.017134    3589 checks.go:201] validating availability of port 10250
    I0815 15:03:46.017322    3589 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
    I0815 15:03:46.017334    3589 checks.go:432] validating if the connectivity type is via proxy or direct
    I0815 15:03:46.017366    3589 join.go:441] [preflight] Discovering cluster-info
    I0815 15:03:46.017388    3589 token.go:78] [discovery] Created cluster-info discovery client, requesting info from "172.19.1.119:6443"
    I0815 15:03:46.025145    3589 token.go:116] [discovery] Requesting info from "172.19.1.119:6443" again to validate TLS against the pinned public key
    I0815 15:03:46.031503    3589 token.go:133] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.19.1.119:6443"
    I0815 15:03:46.031521    3589 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
    I0815 15:03:46.031535    3589 join.go:455] [preflight] Fetching init configuration
    I0815 15:03:46.031541    3589 join.go:493] [preflight] Retrieving KubeConfig objects
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    W0815 15:03:46.361210    3589 configset.go:76] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" is forbidden: User "system:bootstrap:hnno8a" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
    I0815 15:03:46.364438    3589 interface.go:400] Looking for default routes with IPv4 addresses
    I0815 15:03:46.364456    3589 interface.go:405] Default route transits interface "eth0"
    I0815 15:03:46.364566    3589 interface.go:208] Interface eth0 is up
    I0815 15:03:46.364624    3589 interface.go:256] Interface "eth0" has 1 addresses :[172.19.1.147/24].
    I0815 15:03:46.364645    3589 interface.go:223] Checking addr  172.19.1.147/24.
    I0815 15:03:46.364656    3589 interface.go:230] IP found 172.19.1.147
    I0815 15:03:46.364671    3589 interface.go:262] Found valid IPv4 address 172.19.1.147 for interface "eth0".
    I0815 15:03:46.364683    3589 interface.go:411] Found active IP 172.19.1.147 
    I0815 15:03:46.364730    3589 preflight.go:101] [preflight] Running configuration dependant checks
    I0815 15:03:46.364744    3589 controlplaneprepare.go:211] [download-certs] Skipping certs download
    I0815 15:03:46.364763    3589 kubelet.go:111] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
    I0815 15:03:46.365705    3589 kubelet.go:119] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
    I0815 15:03:46.367273    3589 kubelet.go:145] [kubelet-start] Checking for an existing Node in the cluster with name "hk-k8s-wn4" and status "Ready"
    I0815 15:03:46.368707    3589 kubelet.go:159] [kubelet-start] Stopping the kubelet
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    I0815 15:03:51.547974    3589 cert_rotation.go:137] Starting client certificate rotation controller
    I0815 15:03:51.552665    3589 kubelet.go:194] [kubelet-start] preserving the crisocket information for the node
    I0815 15:03:51.552684    3589 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "hk-k8s-wn4" as an annotation
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
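
    If the join fails partway (an expired token, a failed preflight check, and so on), the worker can be wiped back to a clean state before retrying (a sketch; kubeadm reset undoes the changes kubeadm join made on this host):

    # Run on the worker node, then re-run kubeadm join
    kubeadm reset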
  • Switch to the CP node and verify that the node joined successfully
    <root@HK-K8S-CP ~># kubectl get nodes
    NAME         STATUS   ROLES    AGE     VERSION
    hk-k8s-cp    Ready    master   152d    v1.18.5
    hk-k8s-wn1   Ready    worker   152d    v1.18.5
    hk-k8s-wn2   Ready    worker   150d    v1.18.5
    hk-k8s-wn3   Ready    worker   150d    v1.18.5
    hk-k8s-wn4   Ready    <none>   6m27s   v1.18.5
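    A freshly joined node can briefly report NotReady until the CNI plugin's pod is up on it; a quick way to check (hk-k8s-wn4 being the node joined above):

    kubectl get pods -n kube-system -o wide | grep hk-k8s-wn4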
  • Change the role of the newly joined node, as follows
    <root@HK-K8S-CP ~># kubectl label node hk-k8s-wn4  node-role.kubernetes.io/worker=worker
    node/hk-k8s-wn4 labeled
    <root@HK-K8S-CP ~># kubectl get nodes
    NAME         STATUS   ROLES    AGE     VERSION
    hk-k8s-cp    Ready    master   152d    v1.18.5
    hk-k8s-wn1   Ready    worker   152d    v1.18.5
    hk-k8s-wn2   Ready    worker   150d    v1.18.5
    hk-k8s-wn3   Ready    worker   150d    v1.18.5
    hk-k8s-wn4   Ready    worker   7m48s   v1.18.5
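    The ROLES column is derived from node-role.kubernetes.io/<role> labels, so the label above is purely cosmetic; removing it (standard kubectl syntax, with a trailing minus) reverts the column to <none>:

    kubectl label node hk-k8s-wn4 node-role.kubernetes.io/worker-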
Original article: https://www.cnblogs.com/apink/p/15143423.html