k8s v1.9.9 Binary HA Deployment (8): Deploying the Worker Nodes

Deploying the worker nodes

The kubernetes worker nodes run the following components:
- docker
- kubelet
- kube-proxy

Deploying the docker component

docker is the container runtime; it manages the container lifecycle. The kubelet interacts with docker through the Container Runtime Interface (CRI).

########################## The docker steps below are performed on each worker node and are identical on all of them ##########################

Download and distribute the docker binaries

wget https://download.docker.com/linux/static/stable/x86_64/docker-18.06.1-ce.tgz
tar -xvf docker-18.06.1-ce.tgz
cp docker/docker*  /usr/bin/

Create the systemd unit file

mkdir -p /etc/kubernetes/docker/{data,exec} /etc/docker

cat > /usr/lib/systemd/system/docker.service <<"EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
WorkingDirectory=/etc/kubernetes/docker
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS 
ExecReload=/bin/kill -s HUP $MAINPID
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
The EOF delimiter is quoted so that bash does not expand variables inside the document, such as $DOCKER_NETWORK_OPTIONS;
dockerd invokes other docker commands at runtime, such as docker-proxy, so the directory containing the docker binaries must be on the PATH;
flanneld writes its network configuration to /run/flannel/docker at startup; dockerd reads the DOCKER_NETWORK_OPTIONS environment variable from that file before starting and uses it to set the subnet of the docker0 bridge (an example of the file is sketched below);
if multiple EnvironmentFile options are specified, /run/flannel/docker must come last (to ensure docker0 uses the bip parameter generated by flanneld);
docker must run as the root user;
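
For reference, the /run/flannel/docker file written by flanneld's mk-docker-opts.sh helper usually looks like the sketch below. The subnet and MTU values are hypothetical examples, not values taken from this cluster:

DOCKER_OPT_BIP="--bip=172.17.32.1/21"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.32.1/21 --ip-masq=false --mtu=1450"

dockerd then starts with these options and configures docker0 accordingly.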

Configure the docker daemon parameters

cat > /etc/docker/daemon.json <<EOF
{
    "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn", "https://hub-mirror.c.163.com"],
    "insecure-registries": ["registry:5000"],
    "max-concurrent-uploads": 10,
    "max-concurrent-downloads": 20,
    "live-restore": true,
    "debug": true,
    "data-root": "/etc/kubernetes/docker/data",
    "exec-root": "/etc/kubernetes/docker/exec",
    "log-opts": {
      "max-size": "100m",
      "max-file": "5"
    }
}
EOF

Start the docker service

systemctl daemon-reload && systemctl enable docker 
systemctl restart docker && systemctl status docker
ip a
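
As a quick sanity check (a sketch; the actual addresses depend on the subnet flanneld assigned to this node), confirm that docker0 picked up the bip from /run/flannel/docker and that the daemon.json options took effect:

# docker0 should carry an address inside the flannel subnet
ip addr show docker0 | grep inet
# compare with the bip written by flanneld
cat /run/flannel/docker
# the data-root and registry mirrors from daemon.json should be reported here
docker info | grep -E 'Docker Root Dir|Registry'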

Deploying the kubelet component

The kubelet runs on every worker node: it receives requests sent by kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.
On startup, the kubelet automatically registers node information with kube-apiserver; its built-in cadvisor collects and monitors the node's resource usage.
For security, this document only opens the secure https port, which authenticates and authorizes requests and rejects unauthorized access (e.g. from apiserver or heapster).

Perform on the master node

cd kubernetes/server/bin
scp kubelet kube-proxy 10.0.0.13:/usr/bin/
scp kubelet kube-proxy 10.0.0.14:/usr/bin/
cd /root/work/

The token below was generated earlier when the master was deployed. The command is repeated here; look up the value it produced in that section and reuse it. The sketch below recalls the file it was written to.

$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
86bff28f07a55e60ba1e61ab765c0b55
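
For reference, that token was written on the masters into /etc/kubernetes/token.csv, the file read by kube-apiserver's --token-auth-file, in token,user,uid,group format. The line below assumes the group was set to system:kubelet-bootstrap, which is the group the auto-approve binding later in this document refers to:

86bff28f07a55e60ba1e61ab765c0b55,kubelet-bootstrap,10001,"system:kubelet-bootstrap"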

Create the kubelet bootstrapping kubeconfig file

kubectl config set-cluster kubernetes \
--certificate-authority=/root/work/ca.pem \
--embed-certs=true \
--server=https://10.0.0.252:8443 \
--kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
--token=86bff28f07a55e60ba1e61ab765c0b55 \
--kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
When --embed-certs is true, the certificate-authority certificate is embedded into the generated kubelet-bootstrap.kubeconfig file (verified in the sketch below);
no client key or certificate is specified when setting the kubelet client credentials; they are generated automatically later during TLS bootstrapping;
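
A quick way to confirm the embedding (plain kubectl behavior, nothing specific to this deployment): view the generated file and check that the cluster entry carries certificate-authority-data inline rather than a file path:

kubectl config view --kubeconfig=kubelet-bootstrap.kubeconfig --raw | grep certificate-authority-data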

Distribute the bootstrap kubeconfig file to all worker nodes

scp kubelet-bootstrap.kubeconfig 10.0.0.13:/etc/kubernetes/kubelet-bootstrap.kubeconfig
scp kubelet-bootstrap.kubeconfig 10.0.0.14:/etc/kubernetes/kubelet-bootstrap.kubeconfig

Perform on the worker nodes

mkdir /etc/kubernetes/kubelet

Create the kubelet systemd unit file

10.0.0.13:

cat > /usr/lib/systemd/system/kubelet.service <<"EOF"
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
--root-dir=/etc/kubernetes/kubelet \
--address=10.0.0.13 \
--port=10250 \
--read-only-port=0 \
--hostname-override=node-13 \
--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,Accelerators=true,DevicePlugins=true \
--rotate-certificates=true \
--cert-dir=/etc/kubernetes/cert \
--cluster-dns=10.254.0.2 \
--cluster-domain=cluster.local \
--hairpin-mode=promiscuous-bridge \
--allow-privileged=true \
--client-ca-file=/etc/kubernetes/cert/ca.pem \
--anonymous-auth=false \
--authentication-token-webhook=true \
--authorization-mode=Webhook \
--serialize-image-pulls=false \
--max-pods=250 \
--event-qps=0 \
--kube-api-qps=1000 \
--kube-api-burst=2000 \
--registry-qps=0 \
--image-pull-progress-deadline=30m \
--cadvisor-port=0 \
--logtostderr=true \
--v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
--address must not be set to 127.0.0.1, otherwise Pods will fail when calling the kubelet API, because 127.0.0.1 inside a Pod points at the Pod itself rather than at the kubelet;
--read-only-port=0: closes the http port 10255;
if --hostname-override is set, kube-proxy must be given the same value, otherwise the Node may not be found;
--bootstrap-kubeconfig points to the bootstrap kubeconfig file; the kubelet uses the username and token in that file to send a TLS Bootstrapping request to kube-apiserver;
after an administrator approves the CSR, the kubelet automatically creates the certificate and private key (kubelet-client.crt and kubelet-client.key) in the --cert-dir directory and writes them into the --kubeconfig file (the file named by --kubeconfig is created automatically);
it is recommended to specify the kube-apiserver address in the --kubeconfig file;
--cluster-dns specifies the kubedns Service IP (it can be allocated now and assigned later when the kubedns service is created); --cluster-domain specifies the domain suffix; both parameters must be set together to take effect;
--cadvisor-port=0 closes the cAdvisor web port;
--root-dir=/etc/kubernetes/kubelet: the kubelet data directory;
--max-pods=250: the number of Pods this node may run (default 110); the value must fit flanneld's SubnetLen, e.g. with "SubnetLen": 21 each node subnet holds at most 2048 Pod IPs (see the sketch after this list);
--feature-gates: enables kubelet client and server certificate rotation and GPU acceleration; the Device Plugins working directory is /var/lib/kubelet/device-plugins;
--event-qps=0, --kube-api-qps=1000, --kube-api-burst=2000, --registry-qps=0: raise the QPS limits and maximum sync rates; otherwise, on nodes with many Pods (200+), Pod status updates become extremely slow (on the order of hours) and services stay unavailable for long periods;
--image-pull-progress-deadline=30m: increases the image pull timeout;
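
A minimal sketch of the SubnetLen/--max-pods relationship, assuming flanneld was configured with the 172.17.0.0/16 network that also appears as clusterCIDR later in this document (the etcd key below is a hypothetical example; use the prefix chosen when flannel was deployed):

# A /21 per-node subnet provides 2^(32-21) = 2048 addresses,
# so --max-pods=250 fits comfortably:
etcdctl set /kubernetes/network/config \
'{"Network": "172.17.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'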

10.0.0.14:

cat > /usr/lib/systemd/system/kubelet.service <<"EOF"
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
--root-dir=/etc/kubernetes/kubelet \
--address=10.0.0.14 \
--port=10250 \
--read-only-port=0 \
--hostname-override=node-14 \
--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,Accelerators=true,DevicePlugins=true \
--rotate-certificates=true \
--cert-dir=/etc/kubernetes/cert \
--cluster-dns=10.254.0.2 \
--cluster-domain=cluster.local \
--hairpin-mode=promiscuous-bridge \
--allow-privileged=true \
--client-ca-file=/etc/kubernetes/cert/ca.pem \
--anonymous-auth=false \
--authentication-token-webhook=true \
--authorization-mode=Webhook \
--serialize-image-pulls=false \
--max-pods=250 \
--event-qps=0 \
--kube-api-qps=1000 \
--kube-api-burst=2000 \
--registry-qps=0 \
--image-pull-progress-deadline=30m \
--cadvisor-port=0 \
--logtostderr=true \
--v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF

Start the kubelet service

When the kubelet starts it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role; only then does the kubelet have permission to create certificate signing requests (certificatesigningrequests):
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
--user=kubelet-bootstrap is the username specified in /etc/kubernetes/token.csv and is also written into /etc/kubernetes/kubelet-bootstrap.kubeconfig;

Perform on the master node

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
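
To confirm the binding exists:

kubectl get clusterrolebinding kubelet-bootstrap -o yaml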

Start:

systemctl daemon-reload && systemctl enable kubelet 
systemctl restart kubelet && systemctl status kubelet
If a swap partition is enabled, the kubelet fails to start; turn swap off with sudo swapoff -a (a persistent variant is sketched below);
the working and log directories must be created first;
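
A sketch for disabling swap both immediately and across reboots (the sed pattern is a generic example; review /etc/fstab afterwards):

swapoff -a
# comment out any swap entries so they do not come back after a reboot
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab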

After startup, the kubelet uses --bootstrap-kubeconfig to send a CSR request to kube-apiserver. Once the CSR is approved, kube-controller-manager creates a TLS client certificate and private key for the kubelet and writes the file named by --kubeconfig.

Note: kube-controller-manager only creates certificates and private keys for TLS Bootstrap if its --cluster-signing-cert-file and --cluster-signing-key-file parameters are configured.

$ kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-48VqaZkxOrBNTyIWtbdAO58SGkkxfsQgF9TDEZMwLJI   14s       kubelet-bootstrap   Pending
node-csr-8PJBNkbaa0BeSOxjJU-wVmG-BynUR13kyYO17Jr01YA   13s       kubelet-bootstrap   Pending
node-csr-ImM0EB0AIwLDeOOPJJaCrvZI0ikKUyKSHzdN9L32_Kw   12s       kubelet-bootstrap   Pending
$ kubectl get nodes
No resources found.
The CSRs from the worker nodes are all Pending, so no node has registered yet (they can also be approved by hand, as sketched below);
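
The CSRs can also be approved manually instead of with the auto-approve bindings created below (the CSR name here is taken from the sample output above):

kubectl certificate approve node-csr-48VqaZkxOrBNTyIWtbdAO58SGkkxfsQgF9TDEZMwLJI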

Automatically approve kubelet CSR requests

Create three ClusterRoleBindings, used respectively to auto-approve client certificates and to renew client and server certificates:

Perform on the master node

cd /root/work

cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:kubelet-bootstrap"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:kubelet-bootstrap
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f csr-crb.yaml
After a while (1-10 minutes), the CSRs of the worker nodes have all been automatically approved:
[root@master-ha-10 work]# kubectl get csr
NAME                                                   AGE       REQUESTOR             CONDITION
csr-jql7h                                              1m        system:node:node-13   Approved,Issued
csr-qcmkg                                              1m        system:node:node-14   Approved,Issued
node-csr-IAlNZ_tego8KgY1LIgoS8W7go-twhS1392G00g7TqEE   5m        kubelet-bootstrap     Approved,Issued
node-csr-dpUB1hYajcM2vEt6cK3F16NErnGZ3dvNdLcdz-EGx_Y   5m        kubelet-bootstrap     Approved,Issued

All nodes are Ready:

[root@master-ha-10 work]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node-13   Ready     <none>    2m        v1.9.9
node-14   Ready     <none>    2m        v1.9.9
[root@node-13 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2291 Feb 14 18:45 /etc/kubernetes/kubelet.kubeconfig
[root@node-13 ~]# ls -l /etc/kubernetes/cert/|grep kubelet
-rw-r--r-- 1 root root 1042 Feb 14 18:45 kubelet-client.crt
-rw------- 1 root root  227 Feb 14 18:42 kubelet-client.key
-rw------- 1 root root 1321 Feb 14 18:45 kubelet-server-2021-02-14-18-45-44.pem
lrwxrwxrwx 1 root root   59 Feb 14 18:45 kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2021-02-14-18-45-44.pem

After startup, the kubelet listens on several ports that receive requests from kube-apiserver and other components:

[root@node-13 ~]# netstat -lnpt|grep kubelet
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      23669/kubelet       
tcp        0      0 10.0.0.13:10250         0.0.0.0:*               LISTEN      23669/kubelet

10248: the healthz http service (probed below);
10250: the https API service; note that the read-only port 10255 is not opened;
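
The healthz endpoint is plain http on localhost, so it can be probed directly:

curl http://127.0.0.1:10248/healthz
# a healthy kubelet is expected to answer: ok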

Deploying the kube-proxy component

kube-proxy runs on all worker nodes. It watches the apiserver for changes to services and endpoints and creates forwarding rules to load-balance service traffic.
This document deploys kube-proxy in iptables mode, matching the mode set in the configuration files below.

Perform on the master node

Create the certificate signing request:

cd /root/work
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
CN: specifies the certificate's User as system:kube-proxy;
the predefined ClusterRoleBinding system:node-proxier binds the User system:kube-proxy to the ClusterRole system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs;
the certificate is only used by kube-proxy as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

cfssl gencert -ca=/root/work/ca.pem \
-ca-key=/root/work/ca-key.pem \
-config=/root/work/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
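
To confirm the CN before building the kubeconfig (plain openssl):

openssl x509 -in kube-proxy.pem -noout -subject
# the subject should end with CN=system:kube-proxy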

Create and distribute the kubeconfig file

kubectl config set-cluster kubernetes \
--certificate-authority=/root/work/ca.pem \
--embed-certs=true \
--server=https://10.0.0.252:8443 \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
--client-certificate=/root/work/kube-proxy.pem \
--client-key=/root/work/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
--embed-certs=true: embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig file (without it, only the certificate file paths are written);

Distribute the kubeconfig file:

scp kube-proxy.kubeconfig 10.0.0.13:/etc/kubernetes/
scp kube-proxy.kubeconfig 10.0.0.14:/etc/kubernetes/

Perform on the worker nodes

Create the kube-proxy configuration file on each node

10.0.0.13:

cat > /etc/kubernetes/kube-proxy.config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 10.0.0.13
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.17.0.0/16
healthzBindAddress: 10.0.0.13:10256
hostnameOverride: node-13
metricsBindAddress: 10.0.0.13:10249
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
mode: "iptables"
EOF

10.0.0.14:

cat > /etc/kubernetes/kube-proxy.config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 10.0.0.14
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.17.0.0/16
healthzBindAddress: 10.0.0.14:10256
hostnameOverride: node-14
metricsBindAddress: 10.0.0.14:10249
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
mode: "iptables"
EOF

Create the working directory

mkdir /etc/kubernetes/kube-proxy

Create the kube-proxy systemd unit file; it is identical on all worker nodes.

cat > /usr/lib/systemd/system/kube-proxy.service <<"EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/etc/kubernetes/kube-proxy
ExecStart=/usr/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.config.yaml \
--logtostderr=true \
--v=2
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Start the kube-proxy service

systemctl daemon-reload && systemctl enable kube-proxy  
systemctl restart kube-proxy && systemctl status kube-proxy

Check the listening ports and metrics

[root@node-13 ~]# netstat -lnpt|grep kube-prox
tcp        0      0 10.0.0.13:10256         0.0.0.0:*               LISTEN      27296/kube-proxy    
tcp        0      0 10.0.0.13:10249         0.0.0.0:*               LISTEN      27296/kube-proxy    
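
Both ports can be probed directly (standard kube-proxy endpoints; the exact response bodies vary by version):

curl http://10.0.0.13:10256/healthz
curl -s http://10.0.0.13:10249/metrics | head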

Check the iptables rules

The rules show that every request to port 443 of the kubernetes service cluster IP (10.254.0.1) is DNATed to port 6443 of one of the kube-apiserver instances;

[root@node-13 ~]# iptables -nL -t nat|grep kubernetes:https
KUBE-MARK-MASQ  all  --  10.0.0.10            0.0.0.0/0            /* default/kubernetes:https */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: SET name: KUBE-SEP-2GFESPSWZI6F32XN side: source mask: 255.255.255.255 tcp to:10.0.0.10:6443
KUBE-MARK-MASQ  all  --  10.0.0.12            0.0.0.0/0            /* default/kubernetes:https */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: SET name: KUBE-SEP-6YD7WFCS6A3UXF6N side: source mask: 255.255.255.255 tcp to:10.0.0.12:6443
KUBE-MARK-MASQ  all  --  10.0.0.11            0.0.0.0/0            /* default/kubernetes:https */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: SET name: KUBE-SEP-XQAW2FTO4OYRZLRP side: source mask: 255.255.255.255 tcp to:10.0.0.11:6443
KUBE-MARK-MASQ  tcp  -- !172.17.0.0/16        10.254.0.1           /* default/kubernetes:https cluster IP */ tcp dpt:443
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0            10.254.0.1           /* default/kubernetes:https cluster IP */ tcp dpt:443
KUBE-SEP-2GFESPSWZI6F32XN  all  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-2GFESPSWZI6F32XN side: source mask: 255.255.255.255
KUBE-SEP-XQAW2FTO4OYRZLRP  all  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-XQAW2FTO4OYRZLRP side: source mask: 255.255.255.255
KUBE-SEP-6YD7WFCS6A3UXF6N  all  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-6YD7WFCS6A3UXF6N side: source mask: 255.255.255.255
KUBE-SEP-2GFESPSWZI6F32XN  all  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ statistic mode random probability 0.33332999982
KUBE-SEP-XQAW2FTO4OYRZLRP  all  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ statistic mode random probability 0.50000000000
KUBE-SEP-6YD7WFCS6A3UXF6N  all  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */
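
As a final sanity check (a sketch; the cluster IP is taken from the rules above), hit the service cluster IP from a worker node. An unauthenticated request should still draw a response from kube-apiserver, which proves the DNAT path works:

curl -sk https://10.254.0.1:443/
# any JSON reply from the apiserver (e.g. "Unauthorized") confirms the forwarding
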
Original article: https://www.cnblogs.com/you-xiaoqing/p/14411990.html