K8s From Getting Started to Giving Up, Part 10: Deploying kube-proxy

Abstract:
  kube-proxy implements Kubernetes Services. Concretely, it handles traffic from Pods to Services inside the cluster, and from NodePorts to Services from outside it.

In this setup the kube-proxy components all run in IPVS mode, so before kube-proxy can work, the IPVS tooling, kernel modules, and related dependencies must be prepared in advance (on every node).
## Enable IPVS
[root@k8s-master01 ~]# ansible k8s-node -m shell -a "yum install -y ipvsadm ipset conntrack"
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'modprobe -- ip_vs'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'modprobe -- ip_vs_rr'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'modprobe -- ip_vs_wrr'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'modprobe -- ip_vs_sh'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'modprobe -- nf_conntrack_ipv4'
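modprobe only loads the modules until the next reboot. Below is a minimal sketch for making them persistent, assuming systemd's modules-load.d convention; the /tmp staging path is illustrative. (Note: on kernels 4.19 and later the conntrack module is named nf_conntrack rather than nf_conntrack_ipv4.)

```shell
# Stage a modules-load.d file listing the IPVS modules (illustrative /tmp path).
cat > /tmp/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
# On each node, install it so systemd loads the modules at boot, e.g.:
#   ansible k8s-node -m copy -a 'src=/tmp/ipvs.conf dest=/etc/modules-load.d/'
# Quick sanity check: five modules listed.
wc -l < /tmp/ipvs.conf
```

After a reboot, `lsmod | grep ip_vs` should then show the ip_vs modules without any manual modprobe.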
1) Create the kube-proxy certificate signing request
The CN system:kube-proxy is significant: Kubernetes ships a built-in ClusterRoleBinding that binds this user to the system:node-proxier role, granting kube-proxy the API permissions it needs.

[root@k8s-master01 ~]# vim /opt/k8s/certs/kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "ShangHai",
            "L": "ShangHai",
            "O": "system:kube-proxy",
            "OU": "System"
        }
    ]
}
2) Generate the kube-proxy certificate and private key

[root@k8s-master01 ~]# cd /opt/k8s/certs/
[root@k8s-master01 certs]# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
 -ca-key=/etc/kubernetes/ssl/ca-key.pem \
 -config=/opt/k8s/certs/ca-config.json \
 -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/04/25 17:39:22 [INFO] generate received request
2019/04/25 17:39:22 [INFO] received CSR
2019/04/25 17:39:22 [INFO] generating key: rsa-2048
2019/04/25 17:39:22 [INFO] encoded CSR
2019/04/25 17:39:22 [INFO] signed certificate with serial number 265052874363255358468035370835573343349230196562
2019/04/25 17:39:22 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
3) Check the generated certificate files

[root@k8s-master01 certs]# ll kube-proxy*
-rw-r--r-- 1 root root 1029 Apr 25 17:39 kube-proxy.csr
-rw-r--r-- 1 root root  302 Apr 25 17:37 kube-proxy-csr.json
-rw------- 1 root root 1675 Apr 25 17:39 kube-proxy-key.pem
-rw-r--r-- 1 root root 1428 Apr 25 17:39 kube-proxy.pem
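Before distributing, it is worth confirming that the subject actually carries CN=system:kube-proxy, since that is the user name the apiserver authenticates and RBAC matches on. The check is openssl x509 -noout -subject against kube-proxy.pem; the sketch below demonstrates it on a throwaway self-signed certificate with the same subject fields (the /tmp paths are stand-ins, since the cfssl-issued file isn't available here):

```shell
# Create a throwaway cert with the same subject as kube-proxy-csr.json
# (stand-in for the real cfssl-issued kube-proxy.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/kube-proxy-key.pem -out /tmp/kube-proxy.pem \
    -subj "/C=CN/ST=ShangHai/L=ShangHai/O=system:kube-proxy/CN=system:kube-proxy" \
    2>/dev/null
# Print the subject; the CN field is the identity RBAC authorizes.
openssl x509 -in /tmp/kube-proxy.pem -noout -subject
```

On the master, run the same `openssl x509` line against /opt/k8s/certs/kube-proxy.pem instead.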
4) Distribute the certificates

[root@k8s-master01 certs]# ansible k8s-node -m copy -a 'src=/opt/k8s/certs/kube-proxy-key.pem dest=/etc/kubernetes/ssl/'
[root@k8s-master01 certs]# ansible k8s-node -m copy -a 'src=/opt/k8s/certs/kube-proxy.pem dest=/etc/kubernetes/ssl/'
5) Create the kube-proxy kubeconfig file
This is the configuration file kube-proxy uses to connect to the apiserver.
## Set the cluster parameters
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
 --certificate-authority=/etc/kubernetes/ssl/ca.pem \
 --embed-certs=true \
 --server=https://127.0.0.1:6443 \
 --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
## Set the client credentials
[root@k8s-master01 ~]# kubectl config set-credentials system:kube-proxy \
 --client-certificate=/opt/k8s/certs/kube-proxy.pem \
 --embed-certs=true \
 --client-key=/opt/k8s/certs/kube-proxy-key.pem \
 --kubeconfig=kube-proxy.kubeconfig
User "system:kube-proxy" set.
## Set the context
[root@k8s-master01 ~]# kubectl config set-context system:kube-proxy@kubernetes \
 --cluster=kubernetes \
 --user=system:kube-proxy \
 --kubeconfig=kube-proxy.kubeconfig
Context "system:kube-proxy@kubernetes" created.
## Set the default context
[root@k8s-master01 ~]# kubectl config use-context system:kube-proxy@kubernetes --kubeconfig=kube-proxy.kubeconfig
Switched to context "system:kube-proxy@kubernetes".

## Distribute the kubeconfig to the nodes
[root@k8s-master01 ~]# ansible k8s-node -m copy -a 'src=/root/kube-proxy.kubeconfig dest=/etc/kubernetes/config/'
6) Configure the kube-proxy parameters

[root@k8s-master01 ~]# vim /opt/k8s/cfg/kube-proxy.conf
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=0.0.0.0 \
                 --cleanup-ipvs=true \
                 --cluster-cidr=10.254.0.0/16 \
                 --hostname-override=k8s-node01 \
                 --healthz-bind-address=0.0.0.0 \
                 --healthz-port=10256 \
                 --masquerade-all=true \
                 --proxy-mode=ipvs \
                 --ipvs-min-sync-period=5s \
                 --ipvs-sync-period=5s \
                 --ipvs-scheduler=wrr \
                 --kubeconfig=/etc/kubernetes/config/kube-proxy.kubeconfig \
                 --logtostderr=true \
                 --v=2"
## Distribute the parameter file
### Change the hostname-override field to each node's own hostname first
[root@k8s-master01 ~]# ansible k8s-node -m copy -a 'src=/opt/k8s/cfg/kube-proxy.conf dest=/etc/kubernetes/config/'
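systemd expands $KUBE_PROXY_ARGS from this file at ExecStart time, and both systemd's EnvironmentFile parser and the shell treat a trailing backslash inside the double-quoted value as a line continuation. A quick way to sanity-check a staged copy is to source it in a shell and inspect the variable; the sketch below uses an illustrative /tmp path and an abbreviated argument list:

```shell
# Stage an abbreviated copy of kube-proxy.conf (illustrative).
cat > /tmp/kube-proxy.conf <<'EOF'
KUBE_PROXY_ARGS="--bind-address=0.0.0.0 \
                 --proxy-mode=ipvs \
                 --ipvs-scheduler=wrr \
                 --hostname-override=k8s-node01"
EOF
# Source it the way a shell would and confirm the mode flag survived
# the line continuations intact.
. /tmp/kube-proxy.conf
echo "$KUBE_PROXY_ARGS" | grep -o -- '--proxy-mode=ipvs'
```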
7) kube-proxy systemd service unit

[root@k8s-master01 ~]# vim /opt/k8s/unit/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
## Distribute to the nodes
[root@k8s-master01 ~]# ansible k8s-node -m copy -a 'src=/opt/k8s/unit/kube-proxy.service dest=/usr/lib/systemd/system/'
## Start the service
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'systemctl daemon-reload'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'systemctl enable kube-proxy'
[root@k8s-master01 ~]# ansible k8s-node -m shell -a 'systemctl start kube-proxy'
8) Inspect the IPVS rules

Check the IPVS state: a virtual service has been created that forwards requests for 10.254.0.1:443 to port 6443 on the three masters, 6443 being the apiserver's port.
[root@k8s-node01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 wrr
  -> 10.10.0.18:6443              Masq    1      0          0         
  -> 10.10.0.19:6443              Masq    1      0          0         
  -> 10.10.0.20:6443              Masq    1      0          0
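10.254.0.1 here is the cluster IP of the built-in kubernetes Service, and the three real servers are the masters' apiserver endpoints. For scripting similar checks, the real-server endpoints can be pulled out of ipvsadm -L -n output with awk; the sketch below runs against the captured output above, since a live ipvsadm needs a node with the rules installed:

```shell
# Captured `ipvsadm -L -n` output from above (heredoc stand-in for a live run).
cat > /tmp/ipvs.out <<'EOF'
TCP  10.254.0.1:443 wrr
  -> 10.10.0.18:6443              Masq    1      0          0
  -> 10.10.0.19:6443              Masq    1      0          0
  -> 10.10.0.20:6443              Masq    1      0          0
EOF
# Print each real-server endpoint behind the virtual service.
awk '/->/ {print $2}' /tmp/ipvs.out
```

On a live node, replace the heredoc with `ipvsadm -L -n > /tmp/ipvs.out`.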
Original post (Chinese): https://www.cnblogs.com/tchua/p/10772686.html