Deploying a Kubernetes 1.5.2 cluster (insecure mode)

Environment

Host OS: CentOS 7.3.1611
SELinux: disabled
etcd 3.1.9
flannel 0.7.1
docker 1.12.6
kubernetes 1.5.2

Install the packages

yum install etcd kubernetes kubernetes-client kubernetes-master kubernetes-node flannel docker docker-devel docker-client docker-common -y
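After the install, it is worth a quick check that the package versions match the ones listed in the environment section above:

```shell
# Report the installed versions of the main components
rpm -q etcd flannel docker kubernetes
```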

Deploy etcd

# Grab the IPv4 address of the ens33 interface (change the interface name to match your host)
IP=$(ifconfig ens33 | grep inet | grep -v inet6 | awk '{print $2}')
cat << EOF > /etc/etcd/etcd.conf
# [member]
ETCD_NAME=${HOSTNAME}
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://${IP}:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://${IP}:2379,http://localhost:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
# ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${IP}:2380"  # uncomment when deploying a multi-node etcd cluster
# ETCD_INITIAL_CLUSTER="kube-master=http://192.168.20.128:2380,kube-node1=http://192.168.20.131:2380,kube-node2=http://192.168.20.132:2380,kube-node3=http://192.168.20.134:2380,kube-node4=http://192.168.20.135:2380"  # uncomment when deploying a multi-node etcd cluster
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://${IP}:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
......
EOF

systemctl start etcd
systemctl enable etcd
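With etcd running, it is worth confirming it is healthy before layering Kubernetes on top; a quick check against the client URL advertised above:

```shell
# Cluster health as seen from the advertised client URL
etcdctl --endpoints=http://192.168.20.128:2379 cluster-health

# Members etcd currently knows about
etcdctl --endpoints=http://192.168.20.128:2379 member list
```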

kubernetes

kube-master

cat << EOF > /etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.20.128:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.17.0.0/24"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
EOF

Note: if you see an error like this in the controller-manager log:

replica_set.go:448] Sync "" failed with unable to create pods: No API token found for service account "default", retry after the token is automatically created and added to the service account

it is caused by the ServiceAccount plugin in KUBE_ADMISSION_CONTROL="...". Changing that line to

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

makes the error go away.
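Dropping ServiceAccount is the quick fix; the cleaner alternative is to give the apiserver and controller-manager a shared signing key so that tokens actually get issued. A sketch, assuming the key lives at /etc/kubernetes/serviceaccount.key (an arbitrary path, not from the original post):

```shell
# Generate an RSA key for signing service-account tokens
# (/etc/kubernetes/serviceaccount.key is an arbitrary choice)
openssl genrsa -out /etc/kubernetes/serviceaccount.key 2048

# Then point both daemons at the key:
# In /etc/kubernetes/apiserver:
#   KUBE_API_ARGS="--service-account-key-file=/etc/kubernetes/serviceaccount.key"
# In /etc/kubernetes/controller-manager:
#   KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/kubernetes/serviceaccount.key"

systemctl restart kube-apiserver kube-controller-manager
```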

node

cat << EOF > /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=${HOSTNAME}"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.20.128:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
EOF

Run the following on both kube-master and every node

Set up the log directory

mkdir -p /var/log/kubernetes
chown -R kube:kube /var/log/kubernetes

Configure the shared config file

cat << EOF > /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=false --log-dir=/var/log/kubernetes"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.20.128:8080"
EOF

KUBE_MASTER="--master=http://192.168.20.128:8080" tells the Kubernetes controller-manager, scheduler, and proxy processes the address of the apiserver.

Configure the flannel network

The flanneld service stitches together the docker networks of the nodes so that containers can reach each other across hosts.

Set the network range

Run on the etcd host:

etcdctl mk /coreos.com/network/config '{"Network":"172.16.0.0/16"}'

Verify:

etcdctl get /coreos.com/network/config
{"Network":"172.16.0.0/16"}
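Once flanneld is running on the hosts (configured below), each one leases a /24 out of this range, and the leases are visible in etcd. The exact lease key is host-dependent; the one below is only an example:

```shell
# One key per host that has acquired a flannel subnet
etcdctl --endpoints=http://192.168.20.128:2379 ls /coreos.com/network/subnets

# Inspect a lease (172.16.63.0-24 is an example key; yours will differ)
etcdctl --endpoints=http://192.168.20.128:2379 get /coreos.com/network/subnets/172.16.63.0-24
```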

Configuration file

Run on every host that needs the flannel network:

cat << EOF > /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.20.128:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
EOF

Start the services

kube-master

for SERVICES in kube-apiserver kube-controller-manager kube-scheduler flanneld
# kube-proxy, docker, etc. are only needed on the master if it also runs workloads
do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
done
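With the master daemons up, a quick sanity check against the insecure port configured earlier (8080):

```shell
# apiserver liveness probe
curl http://192.168.20.128:8080/healthz

# scheduler, controller-manager and etcd status as seen by the apiserver
kubectl -s http://192.168.20.128:8080 get componentstatuses
```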

node

for SERVICES in kube-proxy docker flanneld kubelet
do
    systemctl restart $SERVICES
    systemctl enable $SERVICES 
done
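On each node you can confirm that flanneld handed docker a subnet out of 172.16.0.0/16; the concrete values depend on the lease the host received:

```shell
# Subnet and MTU flanneld wrote for docker to pick up
cat /run/flannel/subnet.env

# flannel0 and docker0 should both sit inside 172.16.0.0/16
ip addr show flannel0
ip addr show docker0
```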

Test

Run on kube-master:

kubectl get nodes
NAME         STATUS    AGE
kube-node1   Ready     1d

If every node host appears with status Ready, the deployment succeeded.
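As a final smoke test, you can launch a throwaway deployment and watch the pod get scheduled onto a node (nginx is just an example image; on kubernetes 1.5, `kubectl run` creates a Deployment by default):

```shell
# One-replica nginx deployment
kubectl run nginx-test --image=nginx --replicas=1

# The pod should land on one of the Ready nodes
kubectl get pods -o wide

# Clean up
kubectl delete deployment nginx-test
```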

Original article: https://www.cnblogs.com/lykops/p/8263135.html