k8s Installation

https://www.kubernetes.org.cn/doc-16

https://blog.csdn.net/zjysource/article/details/52086835

https://blog.csdn.net/wucong60/article/details/81911859

https://www.kubernetes.org.cn/tags/kubeadm

https://www.katacoda.com/courses/kubernetes

The Kubernetes packages provide the following services:

  kube-apiserver,

  kube-scheduler,

  kube-controller-manager,

  kubelet,

  kube-proxy

  These services are managed by systemd, and their configuration is kept in one place: /etc/kubernetes.

  We will run these services on different hosts. The first host, centosmaster, will be the master of the Kubernetes cluster; it will run kube-apiserver, kube-controller-manager and kube-scheduler, plus etcd. The other host, fed-minion, will be a worker node and will run kubelet, kube-proxy and docker.
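
Once everything below is installed and started, a quick way to confirm that each daemon ended up on the intended host is to ask systemd directly (a minimal sketch; the service names match the yum packages used below):

# on the master (centosmaster)
for s in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    echo -n "$s: "; systemctl is-active "$s"
done

# on each node (fed-minion)
for s in kubelet kube-proxy docker flanneld; do
    echo -n "$s: "; systemctl is-active "$s"
done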

systemctl stop firewalld
systemctl disable firewalld
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd

Configure the master

yum -y install etcd docker kubernetes

Configure etcd by editing /etc/etcd/etcd.conf:

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"  # etcd data directory
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"  # modified: listen for clients on port 2379 on all network interfaces
ETCD_NAME="default"  # etcd member name
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"  # client URL advertised to clients
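
Once etcd has been started (see the service loop further down), a quick sanity check is to hit its version endpoint on the client URL configured above; 192.168.0.201 is the master address used later in these notes:

curl http://127.0.0.1:2379/version        # from the master itself
curl http://192.168.0.201:2379/version    # from another host, to confirm the 0.0.0.0 listen address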

  

Configure Kubernetes on the master node by editing /etc/kubernetes/config:

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.0.201:8080" # change to the master's address; this tells the controller-manager, scheduler and proxy processes where the apiserver is.

  

Edit /etc/kubernetes/apiserver.

These settings make the apiserver listen on port 8080 on all network interfaces and tell it where the etcd service is:

KUBE_API_ADDRESS="--address=0.0.0.0" # modified
KUBE_API_PORT="--port=8080" # added
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
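
Once kube-apiserver is running (service loop below), the settings above can be checked from the master; the /healthz endpoint and kubectl get componentstatuses are standard, and the address is the one used throughout these notes:

curl http://127.0.0.1:8080/healthz                          # should print "ok"
kubectl -s http://192.168.0.201:8080 get componentstatuses  # scheduler, controller-manager and etcd should be Healthy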

The SELinux build in this machine's kernel does not support the overlay2 graph driver.
There are two ways around this: either run a kernel that supports it, or disable SELinux in the Docker configuration with --selinux-enabled=false.

Change "--selinux-enabled" to "--selinux-enabled=false":

[root@registry lib]# cat /etc/sysconfig/docker
# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
#OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'

OPTIONS='--selinux-enabled=false --log-driver=journald --signature-verification=false --registry-mirror=https://fzhifedh.mirror.aliyuncs.com --insecure-registry=registry.sese.com'    # change "--selinux-enabled" here to "--selinux-enabled=false"
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
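
If you prefer not to edit the file by hand, the same change can be scripted; a minimal sketch, assuming the stock OPTIONS line shown above (it also touches the commented-out example line, which is harmless):

sed -i 's/--selinux-enabled /--selinux-enabled=false /' /etc/sysconfig/docker
# after docker is (re)started by the service loop below:
docker info 2>/dev/null | grep -i 'security options'   # selinux should no longer be listed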

Start the services:

for SERVICES  in etcd docker kube-apiserver kube-controller-manager kube-scheduler;  do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

We can now check with the kubectl get nodes command; of course, no node has joined the cluster yet, so the output is empty:

# kubectl get nodes
NAME              STATUS    AGE

 

etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
# run on the master: flannel on the nodes reads /atomic.io/network/config from the master's etcd to build its network tables
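
A quick check that the key landed in etcd as expected (run on the master):

etcdctl get /atomic.io/network/config
# {"Network":"172.17.0.0/16"}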

  

Configure the nodes

yum -y install flannel docker kubernetes

Edit /etc/sysconfig/flanneld to configure flannel:

FLANNEL_ETCD_ENDPOINTS="http://192.168.0.201:2379" # change to the master's address; tells flanneld where etcd is and, together with the prefix below, where the network configuration lives in etcd
FLANNEL_ETCD_PREFIX="/atomic.io/network"
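
After flanneld has been started on the node (service loop below), it should have leased a subnet out of the 172.17.0.0/16 range stored in etcd; a minimal check, assuming the default UDP backend (interface flannel0):

cat /run/flannel/subnet.env     # FLANNEL_NETWORK, FLANNEL_SUBNET, FLANNEL_MTU
ip -4 addr show flannel0        # its address should fall inside 172.17.0.0/16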

Edit /etc/kubernetes/config:

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.0.201:8080"  # change to the master's address; tells the Kubernetes controller-manager, scheduler and proxy processes where the apiserver is.

Edit /etc/kubernetes/kubelet:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.0.200" # IP of this node
KUBELET_API_SERVER="--api-servers=http://192.168.0.201:8080" # address of the master's apiserver
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

On the node, start the kube-proxy, kubelet, docker and flanneld services and check their status:

for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES 
done

On the master, the kubectl get nodes command now shows the node that has joined:

[root@localhost ~]# kubectl get nodes
NAME            STATUS    AGE
192.168.0.200   Ready     1h

Setting up a private registry:

vim /etc/pki/tls/openssl.cnf

[ v3_ca ]
subjectAltName = IP:192.168.169.125  # add this line

cd /etc/pki/tls

[root@localhost tls]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt

Generating a 4096 bit RSA private key
..................................................................................................................++
writing new private key to 'certs/domain.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:86
State or Province Name (full name) []:jiangsu
Locality Name (eg, city) [Default City]:xuzhou
Organization Name (eg, company) [Default Company Ltd]:nihao
Organizational Unit Name (eg, section) []:jishu
Common Name (eg, your name or your server's hostname) []:hostname
Email Address []:123@qq.com

After the certificate is created, two files appear in the certs directory: the certificate domain.crt and the private key domain.key.
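
The interactive questions above can also be answered in one go with -subj; a sketch with placeholder DN values (the subjectAltName still comes from the [ v3_ca ] edit in openssl.cnf, since the OpenSSL shipped with CentOS 7 has no -addext option):

cd /etc/pki/tls
openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt \
    -subj "/C=CN/ST=Jiangsu/L=Xuzhou/O=Example/OU=IT/CN=registry.sese.com"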

Install docker on 192.168.169.125:

yum -y install docker

Copy the domain.crt file generated above into the /etc/docker/certs.d/192.168.169.125:5000 directory, then restart the docker daemon (the commands below use 192.168.0.205 as the registry host):

mkdir -p /etc/docker/certs.d/192.168.0.205:5000
cp certs/domain.crt /etc/docker/certs.d/192.168.0.205:5000/ca.crt
systemctl restart docker

  

[root@localhost tls]# docker run -d -p 5000:5000 --restart=always --name registry   -v `pwd`/certs:/certs   -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt   -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key   registry:2
Unable to find image 'registry:2' locally
Trying to pull repository docker.io/library/registry ... 
2: Pulling from docker.io/library/registry
c87736221ed0: Pull complete 
1cc8e0bb44df: Pull complete 
54d33bcb37f5: Pull complete 
e8afc091c171: Pull complete 
b4541f6d3db6: Pull complete 
Digest: sha256:3b00e5438ebd8835bcfa7bf5246445a6b57b9a50473e89c02ecc8e575be3ebb5
Status: Downloaded newer image for docker.io/registry:2
/usr/bin/docker-current: Error response from daemon: Invalid container name (registry  ), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed.
See '/usr/bin/docker-current run --help'.
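
The "Invalid container name (registry  )" error above almost certainly comes from stray whitespace after --name registry, most likely non-breaking spaces picked up when the command was copied from a web page. Retyping the command on one line fixes it, and the registry v2 API then serves as a smoke test (a sketch; -k skips certificate verification because we connect via 127.0.0.1 rather than the IP in the certificate's SAN):

docker rm -f registry 2>/dev/null    # clean up any half-created container
docker run -d -p 5000:5000 --restart=always --name registry \
    -v `pwd`/certs:/certs \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
    -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
    registry:2
curl -k https://127.0.0.1:5000/v2/_catalog   # {"repositories":[]} on a fresh registry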

Finally, copy domain.crt to /etc/docker/certs.d/192.168.169.125:5000/ on every node of the Kubernetes cluster and restart docker on each node; for example, on node 192.168.169.121 run:

mkdir -p /etc/docker/certs.d/192.168.169.125:5000
scp root@192.168.0.205:~/certs/domain.crt /etc/docker/certs.d/192.168.0.205:5000/ca.crt
systemctl restart docker

Basic container commands:

1. First, list all containers
# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                        PORTS               NAMES
e3274a72e8d6        tomcat              "catalina.sh run"   2 weeks ago         Exited (130) 19 minutes ago                       tomcat8080
We can see the container named "tomcat8080", and it is in a stopped (Exited) state.

Note: "docker ps" lists only the currently running containers, while "docker ps -a" lists all containers (including stopped ones).

2. Remove the "tomcat8080" container
# docker rm e3274a72e8d6
e3274a72e8d6

3. Then create a new container
# docker run --name tomcat8080 -d -p 8080:8080 tomcat
The new container is created successfully and is in the running state:
# docker ps -a

  

Setting up the Kubernetes Web UI

In this section I briefly demonstrate how to use the private Docker registry by deploying the Kubernetes Web UI (kubernetes-dashboard).

Since my Kubernetes cluster cannot pull the kubernetes-dashboard image directly from gcr.io, I downloaded the image file in advance and loaded it with docker load:

# docker load < kubernetes-dashboard-amd64_v1.1.0.tar.gz
# docker images
REPOSITORY                                        TAG                 IMAGE ID            CREATED             SIZE
registry                                          2                   c6c14b3960bd        3 days ago          33.28 MB
ubuntu                                            latest              42118e3df429        9 days ago          124.8 MB
hello-world                                       latest              c54a2cc56cbb        4 weeks ago         1.848 kB
172.28.80.11:5000/kubernetes-dashboard-amd64      v1.1.0              20b7531358be        5 weeks ago         58.52 MB
registry                                          2                   8ff6a4aae657        7 weeks ago         171.5 MB


Tag the loaded kubernetes-dashboard image for the private registry and push it:

# docker tag 20b7531358be 192.168.169.125:5000/kubernetes-dashboard-amd64
# docker push 192.168.169.125:5000/kubernetes-dashboard-amd64
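
If the push went through, the image is visible through the registry's HTTP API (the tag defaults to latest, since docker tag above did not specify one):

curl -k https://192.168.169.125:5000/v2/_catalog
curl -k https://192.168.169.125:5000/v2/kubernetes-dashboard-amd64/tags/list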


Get the kubernetes-dashboard manifest from the Kubernetes project at https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml and edit it as follows:

# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: v1.1.0
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.169.125:5000/kubernetes-dashboard-amd64
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=192.168.169.120:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard

Two things deserve particular attention: (1) the image the pod pulls is 192.168.169.125:5000/kubernetes-dashboard-amd64 from the private Docker registry; (2) the apiserver-host argument is 192.168.169.120:8080, i.e. the apiserver address of the Kubernetes master node.
After editing, save kubernetes-dashboard.yaml onto the Kubernetes master node 192.168.169.120 and create kubernetes-dashboard there with kubectl create:

# kubectl create -f kubernetes-dashboard.yaml


After creation, inspect the pod and service details:

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
default       nginx                                   1/1       Running   0          3h
kube-system   kubernetes-dashboard-4164430742-lqhcg   1/1       Running   0          2h


# kubectl describe pods/kubernetes-dashboard-4164430742-lqhcg --namespace="kube-system"
Name:        kubernetes-dashboard-4164430742-lqhcg
Namespace:    kube-system
Node:        192.168.169.124/192.168.169.124
Start Time:    Mon, 01 Aug 2016 16:12:02 +0800
Labels:        app=kubernetes-dashboard,pod-template-hash=4164430742
Status:        Running
IP:        172.17.17.3
Controllers:    ReplicaSet/kubernetes-dashboard-4164430742
Containers:
  kubernetes-dashboard:
    Container ID:    docker://40ab377c5b8a333487f251547e5de51af63570c31f9ba05fe3030a02cbb3660c
    Image:        192.168.169.125:5000/kubernetes-dashboard-amd64
    Image ID:        docker://sha256:20b7531358be693a34eafdedee2954f381a95db469457667afd4ceeb7146cd1f
    Port:        9090/TCP
    Args:
      --apiserver-host=192.168.169.120:8080
    QoS Tier:
      cpu:        BestEffort
      memory:        BestEffort
    State:        Running
      Started:        Mon, 01 Aug 2016 16:12:03 +0800
    Ready:        True
    Restart Count:    0
    Liveness:        http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment Variables:
Conditions:
  Type        Status
  Ready     True
No volumes.
No events.


# kubectl describe service/kubernetes-dashboard --namespace="kube-system"
Name:            kubernetes-dashboard
Namespace:        kube-system
Labels:            app=kubernetes-dashboard
Selector:        app=kubernetes-dashboard
Type:            NodePort
IP:            10.254.213.209
Port:            <unset>    80/TCP
NodePort:        <unset>    31482/TCP
Endpoints:        172.17.17.3:9090
Session Affinity:    None
No events.
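
Since the Service is of type NodePort, the dashboard should now be reachable on any node's IP at the allocated NodePort (31482 in the describe output above); for example:

curl -s http://192.168.169.124:31482/ | head -n 5   # should return the dashboard's HTML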

7. Publishing an nginx service

7.1 Create a pod: nginx-pod.yaml

kubectl create -f nginx-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

  

7.2 Check the pod status

[root@localhost ~]# kubectl get pods
NAME        READY     STATUS              RESTARTS   AGE
nginx-pod   0/1       ContainerCreating   0          2h

Wait about ten minutes and try again:

NAME        READY     STATUS    RESTARTS   AGE
nginx-pod   1/1       Running   0          13m

PS: This step often fails because of network problems. You can pull the image manually with docker first and then create the pod with kubectl (see the sketch below); if that still does not work, delete the pod and create it again; failing that, try rebooting the machine; if it still fails, something is wrong with the configuration.
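
A sketch of that manual workaround: pull the images on the node with docker first, then recreate the pod from the master:

# on the node
docker pull nginx
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest   # the pause image configured in /etc/kubernetes/kubelet

# on the master
kubectl delete -f nginx-pod.yaml
kubectl create -f nginx-pod.yaml
kubectl get pods -w    # watch until STATUS turns Running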

7.3 Create a ReplicationController: nginx-rc.yaml

kubectl create -f nginx-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 1
  selector:
    name: nginx-pod
  template:
    metadata:
      labels:
        name: nginx-pod
    spec:
      containers:
      - name: nginx-pod
        image: nginx
        ports:
        - containerPort: 80

7.4 Check the ReplicationController status

[root@localhost ~]# kubectl get rc
NAME       DESIRED   CURRENT   READY     AGE
nginx-rc   1         1         0         1h

7.5 Create a service: nginx-service.yaml

kubectl create -f nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
  selector:
    name: nginx-pod    

  

7.6 Check the service status

PS: The kubernetes service in the listing is a built-in system service and can be ignored (see the command below).
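
The original notes do not include the listing itself; the command is simply kubectl get svc, which should show the built-in kubernetes service alongside the nginx-service created above:

kubectl get svc
kubectl get svc nginx-service -o wide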

7.7 Test the published nginx service
Use a browser on another machine to access port 30001 on the node1 machine.

[root@localhost ~]# kubectl describe svc nginx-service
Name:			nginx-service
Namespace:		default
Labels:			<none>
Selector:		name=nginx-pod
Type:			NodePort
IP:			10.254.98.243
Port:			<unset>	80/TCP
NodePort:		<unset>	30001/TCP
Endpoints:		<none>
Session Affinity:	None
No events.
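
Note that Endpoints is <none> in the output above, which means no Ready pod currently matches the selector name=nginx-pod, so the NodePort will not answer yet. A few commands to narrow that down (a sketch):

kubectl get pods -l name=nginx-pod -o wide   # is the RC's pod Running and Ready?
kubectl get endpoints nginx-service          # should list <pod-ip>:80 once a pod is Ready
kubectl describe pods -l name=nginx-pod      # events show image pull or scheduling problems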

  

Building kubernetes-dashboard 1.8.3 from source (this attempt failed in my environment):

[root@localhost build]# sh postinstall.sh
postinstall.sh: line 19: cd: ./node_modules/wiredep: No such file or directory
postinstall.sh: line 20: ../../build/patch/wiredep/wiredep.patch: No such file or directory
postinstall.sh: line 21: cd: lib: No such file or directory
postinstall.sh: line 22: ../../../build/patch/wiredep/detect-dependencies.patch: No such file or directory
postinstall.sh: line 29: go: command not found
postinstall.sh: line 33: cd: ./.tools/: No such file or directory
Cloning into 'xtbgenerator'...
remote: Enumerating objects: 229, done.
remote: Total 229 (delta 0), reused 0 (delta 0), pack-reused 229
Receiving objects: 100% (229/229), 27.78 MiB | 45.00 KiB/s, done.
Resolving deltas: 100% (73/73), done.
Note: checking out 'd6a6c9ed0833f461508351a80bc36854bc5509b2'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b new_branch_name

HEAD is now at d6a6c9e... fix empty --js param, recompile bin/XtbGenerator.jar

[root@localhost build]# sh run-gulp-in-docker.sh
ERRO[0000] Can't add file /dashboard-1.8.3/build/xtbgenerator/.git/objects/pack/tmp_pack_eZzn8x to tar: archive/tar: write too long
Sending build context to Docker daemon 43.7 MB
Step 1/6 : FROM golang
Trying to pull repository docker.io/library/golang ...
latest: Pulling from docker.io/library/golang
e79bb959ec00: Already exists
d4b7902036fe: Already exists
1b2a72d4e030: Already exists
d54db43011fd: Pull complete
963c818ebafc: Pull complete
2c6333e9b74a: Pull complete
3b0c71504fac: Pull complete
Digest: sha256:62538d25400afa368551fdeebbeed63f37a388327037438199cdf60b7f465639
Status: Downloaded newer image for docker.io/golang:latest
---> 213fe73a3852
Step 2/6 : RUN curl -sL https://deb.nodesource.com/setup_9.x | bash - && apt-get install -y --no-install-recommends openjdk-8-jre nodejs patch && rm -rf /var/lib/apt/lists/* && apt-get clean
---> Running in ef39b4003488

nsenter: could not ensure we are a cloned binary: Invalid argument
container_linux.go:247: starting container process caused "write parent: broken pipe"
oci runtime error: container_linux.go:247: starting container process caused "write parent: broken pipe"

Unable to find image 'kubernetes-dashboard-build-image:latest' locally
Trying to pull repository docker.io/library/kubernetes-dashboard-build-image ...
/usr/bin/docker-current: repository docker.io/kubernetes-dashboard-build-image not found: does not exist or no pull access.
See '/usr/bin/docker-current run --help'.

Original article: https://www.cnblogs.com/linuxws/p/10540188.html