Kubernetes Storage Volumes

To see the supported volume fields: kubectl explain pods.spec.volumes

I. Simple storage types

1) Sharing storage between two containers in a Pod (the data disappears when the Pod is deleted)

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels: 
    app: myapp
    tier: frontend
  annotations:
    magedu.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /data/web/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  volumes:
  - name: html
    emptyDir: {}
pod-vol-demo.yaml

Create the Pod

[root@master volume]# kubectl apply -f pod-vol-demo.yaml 
pod/pod-demo created

Test inside the containers

Enter one of the containers and create some data
[root@master volume]# kubectl exec -it pod-demo  -c busybox -- /bin/sh
/ # 
/ # echo $(date) >> /data/index.html
/ # echo $(date) >> /data/index.html
/ # cat /data/index.html
Sun Jun 9 03:48:49 UTC 2019
Sun Jun 9 03:49:10 UTC 2019
Enter the second container and check the data
[root@master volume]# kubectl exec -it pod-demo  -c myapp -- /bin/sh
/ # cat /data/web/html/index.html 
Sun Jun 9 03:48:49 UTC 2019
Sun Jun 9 03:49:10 UTC 2019
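As a side note, an emptyDir volume can also be backed by tmpfs and capped in size. A minimal sketch of the variant (the sizeLimit value here is an arbitrary example, not from the original manifest):

```yaml
# Memory-backed emptyDir: the data lives in tmpfs on the node and still
# disappears when the Pod is removed. With medium: Memory, usage counts
# against the container's memory limit; sizeLimit caps the volume size.
volumes:
- name: html
  emptyDir:
    medium: Memory
    sizeLimit: 64Mi
```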

2) Two containers sharing a volume: one container writes the data, the other reads and serves it (the data disappears when the Pod is deleted)

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels: 
    app: myapp
    tier: frontend
  annotations:
    magedu.com/created-by: "clusten admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date) >> /data/index.html; sleep 2; done"]
  volumes:
  - name: html
    emptyDir: {}
pod-vol-demo.yaml

Test reading the data

[root@master volume]# kubectl apply -f pod-vol-demo.yaml
pod/pod-demo created
[root@master volume]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod-demo   2/2     Running   0          10s   10.244.2.8   node01   <none>           <none>
[root@master volume]# curl 10.244.2.8
Sun Jun 9 04:09:46 UTC 2019
Sun Jun 9 04:09:48 UTC 2019
Sun Jun 9 04:09:50 UTC 2019
Sun Jun 9 04:09:52 UTC 2019
Sun Jun 9 04:09:54 UTC 2019
Sun Jun 9 04:09:56 UTC 2019
Sun Jun 9 04:09:58 UTC 2019

3) Storing the content on the node machine with hostPath

apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1
      type: DirectoryOrCreate
pod-hostpath-vol.yaml

Create the Pod

[root@master volume]# kubectl apply -f pod-hostpath-vol.yaml 
pod/pod-vol-hostpath created
[root@master volume]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod-vol-hostpath   1/1     Running   0          51s   10.244.2.9   node01   <none>           <none>

Create data inside the container; the content then appears in the volume directory on the node

[root@master volume]# kubectl exec -it pod-vol-hostpath -- /bin/sh
/ # cd /usr/share/nginx/html
/usr/share/nginx/html # echo "hello world" >> index.html
[root@master volume]# curl 10.244.2.9
hello world

Log in to the node01 server
[root@node01 ~]# cat /data/pod/volume1/index.html 
hello world

3.1) Verify whether the content survives deleting the Pod

[root@master volume]# kubectl delete -f pod-hostpath-vol.yaml 
pod "pod-vol-hostpath" deleted
[root@master volume]# kubectl get pods
No resources found.
[root@master volume]# kubectl apply -f pod-hostpath-vol.yaml 
pod/pod-vol-hostpath created
[root@master volume]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
pod-vol-hostpath   1/1     Running   0          39s   10.244.2.10   node01   <none>           <none>
[root@master volume]# curl 10.244.2.10
hello world
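hostPath supports several `type` checks besides the DirectoryOrCreate used above. A sketch of the common values (the path is the one from the manifest; the chosen type is illustrative):

```yaml
volumes:
- name: html
  hostPath:
    path: /data/pod/volume1
    # Common type values:
    #   ""                - no check before mounting (default)
    #   DirectoryOrCreate - create the directory if it is missing (used above)
    #   Directory         - the directory must already exist
    #   FileOrCreate      - create an empty file if it is missing
    #   File              - the regular file must already exist
    type: Directory
```

Note that hostPath data is tied to one node: if the Pod is later scheduled onto a different node, the previously written data will not be there.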

II. Using NFS as the storage volume

1) First verify that the NFS share works

1.1) Edit /etc/hosts for name resolution on the node machines and the PV machine

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.5 master
192.168.1.6 node01 n1
192.168.1.7 node02 n2
192.168.1.8 pv01 p1
cat /etc/hosts

1.2) Install and start NFS on the PV machine

[root@pvz01 ~]# yum install -y nfs-utils
[root@pvz01 ~]# mkdir -pv /data/volumes
mkdir: created directory ‘/data’
mkdir: created directory ‘/data/volumes’
[root@pvz01 ~]# cat /etc/exports
/data/volumes 192.168.1.0/24(rw,no_root_squash)
[root@pvz01 ~]# systemctl start nfs
[root@pvz01 ~]# ss -tnl|grep 2049
LISTEN     0      64           *:2049                     *:*                  
LISTEN     0      64          :::2049                    :::

The node machines also need nfs-utils installed; test mounting the share

[root@node01 ~]# yum install -y nfs-utils
[root@node01 ~]# mount -t nfs pv01:/data/volumes /mnt
[root@node01 ~]# df -h|grep mnt
pv01:/data/volumes        19G  1.1G   18G   6% /mnt

1.3) The mount works; unmount it

[root@node01 ~]# umount /mnt
[root@node01 ~]# df -h|grep mnt

2) Using Kubernetes to mount the NFS storage on the nodes

[root@master ~]# cat /etc/hosts|grep 192.168.1.8
192.168.1.8 pv01 p1 pv01.test.com

2.1) Edit the corresponding YAML file on the master

apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: pv01.test.com
pod-vol-nfs.yaml

2.2) Create the Pod

[root@master nfs_volume]# kubectl apply -f pod-vol-nfs.yaml 
pod/pod-vol-nfs created
[root@master nfs_volume]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
pod-vol-nfs   1/1     Running   0          10s   10.244.2.11   node01   <none>           <none>

2.3) On the PV server, create some content under the exported directory, then test access from the cluster

[root@pvz01 ~]# echo $(date) >  /data/volumes/index.html 
[root@pvz01 ~]# echo $(date) >>  /data/volumes/index.html 
[root@pvz01 ~]# echo $(date) >>  /data/volumes/index.html 

[root@master nfs_volume]# curl 10.244.2.11
Sun Jun 9 17:42:25 CST 2019
Sun Jun 9 17:42:29 CST 2019
Sun Jun 9 17:42:38 CST 2019
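Unlike hostPath, the same NFS volume can be mounted by Pods on any node. When a Pod only serves the content and never writes it, the mount can be marked read-only; a sketch of the variant:

```yaml
volumes:
- name: html
  nfs:
    path: /data/volumes
    server: pv01.test.com
    readOnly: true   # a serving-only Pod needs no write access to the share
```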

III. Using PV and PVC

1) Create the storage directories on the PV machine

[root@pvz01 volumes]# ls
index.html
[root@pvz01 volumes]# mkdir v{1,2,3,4,5}
[root@pvz01 volumes]# ls
index.html  v1  v2  v3  v4  v5
[root@pvz01 volumes]# cat /etc/exports
/data/volumes/v1 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v2 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v3 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v4 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v5 192.168.1.0/24(rw,no_root_squash)
[root@pvz01 volumes]# exportfs -arv
exporting 192.168.1.0/24:/data/volumes/v5
exporting 192.168.1.0/24:/data/volumes/v4
exporting 192.168.1.0/24:/data/volumes/v3
exporting 192.168.1.0/24:/data/volumes/v2
exporting 192.168.1.0/24:/data/volumes/v1
[root@pvz01 volumes]# showmount -e
Export list for pvz01:
/data/volumes/v5 192.168.1.0/24
/data/volumes/v4 192.168.1.0/24
/data/volumes/v3 192.168.1.0/24
/data/volumes/v2 192.168.1.0/24
/data/volumes/v1 192.168.1.0/24

2) Create PVs bound to the exports on the PV machine

[root@master ~]# kubectl explain pv    # PV field reference

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: pv01.test.com
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: pv01.test.com
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: pv01.test.com
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: pv01.test.com
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: pv01.test.com
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
pv-demo.yaml

Create the usable PVs

[root@master volume]# kubectl apply -f pv-demo.yaml 
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volume]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   2Gi        RWO,RWX        Retain           Available                                   37s
pv002   2Gi        RWO,RWX        Retain           Available                                   37s
pv003   1Gi        RWO,RWX        Retain           Available                                   37s
pv004   5Gi        RWO            Retain           Available                                   37s
pv005   5Gi        RWO,RWX        Retain           Available                                   37s
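The RECLAIM POLICY column shows Retain, the default for manually created PVs: when a claim is released, the data is kept and the PV must be cleaned up by hand before it can be reused. The policy can also be set explicitly in the PV spec; a sketch based on the pv001 definition above:

```yaml
spec:
  nfs:
    path: /data/volumes/v1
    server: pv01.test.com
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 2Gi
  # Retain (default for statically created PVs): keep the data after the
  # claim is released; an admin must clean up and make the PV available again.
  # Delete: remove the backing storage as well (requires plugin support).
  persistentVolumeReclaimPolicy: Retain
```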

3) Define a Pod that uses a PVC to claim one of the available PVs

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc
pod-vol-pvc.yaml

Create the resources and check the PV status

[root@master volume]# kubectl apply -f pod-vol-pvc.yaml 
persistentvolumeclaim/mypvc unchanged
pod/pod-vol-pvc created
[root@master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
pod-vol-pvc   1/1     Running   0          6m16s
[root@master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   2Gi        RWO,RWX        Retain           Available                                           22m
pv002   2Gi        RWO,RWX        Retain           Available                                           22m
pv003   1Gi        RWO,RWX        Retain           Available                                           22m
pv004   5Gi        RWO            Retain           Available                                           22m
pv005   5Gi        RWO,RWX        Retain           Bound       default/mypvc                           22m
[root@master ~]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv005    5Gi        RWO,RWX                       8m33s

The claim requested 4Gi with ReadWriteMany; pv005 (5Gi, RWO+RWX) is the only PV that satisfies both the size and the access mode, so the controller bound the claim to it.
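The binding is deterministic: the controller picks a PV whose capacity and access modes satisfy the request, preferring the smallest sufficient one. A rough Python sketch of that matching rule (a simplification of the real binder, for illustration only):

```python
def pick_pv(pvs, requested_gi, requested_modes):
    """Return the name of the smallest available PV that satisfies
    both the capacity request and every requested access mode."""
    candidates = [
        (size, name)
        for name, size, modes, available in pvs
        if available and size >= requested_gi and requested_modes <= modes
    ]
    return min(candidates)[1] if candidates else None

# The five PVs from pv-demo.yaml: (name, size in Gi, access modes, available)
pvs = [
    ("pv001", 2, {"RWO", "RWX"}, True),
    ("pv002", 2, {"RWO", "RWX"}, True),
    ("pv003", 1, {"RWO", "RWX"}, True),
    ("pv004", 5, {"RWO"}, True),
    ("pv005", 5, {"RWO", "RWX"}, True),
]

# mypvc asks for 4Gi with ReadWriteMany: only pv005 is big enough and RWX.
print(pick_pv(pvs, 4, {"RWX"}))  # pv005
```

A 1Gi RWX request would instead bind to pv003, the smallest PV that fits; a 6Gi request would find no match and the claim would stay Pending.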

Original article: https://www.cnblogs.com/linu/p/10993077.html