Volume Storage

Persistent Storage

As we know, a Pod is made up of containers, and once a container crashes or stops, its data is gone. This means that when running a Kubernetes cluster we have to think about storage, and volumes exist precisely to persist data for Pods. There are many volume types; the four we most commonly use are emptyDir, hostPath, NFS, and cloud storage.

emptyDir

An emptyDir volume is created when the Pod is assigned to a node; Kubernetes automatically allocates a directory on the node, so there is no need to specify a host directory. The directory starts out empty, and when the Pod is removed from the node, the data in the emptyDir is deleted permanently. emptyDir volumes are mainly used as scratch space for data an application does not need to keep.
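Beyond the default node-disk backing, an emptyDir can also be backed by memory (tmpfs) through the `medium` field. A minimal sketch, with illustrative names:

```yaml
# Hypothetical Pod using a memory-backed emptyDir (tmpfs).
# Data lives in RAM and counts against the container's memory usage.
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod            # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ['/bin/sh', '-c', 'sleep 3600']
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir:
        medium: Memory       # tmpfs instead of node disk
        sizeLimit: 64Mi      # optional cap on the volume size
```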

  • Classic example

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: test-volume-deployment
      namespace: default
      labels:
        app: test-volume-deployment
    spec:
      selector:
        matchLabels:
          app: test-volume-pod
      template:
        metadata:
          labels:
            app: test-volume-pod
        spec:
          containers:
            - name: nginx
              image: busybox
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 80
                  name: http
                - containerPort: 443
                  name: https
              volumeMounts:
                - mountPath: /data/
                  name: empty
              command: ['/bin/sh','-c','while true;do echo $(date) >> /data/index.html;sleep 2;done']
            - name: os
              imagePullPolicy: IfNotPresent
              image: busybox
              volumeMounts:
                - mountPath: /data/
                  name: empty
              command: ['/bin/sh','-c','while true;do echo busybox >> /data/index.html;sleep 2;done']
          volumes:
            - name: empty
              emptyDir: {}
    

    Check the deployment status

    [root@kubernetes-master-01 ~]# kubectl get pods -l app=test-volume-pod
    NAME                                      READY   STATUS    RESTARTS   AGE
    test-volume-deployment-66ccd5586b-s7j26   2/2     Running   0          4m13s
    

hostPath

A hostPath volume maps a file or directory from the node's filesystem into the Pod. When using a hostPath volume, you can also set the `type` field; the supported types include DirectoryOrCreate, Directory, FileOrCreate, File, Socket, CharDevice, and BlockDevice.
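As a sketch of the `type` field, mounting a single host file (rather than a directory) might look like this; the Pod name and the choice of `/etc/localtime` are illustrative:

```yaml
# Hypothetical Pod mounting one host file read-only with type: File.
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath-file    # illustrative name
spec:
  volumes:
    - name: tz
      hostPath:
        path: /etc/localtime
        type: File           # the file must already exist on the node
  containers:
    - name: myapp
      image: nginx
      volumeMounts:
        - name: tz
          mountPath: /etc/localtime
          readOnly: true
```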

  • Classic example
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath
  namespace: default
spec:
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1/
      type: DirectoryOrCreate
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  • Check the deployment result

    # Run on node-02, inside /data/pod/volume1
    [root@kubernetes-node-02 volume1]# echo "index" > index.html
    # Run on master-01
    [root@kubernetes-master-01 data]# kubectl get pods vol-hostpath -o wide
    NAME           READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
    vol-hostpath   1/1     Running   0          66s   10.244.2.34   kubernetes-node-02   <none>           <none>
    [root@kubernetes-master-01 data]# curl 10.244.2.34
    index
    

PV and PVC

A PersistentVolume (PV) is a piece of network storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins like regular volumes, but they have a lifecycle independent of any individual Pod that uses them. The API object captures the details of the storage implementation, be it NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a user's request for storage. The usage logic: in a Pod you define a volume of type PVC and specify the size you need directly; the PVC must then be bound to a matching PV, which it requests according to that definition, while the PV itself is carved out of actual storage. PV and PVC are storage abstractions provided by Kubernetes.

NFS

NFS lets us mount an existing share into our Pods. Unlike emptyDir, which is deleted together with its Pod, an NFS volume survives Pod deletion; it is merely unmounted. This means data can be prepared in advance and passed between Pods, and an NFS share can be mounted read-write by multiple Pods at the same time.

  • Install nfs-utils on every node that needs NFS

    yum install nfs-utils.x86_64 -y
    
  • Configure the NFS exports (the commands below run on the server node, master-01)

    [root@kubernetes-master-01 nfs]# mkdir -p /nfs/v{1..5}
    
    [root@kubernetes-master-01 nfs]# cat > /etc/exports <<EOF
    /nfs/v1  172.16.0.0/16(rw,no_root_squash)
    /nfs/v2  172.16.0.0/16(rw,no_root_squash)
    /nfs/v3  172.16.0.0/16(rw,no_root_squash)
    /nfs/v4  172.16.0.0/16(rw,no_root_squash)
    /nfs/v5  172.16.0.0/16(rw,no_root_squash)
    EOF
    [root@kubernetes-master-01 nfs]# exportfs -arv
    exporting 172.16.0.0/16:/nfs/v5
    exporting 172.16.0.0/16:/nfs/v4
    exporting 172.16.0.0/16:/nfs/v3
    exporting 172.16.0.0/16:/nfs/v2
    exporting 172.16.0.0/16:/nfs/v1
    [root@kubernetes-master-01 nfs]# showmount -e
    Export list for kubernetes-master-01:
    /nfs/v5 172.16.0.0/16
    /nfs/v4 172.16.0.0/16
    /nfs/v3 172.16.0.0/16
    /nfs/v2 172.16.0.0/16
    /nfs/v1 172.16.0.0/16
    
  • Create Pods that use NFS
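A Deployment consistent with the output below might look like this (the NFS server address 172.16.0.50 and export path /nfs/v1 are assumed from the setup above):

```yaml
# Hypothetical manifest for the "nfs" Deployment whose Pods appear below.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs
  template:
    metadata:
      labels:
        app: nfs
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: html
          nfs:
            server: 172.16.0.50   # assumed NFS server (see exports above)
            path: /nfs/v1         # assumed export directory
```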

    [root@kubernetes-master-01 ~]# kubectl get pods -l app=nfs
    NAME                   READY   STATUS    RESTARTS   AGE
    nfs-5f56db5995-9shkg   1/1     Running   0          24s
    nfs-5f56db5995-ht7ww   1/1     Running   0          24s
    [root@kubernetes-master-01 ~]# echo "index" > /nfs/v1/index.html
    [root@kubernetes-master-01 ~]# kubectl exec -it nfs-5f56db5995-ht7ww -- bash
    root@nfs-5f56db5995-ht7ww:/# cd /usr/share/nginx/html/
    root@nfs-5f56db5995-ht7ww:/usr/share/nginx/html# ls
    index.html
    

PV Access Modes

ReadWriteOnce (RWO): read-write, but the volume can be mounted by only a single node.
ReadOnlyMany (ROX): read-only, can be mounted by many nodes.
ReadWriteMany (RWX): read-write, can be shared by many nodes simultaneously.

Not every storage backend supports all three modes; shared (multi-node) access in particular is still uncommon, NFS being the most frequently used option. When a PVC binds to a PV, the match is usually made on two conditions: the storage size and the access mode.

PV Reclaim Policies

Retain: do not clean up; keep the volume (manual cleanup required).
Recycle: delete the data, i.e. rm -rf /thevolume/* (supported only by NFS and HostPath).
Delete: delete the underlying storage resource, e.g. an AWS EBS volume (supported only by AWS EBS, GCE PD, Azure Disk, and Cinder).

The Four PV States

Available: free and not yet bound.
Bound: already bound to a PVC.
Released: the PVC was deleted, but the reclaim policy has not yet run.
Failed: an error occurred.

Create PVs

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv001
  labels:
    app: pv001
spec:
  nfs:
    path: /nfs/v2
    server: 172.16.0.50
  accessModes:
    - "ReadWriteMany"
    - "ReadWriteOnce"
  capacity:
    storage: 2Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv002
  labels:
    app: pv002
spec:
  nfs:
    path: /nfs/v3
    server: 172.16.0.50
  accessModes:
    - "ReadWriteMany"
    - "ReadWriteOnce"
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Delete
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv003
  labels:
    app: pv003
spec:
  nfs:
    path: /nfs/v4
    server: 172.16.0.50
  accessModes:
    - "ReadWriteMany"
    - "ReadWriteOnce"
  capacity:
    storage: 10Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv004
  labels:
    app: pv004
spec:
  nfs:
    path: /nfs/v5
    server: 172.16.0.50
  accessModes:
    - "ReadWriteMany"
    - "ReadWriteOnce"
  capacity:
    storage: 20Gi

Use a PV through a PVC

[root@kubernetes-master-01 ~]# kubectl apply -f pv.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
[root@kubernetes-master-01 ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   2Gi        RWO,RWX        Retain           Available                                   10s
pv002   5Gi        RWO,RWX        Delete           Available                                   10s
pv003   10Gi       RWO,RWX        Retain           Available                                   10s
pv004   20Gi       RWO,RWX        Retain           Available                                   10s

[root@kubernetes-master-01 ~]# cat > pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: default

spec:
  accessModes:
    - "ReadWriteMany"
  resources:
    requests:
      storage: "6Gi"

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs
  template:
    metadata:
      labels:
        app: nfs
    spec:
      containers:
        - name: nginx
          imagePullPolicy: IfNotPresent
          image: nginx
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: pvc # name of the PVC
EOF
[root@kubernetes-master-01 ~]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc created
deployment.apps/nfs configured
[root@kubernetes-master-01 ~]# kubectl get pvc
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc    Bound    pv003    10Gi       RWO,RWX                       8s
[root@kubernetes-master-01 ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM         STORAGECLASS   REASON   AGE
pv001   2Gi        RWO,RWX        Retain           Available                                         4m18s
pv002   5Gi        RWO,RWX        Delete           Available                                         4m18s
pv003   10Gi       RWO,RWX        Retain           Bound       default/pvc                           4m18s
pv004   20Gi       RWO,RWX        Retain           Available                                         4m18s

StorageClass

In a large Kubernetes cluster there may be thousands of PVCs, which would force administrators to create all of those PVs by hand. Worse, as projects evolve, new PVCs keep being submitted, so administrators would have to keep adding new PVs that satisfy them, or new Pods would fail to start because their PVCs cannot bind to a PV. On top of that, the amount of space a PVC requests may not capture everything an application needs from its storage: different applications have different performance requirements, such as read/write speed or concurrency. To solve this, Kubernetes introduces another resource object: StorageClass. With StorageClass definitions, an administrator can classify storage resources into tiers such as fast storage and slow storage; Kubernetes can then tell the concrete characteristics of each kind of storage from its StorageClass description, and applications can request storage that fits their needs.

Define a StorageClass

Every StorageClass contains three fields: provisioner, parameters, and reclaimPolicy. They are used when a PersistentVolume belonging to that class needs to be provisioned dynamically.
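A minimal sketch showing the three fields (the provisioner name `example.com/nfs` is hypothetical and must match a provisioner actually running in the cluster):

```yaml
# Hypothetical StorageClass illustrating the three core fields.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc             # illustrative name
provisioner: example.com/nfs   # who creates PVs for this class
parameters:
  archiveOnDelete: "true"      # provisioner-specific parameter
reclaimPolicy: Retain          # applied to dynamically created PVs
```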

  • Create a StorageClass
[root@kubernetes-master-01 ~]# cat > sc.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage-class
  annotations: 
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes/nfs-storage-class
parameters:
  archiveOnDelete: "false" # do not archive the data when a PVC is deleted
EOF
[root@kubernetes-master-01 ~]# kubectl apply -f sc.yaml
storageclass.storage.k8s.io/nfs-storage-class created
[root@kubernetes-master-01 ~]# kubectl get sc
NAME                          PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage-class (default)   kubernetes/nfs-storage-class   Delete          Immediate           false                  5s
  • Deploy an NFS provisioner and create a PVC
[root@kubernetes-master-01 ~]# cat > sc.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "update", "delete"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/kubeapps/quay-nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: pri/nfs
            - name: NFS_SERVER
              value: 172.16.0.50
            - name: NFS_PATH
              value: /nfs/v8
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.0.50
            path: /nfs/v8
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
provisioner: pri/nfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sc
spec:
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  storageClassName: "nfs"
  resources:
    requests:
      storage: 1Gi
EOF
[root@kubernetes-master-01 ~]# kubectl apply -f sc.yaml
storageclass.storage.k8s.io/nfs created
deployment.apps/nfs-client-provisioner created
persistentvolumeclaim/pvc-sc created
[root@kubernetes-master-01 ~]# kubectl get pvc -o wide
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
pvc-sc   Bound    pvc-33a45e8b-e178-4c6b-b64c-fe28073e0a30   1Gi        RWO,RWX        nfs            24s   Filesystem
[root@kubernetes-master-01 ~]# kubectl get deployments.apps
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           118s
Original article: https://www.cnblogs.com/tcy1/p/13832485.html