k8s Persistent Volumes: Dynamically Provisioning PV & PVC with NFS

1. Environment

[root@k8s-node01 dynamic-pv]# kubectl get node -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
k8s-master   Ready    master   91d   v1.18.8   192.168.1.230   <none>        CentOS Linux 7 (Core)   5.8.2-1.el7.elrepo.x86_64   docker://19.3.12
k8s-node01   Ready    <none>   91d   v1.18.8   192.168.1.231   <none>        CentOS Linux 7 (Core)   5.8.1-1.el7.elrepo.x86_64   docker://19.3.12
k8s-node02   Ready    <none>   91d   v1.18.8   192.168.1.232   <none>        CentOS Linux 7 (Core)   5.8.1-1.el7.elrepo.x86_64   docker://19.3.12
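The NFS server side is assumed to be set up already: k8s-node01 (192.168.1.231) exports /bxy/nfsdata. For reference, a typical /etc/exports entry for this layout might look like the following sketch (not shown in the original post):

/bxy/nfsdata 192.168.1.0/24(rw,sync,no_root_squash)   # grant the cluster subnet read/write access

After editing /etc/exports, exportfs -arv reloads the export table, and every node needs the NFS client utilities installed so kubelet can mount the share.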

2. Create the StorageClass

[root@k8s-node01 dynamic-pv]# more bxy-nfs-sc.yaml 
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs  # must match the PROVISIONER_NAME env var in the provisioner Deployment below
reclaimPolicy: Retain   # this provisioner supports only Retain or Delete; even with Delete, an archived copy of the volume directory is kept
Apply & status
[root@k8s-node01 dynamic-pv]# kubectl apply -f bxy-nfs-sc.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
[root@k8s-node01 dynamic-pv]# kubectl get -f bxy-nfs-sc.yaml
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Retain          Immediate           false                  4s

volumeBindingMode is not set here, so it defaults to Immediate.
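If you would rather delay binding and provisioning until a consuming Pod is actually scheduled, the StorageClass can say so explicitly. A minimal variant sketch (the name managed-nfs-storage-wffc is made up for illustration):

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage-wffc    # hypothetical name, for illustration only
provisioner: fuseim.pri/ifs         # same provisioner as above
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer   # bind only once a Pod using the PVC is scheduled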

3. Create the RBAC resources

[root@k8s-node01 dynamic-pv]# more rbac.yaml 
# The provisioner creates PVs through kube-apiserver, so it must be authorized.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" is omitted here because ClusterRoles are not namespaced.
  name: nfs-client-provisioner
rules:
  - apiGroups: [""]       # "" refers to the core API group
    resources: ["persistentvolumes"]   # resource type
    verbs: ["get", "list", "watch", "create", "delete"]   # allowed verbs
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["list", "watch", "create", "update", "patch", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:                         # roleRef determines what the binding actually grants
  kind: ClusterRole              # kind can be Role or ClusterRole
  name: nfs-client-provisioner   # name of the Role or ClusterRole being referenced
  apiGroup: rbac.authorization.k8s.io
Apply & status
[root@k8s-node01 dynamic-pv]# kubectl apply -f rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
[root@k8s-node01 dynamic-pv]# kubectl get -f rbac.yaml 
NAME                                    SECRETS   AGE
serviceaccount/nfs-client-provisioner   1         10s

NAME                                                           CREATED AT
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner   2020-11-19T07:29:36Z

NAME                                                                      ROLE                                 AGE
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner   ClusterRole/nfs-client-provisioner   10s
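To confirm the binding actually grants what the provisioner needs, impersonation with kubectl auth can-i is handy (a verification step, not part of the original post; it should print yes once the ClusterRoleBinding is in effect):

[root@k8s-node01 dynamic-pv]# kubectl auth can-i create persistentvolumes --as=system:serviceaccount:default:nfs-client-provisioner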

4. Create the NFS provisioner Deployment

[root@k8s-node01 dynamic-pv]# more bxy-nfs-deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:        # update strategy: Recreate tears down the old Pod before starting a new one
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
    #  imagePullSecrets:
    #    - name: registry-pull-secret
      serviceAccount: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          #image: lizhenliang/nfs-client-provisioner:v2.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              # must match the provisioner field in the StorageClass
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.231  # IP of the real NFS server
            - name: NFS_PATH
              value: /bxy/nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.231  # IP of the real NFS server
            path: /bxy/nfsdata
Apply & status
[root@k8s-node01 dynamic-pv]# kubectl apply -f bxy-nfs-deploy.yaml
deployment.apps/nfs-client-provisioner created
[root@k8s-node01 dynamic-pv]# kubectl get -f bxy-nfs-deploy.yaml
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           5s
[root@k8s-node01 dynamic-pv]# kubectl get po
NAME                                      READY   STATUS    RESTARTS   AGE
bxy-local-nginx-deploy-59d9f57449-2lbrt   1/1     Running   0          78m
bxy-local-nginx-deploy-59d9f57449-xbsmj   1/1     Running   0          78m
nfs-client-provisioner-6ffd9d54c5-9htxz   1/1     Running   0          13s
tomcat-cb9688cd5-xnwqb                    1/1     Running   17         90d
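Before relying on the provisioner, it is worth confirming the export is visible from the nodes. Assuming the nfs-utils package is installed (this check is not in the original post), the export table should list /bxy/nfsdata:

[root@k8s-node01 dynamic-pv]# showmount -e 192.168.1.231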
NFS Pod description
[root@k8s-node01 dynamic-pv]# kubectl describe pods nfs-client-provisioner-6ffd9d54c5-9htxz
Name:         nfs-client-provisioner-6ffd9d54c5-9htxz
Namespace:    default
Priority:     0
Node:         k8s-node01/192.168.1.231
Start Time:   Thu, 19 Nov 2020 15:39:21 +0800
Labels:       app=nfs-client-provisioner
              pod-template-hash=6ffd9d54c5
Annotations:  <none>
Status:       Running
IP:           10.244.1.85
IPs:
  IP:  10.244.1.85
.......
.......
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  fuseim.pri/ifs
      NFS_SERVER:        192.168.1.231
      NFS_PATH:          /bxy/nfsdata
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-client-provisioner-token-ct675 (ro)
......
......
Events:
  Type    Reason     Age    From                 Message
  ----    ------     ----   ----                 -------
  Normal  Scheduled  2m19s  default-scheduler    Successfully assigned default/nfs-client-provisioner-6ffd9d54c5-9htxz to k8s-node01
  Normal  Pulled     2m18s  kubelet, k8s-node01  Container image "quay.io/external_storage/nfs-client-provisioner:latest" already present on machine
  Normal  Created    2m18s  kubelet, k8s-node01  Created container nfs-client-provisioner
  Normal  Started    2m18s  kubelet, k8s-node01  Started container nfs-client-provisioner
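Before wiring the StorageClass into a StatefulSet, the provisioner can be smoke-tested with a standalone PVC. A minimal sketch (test-claim is a hypothetical name, not from the original post):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim               # hypothetical name for this smoke test
spec:
  accessModes:
    - ReadWriteMany              # NFS also supports RWX, not just RWO
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 100Mi

Applying it should turn the PVC Bound within seconds and create a matching directory under /bxy/nfsdata; deleting the PVC cleans the test up.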

5. Create the nginx instance

[root@k8s-node01 dynamic-pv]# more bxy-nfs-nginx.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet     # StatefulSet: a stateful workload type
metadata:
  name: web
spec:
  serviceName: "nginx"   #声明它属于哪个Headless Service.  使用的是 Service 的 metadata.name
 
#当启动之后可以通过以下规则来实现pod之间的互相访问,
#statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local ,

  #其中 serviceName 为 spec.serviceName ,并且需要 Service 和 StatefulSet 必须在同一个 namespace 下。
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:          # effectively a PVC template, one claim per replica
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
Apply & status
[root@k8s-node01 dynamic-pv]# kubectl apply -f bxy-nfs-nginx.yaml
service/nginx created
statefulset.apps/web created
[root@k8s-node01 dynamic-pv]# kubectl get -f bxy-nfs-nginx.yaml
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/nginx   ClusterIP   None         <none>        80/TCP    6s

NAME                   READY   AGE
statefulset.apps/web   1/2     6s
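To see the per-Pod DNS names described in the comments above, one replica can resolve another. A hedged check (getent ships with the Debian-based nginx image; this step is not in the original post):

[root@k8s-node01 dynamic-pv]# kubectl exec web-0 -- getent hosts web-1.nginx.default.svc.cluster.local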

Once nginx is up, the PV & PVC are created and bound automatically.
PV & PVC status:
[root@k8s-node01 dynamic-pv]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS          REASON   AGE
persistentvolume/bxy-local-pv-volume                        5Gi        RWO            Delete           Bound    default/bxy-local-pvc-volume   bxy-local-sc-volume            109m
persistentvolume/pvc-0517358e-17ca-4dce-9fff-a0e15494c1a1   1Gi        RWO            Retain           Bound    default/www-web-1              managed-nfs-storage            26s
persistentvolume/pvc-a421c94c-fb55-4705-af1a-37dd07537f58   1Gi        RWO            Retain           Bound    default/www-web-0              managed-nfs-storage            31s

NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/bxy-local-pvc-volume   Bound    bxy-local-pv-volume                        5Gi        RWO            bxy-local-sc-volume   103m
persistentvolumeclaim/www-web-0              Bound    pvc-a421c94c-fb55-4705-af1a-37dd07537f58   1Gi        RWO            managed-nfs-storage   31s
persistentvolumeclaim/www-web-1              Bound    pvc-0517358e-17ca-4dce-9fff-a0e15494c1a1   1Gi        RWO            managed-nfs-storage   26s

As shown, a PV & PVC pair was dynamically created and bound for each replica.
The Retain reclaim policy comes from the StorageClass.
Pod status & access check
[root@k8s-node01 dynamic-pv]# kubectl get po
NAME                                      READY   STATUS    RESTARTS   AGE
bxy-local-nginx-deploy-59d9f57449-2lbrt   1/1     Running   0          91m
bxy-local-nginx-deploy-59d9f57449-xbsmj   1/1     Running   0          91m
nfs-client-provisioner-6ffd9d54c5-9htxz   1/1     Running   0          13m
tomcat-cb9688cd5-xnwqb                    1/1     Running   17         90d
web-0                                     1/1     Running   0          5m31s
web-1                                     1/1     Running   0          5m26s
[root@k8s-node01 dynamic-pv]# kubectl exec -it web-0 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@web-0:/# curl 127.0.0.1
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.19.4</center>
</body>
</html>


The 403 actually confirms the mount worked: the mounted directory is empty, so nginx has nothing to serve. (Had the mount failed, nginx would serve its default welcome page instead.)

6. Add a test file

Check the export directory: two new directories (mode 777) were created, one per nginx Pod, named after the namespace, PVC, and PV:
[root@k8s-node01 dynamic-pv]# cd /bxy/nfsdata/
[root@k8s-node01 nfsdata]# ls
default-www-web-0-pvc-a421c94c-fb55-4705-af1a-37dd07537f58  default-www-web-1-pvc-0517358e-17ca-4dce-9fff-a0e15494c1a1

Add a file to the web-0 directory and test access:

[root@k8s-node01 nfsdata]# ll default-www-web-0-pvc-a421c94c-fb55-4705-af1a-37dd07537f58/
total 0
[root@k8s-node01 nfsdata]# echo 'k8s nfs dynamic mount test !!!' > default-www-web-0-pvc-a421c94c-fb55-4705-af1a-37dd07537f58/index.html

Since the Service is headless (no cluster IP), exec into web-0 again and curl nginx to see the result:

[root@k8s-node01 nfsdata]# kubectl exec -it web-0 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@web-0:/# curl 127.0.0.1
k8s nfs dynamic mount test !!!

Works as expected: the dynamic provisioning and mounting succeeded.
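Because the data lives on the NFS-backed PV rather than inside the container, it should survive Pod recreation. A quick persistence check (hypothetical commands, not run in the original post):

kubectl delete pod web-0                        # the StatefulSet recreates web-0 with the same PVC
kubectl wait --for=condition=Ready pod/web-0    # wait for the replacement Pod
kubectl exec web-0 -- curl -s 127.0.0.1         # should still return the test file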

Side note:
kind: StatefulSet is a stateful workload type: Pods are started in order 0, 1, 2, 3, ... and deleted in reverse order 3, 2, 1, 0.
Let's scale the nginx instance count from 2 to 5 and watch:
[root@k8s-node01 nfsdata]# kubectl get statefulset  
NAME   READY   AGE
web    2/2     21m
[root@k8s-node01 nfsdata]# kubectl edit statefulset web
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
---
spec:
  podManagementPolicy: OrderedReady
  replicas: 2        # change this to 5
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
----
After changing spec.replicas from 2 to 5 and saving, the scale-up is triggered automatically; the full watch output follows below.
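The same scale-up can be done without an interactive editor; equivalent commands (not how the original post did it):

kubectl scale statefulset web --replicas=5
# or declaratively:
kubectl patch statefulset web -p '{"spec":{"replicas":5}}'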
[root@k8s-master ~]# kubectl get pod -w  -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          21m
web-1   1/1     Running   0          21m
web-2   0/1     Pending   0          0s
web-2   0/1     Pending   0          0s
web-2   0/1     Pending   0          2s
web-2   0/1     ContainerCreating   0          2s
web-2   1/1     Running             0          5s
web-3   0/1     Pending             0          0s
web-3   0/1     Pending             0          0s
web-3   0/1     Pending             0          2s
web-3   0/1     ContainerCreating   0          2s
web-3   1/1     Running             0          11s
web-4   0/1     Pending             0          0s
web-4   0/1     Pending             0          0s
web-4   0/1     Pending             0          2s
web-4   0/1     ContainerCreating   0          2s
web-4   1/1     Running             0          5s


You can see that web-0 and web-1 were created earlier, while web-2 through web-4 were just added.
web-3 only starts once web-2 is up and Running, and web-4 likewise waits for web-3. Deletion works the same way in reverse order, which I won't demonstrate here.
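If you do want to watch the reverse ordering, scaling back down behaves the same way; a sketch (not run in the original post):

kubectl scale statefulset web --replicas=2    # web-4 terminates first, then web-3, then web-2
kubectl get pod -w -l app=nginx               # watch the Pods go down in reverse ordinal order

Note that the PVCs www-web-2 through www-web-4 are intentionally left behind, so scaling back up reattaches the same volumes.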
Original post: https://www.cnblogs.com/mybxy/p/14006247.html