GlusterFS with K8S

I. Environment

Three machines, all running CentOS 7.4:
hanyu-210 10.20.0.210
hanyu-211 10.20.0.211
hanyu-212 10.20.0.212

Prerequisite:

A K8S cluster is already set up (1 master and 2 worker nodes).

1. Install GlusterFS (run on all three nodes)

[root@hanyu-210 k8s_glusterfs]# yum install centos-release-gluster
[root@hanyu-210 k8s_glusterfs]# yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma

2. Start and enable the GlusterFS service (run on all three nodes)

[root@hanyu-210 k8s_glusterfs]# systemctl start glusterd.service
[root@hanyu-210 k8s_glusterfs]# systemctl enable glusterd.service

3. Probe the peers (run on the K8S master node)

[root@k8s-master ~]# gluster peer probe k8s-master
peer probe: success. Probe on localhost not needed
[root@k8s-master ~]# gluster peer probe k8s-node1
peer probe: success. 
[root@k8s-master ~]# gluster peer probe k8s-node2
peer probe: success. 
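
A quick way to confirm the trusted pool is complete is `gluster peer status`; run on the master, it should list the other two nodes in the "Peer in Cluster (Connected)" state:

[root@k8s-master ~]# gluster peer status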

4. Create a replicated volume (run the mkdir on all three nodes; `force` is needed because the bricks sit on the root filesystem)

[root@k8s-master ~]# mkdir -p /opt/gfs_data
[root@k8s-master ~]# gluster volume create k8s-volume replica 3 k8s-master:/opt/gfs_data k8s-node1:/opt/gfs_data k8s-node2:/opt/gfs_data force

5. Start the replicated volume

[root@k8s-master ~]# gluster volume start k8s-volume
volume start: k8s-volume: success

6. Check the replicated volume

[root@k8s-master ~]# gluster volume status
Status of volume: k8s-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick k8s-master:/opt/gfs_data              49152     0          Y       38304
Brick k8s-node1:/opt/gfs_data               49152     0          Y       2027 
Brick k8s-node2:/opt/gfs_data               49152     0          Y       2043 
Self-heal Daemon on localhost               N/A       N/A        Y       38325
Self-heal Daemon on k8s-node1               N/A       N/A        Y       2048 
Self-heal Daemon on k8s-node2               N/A       N/A        Y       2064 
 
Task Status of Volume k8s-volume
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@k8s-master ~]# gluster volume info
 
Volume Name: k8s-volume
Type: Replicate
Volume ID: b7725bc0-bbaa-4f7a-89bd-eec36644e164
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: k8s-master:/opt/gfs_data
Brick2: k8s-node1:/opt/gfs_data
Brick3: k8s-node2:/opt/gfs_data
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

7. Verify the replicated volume is usable (run on any client machine)

yum install -y glusterfs glusterfs-fuse
mkdir -p /root/test
mount -t glusterfs k8s-master:k8s-volume /root/test
df -h
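
To confirm replication actually works, write a file through the FUSE mount and check that a copy appears in the brick directory on every node (the file name below is only an example):

echo "hello gluster" > /root/test/hello.txt
# run on each of the three nodes; every brick should hold a copy
ls -l /opt/gfs_data/hello.txt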

II. Using GlusterFS from the K8S cluster

1. Create the GlusterFS Endpoints: kubectl apply -f glusterfs-cluster.yaml

[root@k8s-master ~]# cat glusterfs-cluster.yaml 
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
  namespace: default
subsets:
- addresses:
  - ip: 192.168.48.100
  - ip: 192.168.48.120
  - ip: 192.168.48.200
  ports:
  - port: 49152
    protocol: TCP
[root@k8s-master ~]# 
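
The upstream Kubernetes GlusterFS example pairs the Endpoints with a selector-less Service of the same name so that the endpoints persist. A minimal sketch (apply it with kubectl apply -f; the port simply mirrors the one used in the Endpoints above):

apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
  namespace: default
spec:
  ports:
  - port: 49152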

2. Check the Endpoints

[root@k8s-master ~]# kubectl get ep
NAME                ENDPOINTS                                                        AGE
glusterfs-cluster   192.168.48.100:49152,192.168.48.120:49152,192.168.48.200:49152   11s
kubernetes          192.168.48.100:6443                                              81m
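
With the Endpoints in place, pods can consume the volume through the in-tree glusterfs volume plugin, most commonly via a PV/PVC pair. A minimal sketch, with the names glusterfs-pv/glusterfs-pvc and the 5Gi size chosen only for illustration; `endpoints` must match the Endpoints name and `path` the GlusterFS volume name:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  glusterfs:
    endpoints: glusterfs-cluster
    path: k8s-volume
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

A pod then mounts the claim as an ordinary persistentVolumeClaim volume.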

  

Original article: https://www.cnblogs.com/wuchangblog/p/14046287.html