GlusterFS installation and configuration

Environment:

System: CentOS release 6.6 (Final)

Hosts:

172.17.17.30 GFS-Master

172.17.17.31 GFS-C1

172.17.17.35 GFS-C2

172.17.17.33 GFS-C3

172.17.17.34 GFS-C4

# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

# yum clean all

# yum install glusterfs-server
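
To confirm the packages installed correctly, a quick sanity check (the exact version string depends on what EPEL ships at the time):

# glusterfs --version

# rpm -qa | grep glusterfs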

1. Add the FUSE loadable kernel module (LKM) to the Linux kernel:

# modprobe fuse

2. Verify that the FUSE module is loaded:

# dmesg | grep -i fuse

fuse init (API version 7.13)

3. Install the FUSE userspace packages:

# yum -y install fuse fuse-libs

# /etc/init.d/glusterd start

# chkconfig glusterd on

# chkconfig --add glusterd
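
It is worth confirming that glusterd is actually running and registered for startup (a simple check; output varies by system):

# service glusterd status

# chkconfig --list glusterd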

# Configure the cluster:

# gluster peer probe GFS-C2

# gluster peer probe GFS-C3

# gluster peer probe GFS-C4
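
After probing, verify the trusted pool from any node; each peer should be reported in the state "Peer in Cluster (Connected)":

# gluster peer status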

# Volume type descriptions:

• Distributed - Distributed volumes distribute files across the bricks in the volume. Use distributed volumes where the requirement is to scale storage and redundancy is either not important or is provided by other hardware/software layers (a distributed example for this cluster is sketched after this list). For more information, see Section 5.1, "Creating Distributed Volumes".
• Replicated - Replicated volumes replicate files across bricks in the volume. Use replicated volumes in environments where high availability and high reliability are critical. For more information, see Section 5.2, "Creating Replicated Volumes".
• Striped - Striped volumes stripe data across bricks in the volume. For best results, use striped volumes only in high-concurrency environments accessing very large files. For more information, see Section 5.3, "Creating Striped Volumes".
• Distributed Striped - Distributed striped volumes stripe data across two or more nodes in the cluster. Use distributed striped volumes where the requirement is to scale storage and high-concurrency access to very large files is critical. For more information, see Section 5.4, "Creating Distributed Striped Volumes".
• Distributed Replicated - Distributed replicated volumes distribute files across replicated bricks in the volume. Use distributed replicated volumes in environments where the requirement is to scale storage and high reliability is critical. Distributed replicated volumes also offer improved read performance in most environments. For more information, see Section 5.5, "Creating Distributed Replicated Volumes".
• Distributed Striped Replicated - Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, use distributed striped replicated volumes in highly concurrent environments where parallel access to very large files and performance are critical. In this release, configuration of this volume type is supported only for MapReduce workloads. For more information, see Section 5.6, "Creating Distributed Striped Replicated Volumes".
• Striped Replicated - Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, use striped replicated volumes in highly concurrent environments where there is parallel access to very large files and performance is critical. In this release, configuration of this volume type is supported only for MapReduce workloads. For more information, see Section 5.7, "Creating Striped Replicated Volumes".
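
For comparison with the replicated and striped volumes created below, a plain distributed volume on this cluster could be created as follows. This is only a sketch: the name Dist-volume and its brick directories are assumptions, not part of the original setup (omitting a stripe/replica count yields a distributed volume):

# gluster volume create Dist-volume transport tcp GFS-C1:/gfs_lvm/Dist-volume GFS-C2:/gfs_lvm/Dist-volume GFS-C3:/gfs_lvm/Dist-volume GFS-C4:/gfs_lvm/Dist-volume force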


# Create the volumes:

[root@GFS-C1 ~]# gluster volume create Rep-volume replica 4 transport tcp \
GFS-C1:/gfs_lvm/Rep-volume \
GFS-C2:/gfs_lvm/Rep-volume \
GFS-C3:/gfs_lvm/Rep-volume \
GFS-C4:/gfs_lvm/Rep-volume force
volume create: Rep-volume: success: please start the volume to access data
[root@GFS-C1 ~]# gluster volume create Str-volume stripe 4 transport tcp \
GFS-C1:/gfs_lvm/Str-volume \
GFS-C2:/gfs_lvm/Str-volume \
GFS-C3:/gfs_lvm/Str-volume \
GFS-C4:/gfs_lvm/Str-volume force
volume create: Str-volume: success: please start the volume to access data
[root@GFS-C1 ~]# gluster volume create Rep-Str-volume stripe 2 replica 2 transport tcp \
GFS-C1:/gfs_lvm/Rep-Str-volume \
GFS-C2:/gfs_lvm/Rep-Str-volume \
GFS-C3:/gfs_lvm/Rep-Str-volume \
GFS-C4:/gfs_lvm/Rep-Str-volume force
volume create: Rep-Str-volume: success: please start the volume to access data

# View volume information:

[root@GFS-C1 ~]# gluster volume info
 
Volume Name: Rep-Str-volume
Type: Striped-Replicate
Volume ID: 955f4548-e113-43d2-a2a3-5beb28bd1a30
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: GFS-C1:/gfs_lvm/Rep-Str-volume
Brick2: GFS-C2:/gfs_lvm/Rep-Str-volume
Brick3: GFS-C3:/gfs_lvm/Rep-Str-volume
Brick4: GFS-C4:/gfs_lvm/Rep-Str-volume
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: Rep-volume
Type: Replicate
Volume ID: f6c8f6c0-d436-450e-9d38-c9193a8d434d
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: GFS-C1:/gfs_lvm/Rep-volume
Brick2: GFS-C2:/gfs_lvm/Rep-volume
Brick3: GFS-C3:/gfs_lvm/Rep-volume
Brick4: GFS-C4:/gfs_lvm/Rep-volume
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
 
Volume Name: Str-volume
Type: Stripe
Volume ID: 654a9c53-9d88-483c-bab0-5eab61539a1d
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: GFS-C1:/gfs_lvm/Str-volume
Brick2: GFS-C2:/gfs_lvm/Str-volume
Brick3: GFS-C3:/gfs_lvm/Str-volume
Brick4: GFS-C4:/gfs_lvm/Str-volume
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

Notes and examples for creating volumes:

Creating Distributed Volumes: Disk/server failure in distributed volumes can result in a serious loss of data because directory contents are spread randomly across the bricks in the volume.

# gluster volume create NEW-VOLNAME [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

# gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

Creating Replicated Volumes: The number of bricks should be equal to the replica count for a replicated volume. To protect against server and disk failures, it is recommended that the bricks of the volume are from different servers.


# gluster volume create NEW-VOLNAME [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
Creation of test-volume has been successful
Please start the volume to access data.



Creating Striped Volumes: The number of bricks should be equal to the stripe count for a striped volume.


# gluster volume create NEW-VOLNAME [stripe COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

# gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
Creation of test-volume has been successful
Please start the volume to access data.

Creating Distributed Striped Volumes: The number of bricks should be a multiple of the stripe count for a distributed striped volume.


# gluster volume create NEW-VOLNAME [stripe COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

# gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
Creation of test-volume has been successful
Please start the volume to access data.

Creating Distributed Replicated Volumes: The number of bricks should be a multiple of the replica count for a distributed replicated volume. Also, the order in which bricks are specified has a great effect on data protection. Each replica_count consecutive bricks in the list you give will form a replica set, with all replica sets combined into a volume-wide distribute set. To make sure that replica-set members are not placed on the same node, list the first brick on every server, then the second brick on every server in the same order, and so on.


# gluster volume create NEW-VOLNAME [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Creation of test-volume has been successful
Please start the volume to access data.

For example, to create a six node distributed (replicated) volume with a two-way mirror:

# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6
Creation of test-volume has been successful
Please start the volume to access data.

Creating Distributed Striped Replicated Volumes: The number of bricks should be a multiple of the product of the stripe count and the replica count for a distributed striped replicated volume.

# gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

# gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
Creation of test-volume has been successful
Please start the volume to access data.

Creating Striped Replicated Volumes: The number of bricks should be a multiple of the product of the stripe count and the replica count for a striped replicated volume.


# gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...

# gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Creation of test-volume has been successful
Please start the volume to access data.

To create a striped replicated volume across six storage servers:

# gluster volume create test-volume stripe 3 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6
Creation of test-volume has been successful
Please start the volume to access data.

# Start the volumes:

# gluster volume start VOLNAME

[root@GFS-C1 ~]# gluster volume start Rep-volume
volume start: Rep-volume: success
[root@GFS-C1 ~]# gluster volume start Str-volume
volume start: Str-volume: success
[root@GFS-C1 ~]# gluster volume start Rep-Str-volume
volume start: Rep-Str-volume: success
[root@GFS-C1 ~]#
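
Once the volumes are started, the brick processes can be checked with the status command, which lists each brick with its port and online state:

# gluster volume status Rep-volume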

# Quota limits

# Enable/disable quota:
# gluster volume quota VOLNAME enable
# gluster volume quota VOLNAME disable
[root@GFS-C1 ~]# gluster volume quota Rep-volume enable
volume quota : success
[root@GFS-C1 ~]# gluster volume quota Str-volume enable   
volume quota : success
# Configure quota limits:
[root@GFS-C1 Rep-volume]# gluster volume quota Rep-volume limit-usage / 5GB
volume quota : success
[root@GFS-C1 Rep-volume]# gluster volume quota Str-volume limit-usage / 2GB   
volume quota : success
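Limits can also be placed on subdirectories rather than the volume root. The per-directory limits that show up in the list below (/data, /data1, /data2) were presumably set the same way, for example:

# gluster volume quota Rep-volume limit-usage /data 100MB
# gluster volume quota Rep-volume limit-usage /data1 2GB
# gluster volume quota Rep-volume limit-usage /data2 2GB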
# View the quota list:

[root@GFS-C4 Str-volume]# gluster volume quota Rep-volume list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                         5.0GB     80%(4.0GB)    1.2GB   3.8GB              No                   No
/data                                    100.0MB     80%(80.0MB)  110.0MB  0Bytes             Yes                  Yes
/data1                                     2.0GB     80%(1.6GB)    1.1GB 948.0MB              No                   No
/data2                                     2.0GB     80%(1.6GB) 512Bytes   2.0GB              No                   No

Set the quota cache timeout:

Refresh every 5 seconds:
[root@GFS-C4 Str-volume]#  gluster volume set Rep-volume features.quota-timeout 5
volume set: success
[root@GFS-C4 Str-volume]#  gluster volume set Str-volume features.quota-timeout 5   
volume set: success
[root@GFS-C4 Str-volume]#  gluster volume set Rep-Str-volume features.quota-timeout 5
volume set: success

Quota directory limit test:

Use dd to create a 100 MB file (note: despite the name 10Mfile1, the file is 100 MB):
[root@GFS-Master data]# dd if=/dev/zero of=10Mfile1 bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (105 MB) copied, 0.658241 s, 159 MB/s
[root@GFS-Master data]# ls
10Mfile1
[root@GFS-Master data]# ls -lh
total 100M
-rw-r--r-- 1 root root 100M Oct 19  2015 10Mfile1
[root@GFS-Master data]# cp 10Mfile1 100Mfile
In theory, 100 MB has already been written against the 100 MB limit, so no further writes should be allowed, yet this copy still succeeds. The likely reason is that quota usage is cached: enforcement only catches up when the cached accounting refreshes (see the quota-timeout setting above), so a write that lands before the refresh can slip through.
[root@GFS-Master data]# ls
100Mfile  10Mfile1
[root@GFS-Master data]# ls -lh
total 200M
-rw-r--r-- 1 root root 100M Oct 19 14:28 100Mfile
-rw-r--r-- 1 root root 100M Oct 19 14:27 10Mfile1
[root@GFS-Master data]# cp 10Mfile1 100Mfil2
cp: cannot create regular file `100Mfil2': Disk quota exceeded
On the second copy, the "Disk quota exceeded" error appears as expected.
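
To watch usage against a single directory's limit during a test like this, the list command also accepts a path (assuming the test directory corresponds to /data on Rep-volume):

# gluster volume quota Rep-volume list /data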

GlusterFS outage test:

# Stop the network on each of GFS-C1 through GFS-C3:

[root@GFS-C4 Str-volume]# ping GFS-C1
PING GFS-C1 (172.17.17.31) 56(84) bytes of data.
--- GFS-C1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1641ms

[root@GFS-C4 Str-volume]# ping GFS-C2
PING GFS-C2 (172.17.17.35) 56(84) bytes of data.
From GFS-C4 (172.17.17.34) icmp_seq=1 Destination Host Unreachable
--- GFS-C2 ping statistics ---
2 packets transmitted, 0 received, +1 errors, 100% packet loss, time 1478ms

[root@GFS-C4 Str-volume]# ping GFS-C3
PING GFS-C3 (172.17.17.33) 56(84) bytes of data.
From GFS-C4 (172.17.17.34) icmp_seq=1 Destination Host Unreachable
--- GFS-C3 ping statistics ---
3 packets transmitted, 0 received, +1 errors, 100% packet loss, time 2447ms
On the client:
[root@GFS-Client data1]# ls
1.txt
[root@GFS-Client data1]# echo "This is a test" > 2.txt
[root@GFS-Client data1]# echo "This is a test" > 2.txt3
[root@GFS-Client data1]# echo "This is a test" > 2.txt4
[root@GFS-Client data1]# echo "This is a test" > 2.txt5
[root@GFS-Client data1]# echo "This is a test" > 2.txt6
[root@GFS-Client data1]# ls
1.txt  2.txt  2.txt3  2.txt4  2.txt5  2.txt6

Check on the GlusterFS server side:

[root@GFS-C4 data1]# ls
1.txt  2.txt  2.txt3  2.txt4  2.txt5  2.txt6
[root@GFS-C4 data1]#

Then restart the network on the three stopped GlusterFS servers:
[root@GFS-C1 data1]# service network start
[root@GFS-C2 data1]# service network start
[root@GFS-C3 data1]# service network start
Check whether the data has synchronized: it synchronized successfully.
If there is a large amount of data, synchronization may take some time.
With four nodes, the replicated volume stays available as long as any one node is still working (a striped volume, by contrast, needs all of its bricks online).
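
One way to confirm that replication has recovered is the self-heal info command; once the pending-entry list is empty, the bricks are back in sync:

# gluster volume heal Rep-volume info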

 

Delete a GlusterFS volume:

# gluster volume stop Rep-volume
# gluster volume delete Rep-volume

Detach a GlusterFS peer:

# gluster peer detach SERVER

Access control:

# gluster volume set Rep-volume auth.allow 172.17.16.*,172.17.17.*
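
To verify the restriction took effect, check the volume's reconfigured options; volume reset removes it again if needed:

# gluster volume info Rep-volume

# gluster volume reset Rep-volume auth.allow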

Add GlusterFS nodes (note: a replicated volume expects new bricks in multiples of its replica count, or an explicit new replica count on add-brick):

# gluster peer probe GFS-C5
# gluster peer probe GFS-C6

# gluster volume add-brick Rep-volume GFS-C5:/gfs_volume/Rep-volume GFS-C6:/gfs_volume/Rep-volume

Migrate GlusterFS brick data (remove bricks):

# gluster volume remove-brick Rep-volume GFS-C1:/gfs_lvm/Rep-volume  GFS-C6:/gfs_lvm/Rep-volume start

# gluster volume remove-brick Rep-volume GFS-C1:/gfs_lvm/Rep-volume  GFS-C6:/gfs_lvm/Rep-volume status

# gluster volume remove-brick Rep-volume GFS-C1:/gfs_lvm/Rep-volume  GFS-C6:/gfs_lvm/Rep-volume commit

Rebalance data:

# gluster volume rebalance Rep-volume start

# gluster volume rebalance Rep-volume status

# gluster volume rebalance Rep-volume stop

Repair GlusterFS data (for example, when GFS-C1 has gone down):

# gluster volume replace-brick Rep-volume GFS-C1:/gfs_lvm/Rep-volume GFS-C6:/gfs_lvm/Rep-volume commit force

# gluster volume heal Rep-volume  full

# Client installation:

# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

# yum clean all

# yum install glusterfs glusterfs-fuse

# Mount the volumes:

# mount.glusterfs GFS-C1:/Rep-volume /Rep-data/

# mount.glusterfs GFS-C1:/Str-volume /Str-data/
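
Note that the mount points must exist before mounting (e.g. mkdir -p /Rep-data /Str-data). To make the mounts persistent across reboots, an fstab entry of the following form can be used; _netdev defers the mount until networking is up:

# echo "GFS-C1:/Rep-volume /Rep-data glusterfs defaults,_netdev 0 0" >> /etc/fstab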

Original article: https://www.cnblogs.com/gyming/p/4892591.html