Installing and Configuring DRBD on CentOS 6

Overview: Distributed Replicated Block Device (DRBD) is a software-based, shared-nothing storage replication solution that mirrors the contents of block devices between servers. The data mirroring is real-time and transparent, and can be synchronous (a write returns only after all servers have it) or asynchronous (a write returns as soon as the local server has it). DRBD's core functionality is implemented in the Linux kernel, as close as possible to the system's I/O stack, but it cannot magically add higher-layer features such as detecting corruption of an EXT3 filesystem. DRBD sits below the filesystem, closer to the operating system kernel and I/O stack than the filesystem itself.
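The synchronous and asynchronous modes mentioned above map to DRBD's per-resource replication protocols. Below is a minimal sketch of the relevant setting; r0 is the resource name used later in this article, and protocol C (fully synchronous) is what this guide configures:

resource r0 {
    protocol C;    # C: a write returns only after both nodes have written it to disk (synchronous)
                   # B: a write returns once the peer has received the data (memory synchronous)
                   # A: a write returns once the local disk write has finished (asynchronous)
    ...
}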


I. Environment Overview
OS version: CentOS 6.5
DRBD version: drbd-8.4.3
node1: 192.168.7.88 (drbd1)
node2: 192.168.7.89 (drbd2)
Steps marked (node1) are performed only on the primary node.
Steps marked (node2) are performed only on the secondary node.
Steps marked (node1,node2) are performed on both nodes.
II. Preparing the Environment: (node1,node2)

1. Disable iptables and SELinux to avoid errors during installation

service iptables stop                                                      # stop iptables
setenforce 0                                                               # disable SELinux temporarily
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux      # disable SELinux permanently
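To confirm both changes took effect, a quick optional check (not part of the original procedure) can be run on each node:

service iptables status      # should report that the firewall is stopped / not running
getenforce                   # should print Permissive now, and Disabled after a reboot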

2. Configure the hosts file

vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.7.88    drbd1
192.168.7.89    drbd2
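As an optional sanity check, each node should now be able to reach the other by the name just added:

ping -c 2 drbd2              # run on node1; on node2, ping drbd1 instead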

3. Add a 2 GB disk (sdb) to each of the two virtual machines for DRBD use, create a 1 GB partition sdb1 on each, and create a /data directory on the local filesystem. Do not mount anything yet.

fdisk /dev/sdb
----------------
n - p - 1 - Enter - "+1G" - w
----------------
mkdir /data
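An optional check that the kernel sees the new partition before moving on:

partprobe /dev/sdb           # re-read the partition table if the new partition is not yet visible
fdisk -l /dev/sdb            # should list /dev/sdb1 at about 1 GB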

4. Synchronize the time:

ntpdate -u asia.pool.ntp.org 
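If the clocks tend to drift, the sync can be repeated periodically via cron. This is an optional suggestion rather than part of the original steps, and it assumes ntpdate lives at /usr/sbin/ntpdate as on a stock CentOS 6 install:

echo "*/30 * * * * /usr/sbin/ntpdate -u asia.pool.ntp.org >/dev/null 2>&1" >> /var/spool/cron/root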
III. Installing and Deploying DRBD

1. Install dependency packages: (node1,node2)

yum install gcc gcc-c++ make glibc flex kernel-devel kernel-headers 

2. Install DRBD: (node1,node2)

wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
tar zxvf drbd-8.4.3.tar.gz
cd drbd-8.4.3
./configure --prefix=/usr/local/drbd --with-km
make && make install
mkdir -p /usr/local/drbd/var/run/drbd
cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d
chkconfig --add drbd
chkconfig drbd on

Load the DRBD module:
modprobe drbd

Check whether the DRBD module is loaded into the kernel:
lsmod | grep drbd
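An optional check that the module and the build are the expected version:

modinfo drbd | grep -w version       # kernel module version, should report 8.4.3
cat /proc/drbd                       # exists once the module is loaded and also shows the version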
IV. Configuring DRBD

1. Edit the configuration: (node1,node2)

vim /usr/local/drbd/etc/drbd.conf

Clear the existing contents and add the following:

resource r0 {
    protocol C;
    startup {
        wfc-timeout 0;
        degr-wfc-timeout 120;
    }
    disk {
        on-io-error detach;
    }
    net {
        timeout 60;
        connect-int 10;
        ping-int 10;
        max-buffers 2048;
        max-epoch-size 2048;
    }
    syncer {
        rate 30M;
    }
    on drbd1.example.com {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.7.88:7788;
        meta-disk internal;
    }
    on drbd2.example.com {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.7.89:7788;
        meta-disk internal;
    }
}
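One detail worth checking before going further: drbdadm matches the "on <hostname>" sections against each node's own host name, so `uname -n` must print drbd1.example.com on node1 and drbd2.example.com on node2. A hedged example of fixing node1 if it does not match (CentOS 6 keeps the persistent name in /etc/sysconfig/network):

uname -n                                                                      # must print drbd1.example.com on node1
hostname drbd1.example.com                                                    # change it for the running system
sed -i 's/^HOSTNAME=.*/HOSTNAME=drbd1.example.com/' /etc/sysconfig/network    # make the change persistent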

2. Create the DRBD device and activate the r0 resource: (node1,node2)

mknod /dev/drbd0 b 147 0
drbdadm create-md r0

Wait a moment; "success" indicates that the DRBD metadata block was created:
----------------
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.

--== Creating metadata ==--
As with nodes, we count the total number of devices mirrored by DRBD
at http://usage.drbd.org. The counter works anonymously. It creates
a random number to identify the device and sends that random number,
along with the kernel and DRBD version, to usage.drbd.org.

http://usage.drbd.org/cgi-bin/insert_usage.pl?nu=716310175600466686&ru=15741444353112217792&rs=1085704704

* If you wish to opt out entirely, simply enter 'no'.
* To continue, just press [RETURN]        # press Enter when the [RETURN] prompt appears

success
----------------

Run the command again to activate r0:

# drbdadm create-md r0
----------------
[need to type 'yes' to confirm] yes

Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
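An optional sanity check: `drbdadm dump r0` parses the configuration and prints the r0 resource back as DRBD understands it, which surfaces syntax or hostname mistakes early:

drbdadm dump r0              # prints the parsed r0 resource; errors here point to configuration problems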

3. Start the DRBD service: (node1,node2)

service drbd start 

Note: the service must be started on both the primary and the secondary node before it takes effect.
4. Check the status: (node1,node2)

cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.example.com, 2013-05-27 20:45:19
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1060184

or:

service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.example.com, 2013-05-27 20:45:19
m:res  cs         ro                   ds                         p  mounted  fstype
0:r0   Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

Note: ro:Secondary/Secondary means both hosts are currently in the Secondary role; ds is the disk state, shown as Inconsistent because DRBD cannot yet determine which side is the primary and whose data should be used as the reference copy.

5. Configure drbd1 as the primary node: (node1)

drbdsetup /dev/drbd0 primary --force
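As an aside, on DRBD 8.4 the same promotion can also be done with drbdadm, and the initial full synchronization it triggers can be followed until ds reaches UpToDate/UpToDate:

drbdadm primary --force r0       # equivalent promotion via drbdadm
watch -n1 cat /proc/drbd         # follow the initial sync until ds shows UpToDate/UpToDate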

Check the DRBD status on the primary and the secondary:

(node1)
service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.example.com, 2013-05-27 20:45:19
m:res  cs         ro                 ds                 p  mounted  fstype
0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C

(node2)
service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.example.com, 2013-05-27 20:49:06
m:res  cs         ro                 ds                 p  mounted  fstype
0:r0   Connected  Secondary/Primary  UpToDate/UpToDate  C

Note: ro shows Primary/Secondary on the primary and Secondary/Primary on the secondary, and ds shows UpToDate/UpToDate, which means the primary/secondary setup succeeded.
6. Mount the DRBD device: (node1)

The status above shows empty mounted and fstype fields, so in this step we create a filesystem on the DRBD device and mount it:

mkfs.ext4 /dev/drbd0
mount /dev/drbd0 /data

Note: no operations are allowed on the DRBD device on the Secondary node, not even read-only access; all reads and writes can only be performed on the Primary node. Only when the Primary node goes down can the Secondary node be promoted to Primary and take over.
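An optional check that the mount succeeded:

df -hT /data                 # should show /dev/drbd0 mounted on /data with an ext4 filesystem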
V. Simulating a Failure

(node1)

cd /data
touch 1 2 3 4 5
cd ..
umount /data
drbdsetup /dev/drbd0 secondary

Note: in a real production environment, if drbd1 actually goes down, the ro field in drbd2's status will show Secondary/Unknown, and all that is needed is to promote drbd2 to Primary.

(node2)

drbdsetup /dev/drbd0 primary
mount /dev/drbd0 /data
cd /data
touch 6 7 8 9 10
ls
--------------
1  10  2  3  4  5  6  7  8  9  lost+found

Check the DRBD status on (node1) and (node2):

(node2)
service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.example.com, 2013-05-27 20:49:06
m:res  cs         ro                 ds                 p  mounted  fstype
0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C  /data    ext4

(node1)
service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.example.com, 2013-05-27 20:45:19
m:res  cs         ro                 ds                 p  mounted  fstype
0:r0   Connected  Secondary/Primary  UpToDate/UpToDate  C

Reposted from: http://www.linuxprobe.com/centos6-drdb-setup-instal.html


Original article: https://www.cnblogs.com/probemark/p/5880070.html