Fence Devices

An RHCS cluster must have a fence device. When an unknown failure occurs, the fence device is responsible for cutting the node that holds the floating resources off from the cluster.

Red Hat fence devices come in two types.

Internal fence devices:

IBM RSA II cards, HP iLO cards, Dell DRAC cards, and IPMI devices.

External fence devices:

UPS, SAN switch, network switch, and the like.

With an external fence device, a pull-the-power-plug test is possible: the standby node still receives the signal returned by the fence device, so it can take over the service normally.

With an internal fence device, a pull-the-power-plug test is not possible: once the primary node loses power, the standby node receives no return signal from the on-board chip acting as the fence device, so it cannot take over the service. clustat will show the resource owner as unknown, and the log will keep reporting "fence failed" messages.
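As a hedged illustration of how an internal fence device is wired into the cluster, an IPMI controller is typically declared in cluster.conf roughly like this (the node name, IP address, and credentials below are made up for the example):

```xml
<!-- Illustrative cluster.conf fragment: an IPMI-based internal fence device.
     All names, IPs, and credentials are hypothetical. -->
<clusternodes>
    <clusternode name="node1" nodeid="1">
        <fence>
            <method name="1">
                <device name="ipmi-node1"/>
            </method>
        </fence>
    </clusternode>
</clusternodes>
<fencedevices>
    <fencedevice agent="fence_ipmilan" name="ipmi-node1"
                 ipaddr="192.168.0.101" login="admin" passwd="secret" lanplus="1"/>
</fencedevices>
```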

 

Soft Fence Configuration

When building RHCS on CentOS or Red Hat with KVM virtual machines, the fence function can be implemented with a soft fence.

The configuration is as follows:

1. Install the fence packages on the physical host
yum list | grep --color fence
yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast

2. Create the key file and scp it to the two cluster nodes, web1 and web2.

On the host machine:

[root@localhost ~]# mkdir /etc/cluster
[root@localhost ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4K count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.000848692 s, 4.8 MB/s
[root@localhost ~]# ll /etc/cluster/
total 4
-rw-r--r-- 1 root root 4096 Mar 24 16:58 fence_xvm.key
[root@localhost ~]#
[root@localhost ~]# scp /etc/cluster/fence_xvm.key root@10.37.129.5:/etc/cluster/
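The command above copies the key to one node only, but the step calls for both web1 and web2 to hold the same key. A sketch of distributing it to every node (the second IP here is an assumption, since only 10.37.129.5 appears above; the target directory must exist on each guest first):

```
# Second IP is hypothetical -- substitute your actual node addresses
for node in 10.37.129.5 10.37.129.6; do
    ssh root@$node mkdir -p /etc/cluster
    scp /etc/cluster/fence_xvm.key root@$node:/etc/cluster/
done
```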

3. Configure fencing on the host: fence_virtd -c

The entries marked ##### were typed in manually (shown in red in the original); for all other prompts, just press Enter.

[root@localhost ~]# fence_virtd -c######
Module search path [/usr/lib64/fence-virt]:

Available backends:
libvirt 0.1

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:
No listener module named multicast found!
Use this value anyway [y/N]? y#####

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on the default network
interface. In environments where the virtual machines are
using the host machine as a gateway, this *must* be set
(typically to virbr0).
Set to 'none' for no interface.

Interface [none]: private#####

The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [checkpoint]: libvirt#####

The libvirt backend module is designed for single desktops or
servers. Do not use in environments where virtual machines
may be migrated between hosts.

Libvirt URI [qemu:///system]:

Configuration complete.

=== Begin Configuration ===
backends {
libvirt {
uri = "qemu:///system";
}

}

listeners {
multicast {
interface = "private";
port = "1229";
family = "ipv4";
address = "225.0.0.12";
key_file = "/etc/cluster/fence_xvm.key";
}

}

fence_virtd {
module_path = "/usr/lib64/fence-virt";
backend = "libvirt";
listener = "multicast";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@localhost ~]#

4. Start the fence service and enable it at boot

/etc/init.d/fence_virtd start
chkconfig fence_virtd on
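Once fence_virtd is running on the host, the setup can be checked from inside a guest with the fence_xvm client (a sketch; "web1" stands for whatever libvirt domain name your node actually uses):

```
# On a cluster node (guest): list the domains the host's fence_virtd can see
fence_xvm -o list
# Test-fence the other node by its libvirt domain name (assumed to be web1 here)
fence_xvm -o reboot -H web1
```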

5. Add the soft fence device in the luci web interface.
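After the device is added in luci, the resulting cluster.conf should contain a fence_xvm entry roughly like the following (the device and node names are illustrative; the `domain` attribute must match the VM's libvirt domain name):

```xml
<!-- Illustrative fragment: a fence_xvm soft fence in cluster.conf.
     Names are hypothetical. -->
<clusternode name="web1" nodeid="1">
    <fence>
        <method name="1">
            <device name="virtfence" domain="web1"/>
        </method>
    </fence>
</clusternode>
<!-- ...and in the fencedevices section: -->
<fencedevice agent="fence_xvm" name="virtfence"/>
```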

Original article: https://www.cnblogs.com/kamil/p/5162761.html