Keepalived: a dual-master IPVS high-availability cluster, plus a dual-master nginx high-availability cluster

Prerequisites for an HA cluster:
(1) Time must be synchronized across all nodes (via ntp or chrony);
(2) Make sure iptables and SELinux do not get in the way;
(3) Nodes should be able to reach one another by hostname (not strictly required by keepalived); using the /etc/hosts file is recommended;
(4) The interface each node uses for cluster traffic must support MULTICAST;
    class D addresses: 224-239; enable with: ip link set dev eth0 multicast on
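For prerequisite (3), an /etc/hosts sketch covering the nodes in the address plan below (the hostnames lvs1/lvs2/web1/web2 are illustrative, not taken from the original setup):

```
192.168.30.7    lvs1
192.168.30.37   lvs2
192.168.30.17   web1
192.168.30.27   web2
```

The same file should be deployed on every node so name resolution does not depend on DNS availability.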

Server IP address plan (10.x simulates public addresses; 192.x and 172.x are private networks)

WEB1:192.168.30.17
WEB2:192.168.30.27
LVS1+Keepalived: 192.168.30.7 VIP:10.0.0.100
LVS2+Keepalived: 192.168.30.37 VIP:10.0.0.101
DNS:172.20.42.27
Route:192.168.30.208, 10.0.0.200,172.20.42.200
Client: Windows IP 172.20.42.222

LVS1+Keepalived configuration

1. Network configuration
    ifcfg-eth0
    DEVICE=eth0
    BOOTPROTO=none
    IPADDR=192.168.30.7
    PREFIX=24
    GATEWAY=192.168.30.208
2. Install keepalived
    yum install keepalived -y
3. Prepare the notification script (saved as /etc/keepalived/notify.sh, which the configuration below references)
    #!/bin/bash

contact='root@localhost'

notify() {
local mailsubject="$(hostname) to be $1, vip floating"
local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
notify master
;;
backup)
notify backup
;;
fault)
notify fault
;;
*)
echo "Usage: $(basename $0) {master|backup|fault}"
exit 1
;;
esac
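Before wiring the script into keepalived, its argument handling can be exercised with a dry run. The sketch below reproduces the dispatch logic with mail delivery stubbed out (illustrative only; the real script pipes the body to `mail`):

```shell
#!/bin/bash
# Dry run of the notify.sh dispatch logic; notify() is stubbed so no
# mail command or MTA is required.
notify() {
    echo "notice: $(hostname) to be $1, vip floating"
}

dispatch() {
    case $1 in
    master|backup|fault)
        notify "$1"
        ;;
    *)
        echo "Usage: notify.sh {master|backup|fault}"
        return 1
        ;;
    esac
}

dispatch master                              # prints the transition notice
dispatch bogus || echo "rejected unknown state"
```

Once the real script behaves as expected, make it executable with `chmod +x /etc/keepalived/notify.sh`.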
4. Configure keepalived (/etc/keepalived/keepalived.conf)
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalive@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
vrrp_mcast_group4 224.0.111.111
}

vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass OB0Q67DM
}
virtual_ipaddress {
10.0.0.100/8
}
track_interface {
eth0
}

    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"

}

vrrp_instance VI_2 {
state BACKUP
interface eth0
virtual_router_id 61
priority 98
advert_int 1
authentication {
auth_type PASS
auth_pass 2f118245
}
virtual_ipaddress {
10.0.0.101/8
}
track_interface {
eth0
}

    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"

}

virtual_server 10.0.0.101 80 {
delay_loop 2
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP

real_server 192.168.30.17 80 {
    weight 1
    HTTP_GET {
        url {
          path /
          status_code 200
        }
        connect_timeout 2
        nb_get_retry 3
        delay_before_retry 1
    }
}
real_server 192.168.30.27 80 {
    weight 1
    HTTP_GET {
        url {
          path /
          status_code 200
        }
        connect_timeout 2
        nb_get_retry 3
        delay_before_retry 1
    }
}

}

virtual_server 10.0.0.100 80 {
delay_loop 2
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP

real_server 192.168.30.17 80 {
    weight 1
    HTTP_GET {
        url {
          path /
          status_code 200
        }
        connect_timeout 2
        nb_get_retry 3
        delay_before_retry 1
    }
}
real_server 192.168.30.27 80 {
    weight 1
    HTTP_GET {
        url {
          path /
          status_code 200
        }
        connect_timeout 2
        nb_get_retry 3
        delay_before_retry 1
    }
}

}
5. Start keepalived: systemctl start keepalived
The VIP 10.0.0.100 now resides on this server.

LVS2+Keepalived configuration

The steps are identical to LVS1+Keepalived; only the master/backup roles and priorities of the two vrrp instances are swapped:
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 98

vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 61
priority 100

Start keepalived: systemctl start keepalived
The VIP 10.0.0.101 now resides on this server.

DNS:172.20.42.27

vim /var/named/blog.com.zone
$TTL 1D

@ IN SOA master.blog.com. admin.blog.com. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
NS master
master A 172.20.42.27
www A 10.0.0.100
www A 10.0.0.101
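The zone file above assumes a matching zone declaration in named's configuration (a sketch; file placement follows standard BIND conventions, e.g. /etc/named.rfc1912.zones):

```
zone "blog.com" IN {
        type master;
        file "blog.com.zone";
};
```

After editing, reload with `rndc reload` and verify resolution with `dig www.blog.com @172.20.42.27`; both A records (10.0.0.100 and 10.0.0.101) should be returned, giving simple DNS round-robin across the two VIPs.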

Route configuration

1. Enable IP forwarding
    echo 1 >/proc/sys/net/ipv4/ip_forward
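The echo above takes effect immediately but does not survive a reboot; to persist the setting, an entry can be added to sysctl's configuration (standard location on CentOS/RHEL):

```
# /etc/sysctl.conf
net.ipv4.ip_forward = 1
```

Apply it without rebooting via `sysctl -p`.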
2. Network configuration
 ifcfg-eth0
    DEVICE=eth0
    BOOTPROTO=none
    IPADDR=192.168.30.208
    PREFIX=24
ifcfg-eth0:1
    DEVICE=eth0:1
    BOOTPROTO=none
    IPADDR=10.0.0.200
    PREFIX=8
ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=none
    IPADDR=172.20.42.200
    PREFIX=16

WEB1 configuration

1. Prepare the parameter script and run it
setpara.sh
#!/bin/bash
# LVS-DR real server setup: bind both VIPs on loopback and tune the
# kernel ARP parameters so this host never answers ARP for the VIPs.

vip1="10.0.0.100"
vip2="10.0.0.101"
mask="255.255.255.255"
iface1="lo:0"
iface2="lo:1"

case $1 in
start)
# arp_ignore=1: only answer ARP requests for addresses configured on the
# receiving interface, so the VIPs on lo are never answered for
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
# arp_announce=1: when sending ARP requests, avoid source addresses
# (such as the VIPs) that are not in the target's subnet
echo 1 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig $iface1 $vip1 netmask $mask broadcast $vip1 up
ifconfig $iface2 $vip2 netmask $mask broadcast $vip2 up
;;
stop)
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig $iface1 down
ifconfig $iface2 down
;;
*)
echo "Usage: $(basename $0) {start|stop}"
exit 1
;;
esac
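After running `bash setpara.sh start`, the four kernel parameters should read 1 and lo should carry both VIPs. A read-only check (a sketch; on a host where the script has not run, the parameters will simply read 0):

```shell
#!/bin/bash
# Print the current arp_ignore/arp_announce settings (read-only)
for f in /proc/sys/net/ipv4/conf/all/arp_ignore \
         /proc/sys/net/ipv4/conf/all/arp_announce \
         /proc/sys/net/ipv4/conf/lo/arp_ignore \
         /proc/sys/net/ipv4/conf/lo/arp_announce; do
    printf '%s = %s\n' "$f" "$(cat "$f")"
done
# The VIPs should appear on lo (requires iproute2)
ip addr show dev lo 2>/dev/null | grep -E '10\.0\.0\.(100|101)' \
    || echo "VIPs not configured on this host"
```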
2. Install Apache: yum install httpd -y
3. Create the test page: echo web1 > /var/www/html/index.html

WEB2 configuration

Same steps as WEB1, except: echo web2 > /var/www/html/index.html

Testing

Install ipvsadm on both LVS1+Keepalived and LVS2+Keepalived: yum install ipvsadm -y
1. Run ipvsadm -Ln on each of these servers:
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.0.0.100:80 rr persistent 50
      -> 192.168.30.17:80             Route   1      0          0
      -> 192.168.30.27:80             Route   1      0          0
    TCP  10.0.0.101:80 rr persistent 50
      -> 192.168.30.17:80             Route   1      0          0
      -> 192.168.30.27:80             Route   1      0          0
2. No matter which node's keepalived is stopped, both VIPs fail over to the other server and service continues uninterrupted.
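With lb_algo rr and two equal-weight real servers, ipvs alternates new connections between them (note that persistence_timeout 50 pins any one client to the same real server for 50 seconds, so a single client will not observe the alternation within that window). A pure-shell illustration of plain round-robin dispatch (how the scheduler distributes requests conceptually, not how ipvs is implemented):

```shell
#!/bin/bash
# Round-robin over the two real servers (illustrative only)
servers=(192.168.30.17 192.168.30.27)
for req in 0 1 2 3; do
    # Successive requests alternate between the two back ends
    echo "request $req -> ${servers[req % 2]}"
done
```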

Building a dual-master nginx high-availability cluster

Keepalived cannot manage nginx directly (virtual_server blocks only drive ipvs). Instead, a vrrp_script {} block periodically runs a check command against the key process; based on its exit status, keepalived dynamically adjusts the instance priority, which in turn decides whether the VIP floats. notify.sh can also be extended to restart nginx on a state transition, e.g.:
backup)
    systemctl restart nginx
    notify backup
    ;;
This takes two steps:
1. Define the check script.
2. Reference it from the vrrp instance via track_script.
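The health check defined below relies on `killall -0 nginx`: signal 0 delivers nothing to the process, but the command's exit status reveals whether any process by that name exists. The same semantics can be demonstrated with the `kill -0` shell builtin against a PID (a sketch; PID 999999 is an arbitrary value chosen as very unlikely to be in use):

```shell
#!/bin/bash
# Signal 0 probes for process existence without sending a real signal
alive() { kill -0 "$1" 2>/dev/null && echo up || echo down; }

alive $$        # our own shell exists, so this prints "up"
alive 999999    # almost certainly no such PID, so this prints "down"
```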

Installing and configuring the nginx reverse proxy

yum install nginx -y
vim /etc/nginx/conf.d/www.conf
upstream websrvs {
        server 192.168.30.17:80;
        server 192.168.30.27:80;
}

server {
        listen 80 default_server;
        server_name 192.168.30.7;
        root /usr/share/nginx/html;
        location / {
            proxy_pass http://websrvs;

        }
}
On the second server:
upstream websrvs {
        server 192.168.30.17:80;
        server 192.168.30.27:80;
}

server {
        listen 80 default_server;
        server_name 192.168.30.37;
        root /usr/share/nginx/html;
        location / {
                proxy_pass http://websrvs;

        }
}

Configure keepalived

vrrp_script ngxhealth {
    script  "killall -0 nginx && exit 0 || exit 1"
    interval 1
    weight -5
}
Reference the script in both vrrp_instance VI_1 and vrrp_instance VI_2:
    track_script {
        ngxhealth
    }
When the check fails, the instance's priority drops by 5: the MASTER's 100 falls to 95, below the BACKUP's 98, so the VIP fails over.

Testing

Stop keepalived on one server (or stop nginx, whose failure the track_script detects): the VIP moves to the other server and web access continues to work.
Original article: https://www.cnblogs.com/liangjindong/p/9301013.html