Keepalived high availability for the LVS-DR model

First, let's set up an LVS-DR example:

(1) Prepare two nodes to serve as Real Servers

RS1:192.168.2.50,node4.ckh.com,node4
RS2:192.168.2.80,node3.ckh.com,node3
vip:192.168.2.110
director1:192.168.2.20,node1.ckh.com,node1
director2:192.168.2.40,node2.ckh.com,node2

Install the httpd service on both, and add an index.html test page to each RS (run the matching echo on RS1 and RS2 respectively):

# echo "rs1 web server" > /var/www/html/index.html
# echo "rs2 web server" > /var/www/html/index.html

Start the httpd service.
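
If httpd is not installed yet, a minimal sketch (assuming a CentOS-style system like the rest of this post; on systemd releases use systemctl instead of service):

# yum install httpd -y
# service httpd start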

Configure the VIP on each RS. We write a script, dr.sh, to handle the configuration:

#!/bin/bash
#
vip=192.168.2.110
host="-host"

case $1 in
start)
        # arp_ignore=1: answer ARP only if the target IP is configured on the
        # interface the request arrived on, so the RS never answers ARP for the VIP on lo
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        # arp_announce=2: always use the best local address as the ARP source,
        # so the VIP is never announced
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        # Bind the VIP to lo:0 with a /32 mask and add a host route for it
        ifconfig lo:0 $vip netmask 255.255.255.255 broadcast $vip up
        route add $host $vip dev lo:0
        ;;
stop)
        # Restore the default ARP behavior and remove the VIP
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig lo:0 down
        ;;
esac

A dual-VIP variant of the same script, used later for the double-master setup, binds both VIPs at once (the original set arp_ignore/arp_announce on "all" twice; the second set belongs on "lo"):

#!/bin/bash
#
vip="192.168.2.110"
vip2="192.168.2.244"
eth="lo"
host="-host"
case $1 in
start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        # Bind each VIP to its own lo alias and add a host route for each
        ifconfig $eth:1 $vip/32 broadcast $vip up
        route add $host $vip dev $eth:1
        ifconfig $eth:2 $vip2/32 broadcast $vip2 up
        route add $host $vip2 dev $eth:2
        ;;
stop)
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig $eth:1 down
        ifconfig $eth:2 down
        ;;
esac
Run the script: ./dr.sh start
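
To verify the RS-side setup took effect, a quick check (the expected values follow from the script above):

# ifconfig lo:0                                  # the VIP should be bound with a 255.255.255.255 mask
# cat /proc/sys/net/ipv4/conf/all/arp_ignore     # should print 1
# cat /proc/sys/net/ipv4/conf/all/arp_announce   # should print 2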

(2) Configure the VIP on the first director, then set up the ipvsadm rules

director1:192.168.2.20
vip:192.168.2.110

First, install ipvsadm:

# yum install ipvsadm -y

Configure the VIP:

# ifconfig eth0:0 192.168.2.110/32 broadcast 192.168.2.110 up
# route add -host 192.168.2.110 dev eth0:0

Add the ipvs rules on director1:

# ipvsadm -A -t 192.168.2.110:80 -s rr
# ipvsadm -a -t 192.168.2.110:80 -r 192.168.2.50 -g -w 1
# ipvsadm -a -t 192.168.2.110:80 -r 192.168.2.80 -g -w 2

Accessing the service works fine.
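
For example, you can confirm this from the director and from a client machine:

# ipvsadm -L -n               # should list 192.168.2.110:80 with both RSs in Route (DR) mode
# curl http://192.168.2.110   # run from a client; with rr scheduling the response alternates
                              # between "rs1 web server" and "rs2 web server"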

Now let's implement the same setup with keepalived.

1. First, clear the ipvs rules

# ipvsadm -C

2. Delete the configured VIP

# ifconfig eth0:0 down

3. Prepare the second director: configure the ipvsadm rules and VIP, and test that it works normally
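
These are the same commands as on director1, for example:

# yum install ipvsadm -y
# ifconfig eth0:0 192.168.2.110/32 broadcast 192.168.2.110 up
# route add -host 192.168.2.110 dev eth0:0
# ipvsadm -A -t 192.168.2.110:80 -s rr
# ipvsadm -a -t 192.168.2.110:80 -r 192.168.2.50 -g -w 1
# ipvsadm -a -t 192.168.2.110:80 -r 192.168.2.80 -g -w 2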

Then delete the VIP and clear the ipvs rules here as well.

4. Now install the httpd service on both directors; it will provide the sorry server

# echo "Sorry,under maintanance" > /var/www/html/index.html

Start the httpd service.

5. Install keepalived on both directors and configure it
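
keepalived is a single package on CentOS, and its configuration file lives at /etc/keepalived/keepalived.conf:

# yum install keepalived -y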

The full configuration on node1:

! Configuration File for keepalived

global_defs {
   notification_email {
        root@localhost  # recipient address
   }

   notification_email_from kaadmin@localhost  # sender address
   smtp_server 127.0.0.1  # send mail through the local machine
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_mcast_group4 224.18.0.100 # custom multicast address; the default is 224.0.0.18
}

vrrp_script chk_maintanance {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"  # a shell command or a script path
    interval 1  # check interval in seconds; default is 1
    timeout 1   # check timeout
    fall 1      # consecutive failures before the check is marked as failed
    rise 1      # consecutive successes before a failed check is marked as OK again
    #user USERNAME [GROUPNAME] # user and group to run the check as
    init_fail    # start in the failed state; switch to OK once a check succeeds
    weight -2    # priority adjustment: subtracted from the node's priority while the check is failing
}
vrrp_instance VI_1 {  # VRRP instance
    state MASTER
    interface eth0
    virtual_router_id 51  # virtual router id
    priority 100  # the MASTER gets the higher priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111  # a simple string; better generated than invented, e.g.: openssl rand -hex 4
    }
    track_script {  # run the check script defined above
        chk_maintanance
    }
    virtual_ipaddress {
        192.168.2.110/32 dev eth0 label eth0:1
    }
    notify_master "/etc/keepalived/notify.sh master"  # run notify.sh to send a notification on state change
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.2.110 80 { # virtual server: VIP address, service port 80
    delay_loop 6
    lb_algo rr
    lb_kind DR    # LVS type
    nat_mask 255.255.255.255  # the VIP normally carries a /32 mask
    #persistence_timeout 50   # persistence timeout
    protocol TCP  # protocol
    sorry_server 127.0.0.1 80 # define the sorry server

    real_server 192.168.2.50 80 { # Real Server 1
        weight 1  # weight
        HTTP_GET {  # health check by HTTP GET against the Real Server's root path
            url {
              path /
              status_code 200 # a 200 status code in the response means the check passes
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.2.80 80 { # Real Server 2
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

The full configuration on node2:

! Configuration File for keepalived

global_defs {
   notification_email {
        root@localhost
   }

   notification_email_from kaadmin@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_mcast_group4 224.18.0.100 # same multicast address as the MASTER node
}

vrrp_script chk_maintanance {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"  # a shell command or a script path
    interval 1  # check interval in seconds; default is 1
    timeout 1   # check timeout
    fall 1      # consecutive failures before the check is marked as failed
    rise 1      # consecutive successes before a failed check is marked as OK again
    #user USERNAME [GROUPNAME] # user and group to run the check as
    init_fail    # start in the failed state; switch to OK once a check succeeds
    weight -2    # priority adjustment: subtracted from the node's priority while the check is failing
}
vrrp_instance VI_1 {
    state BACKUP    # this is the backup node
    interface eth0
    virtual_router_id 51  # same virtual router id as the MASTER
    priority 99 # the backup node's priority is set lower than the MASTER's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111  # same string as on the MASTER
    }
    track_script {
        chk_maintanance
    }
    virtual_ipaddress {
        192.168.2.110/32 dev eth0 label eth0:1
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 192.168.2.110 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.255
    #persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.2.50 80 {
        weight 1
        HTTP_GET {  #---------begin
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }  #----------end
    }

    real_server 192.168.2.80 80 {
        weight 1
        HTTP_GET {  #-------begin
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }  #--------end
    }
}

Don't forget to also write the notify.sh script:

#!/bin/bash
# Author:
# Description: An example of notify script
#

vip=192.168.2.110
contact='root@localhost'

notify() {
    mailsubject="$(hostname) to be $1: $vip floating"
    mailbody="$(date +'%F %H:%M:%S'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" "$contact"
}
 
case $1 in
master)
    notify master
    exit 0
    ;;
backup)
    notify backup
    exit 0
    ;;
fault)
    notify fault
    exit 0
    ;;
*)
    echo "Usage:$(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac
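
Make notify.sh executable on both directors, then start keepalived. A minimal sketch of the startup and a first check (assuming a SysV-style service; use systemctl on systemd systems):

# chmod +x /etc/keepalived/notify.sh
# service keepalived start
# ipvsadm -L -n        # keepalived builds the ipvs rules from the virtual_server definitions
# ip addr show eth0    # on the MASTER, the VIP 192.168.2.110 shows up as eth0:1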

Now stop the httpd service on one of the Real Servers: the corresponding entry disappears from the ipvs rules on the director. When httpd is started again on that Real Server, the entry is added back.

If we take one director offline (by creating the down file: # touch /etc/keepalived/down), the virtual IP address floats over to the other director.
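
A quick way to exercise this, using the down file path from the chk_maintanance script above:

# touch /etc/keepalived/down   # on the current MASTER: its priority drops by 2 and it yields
# ip addr show eth0            # on the other director: the VIP should now be listed
# rm -f /etc/keepalived/down   # removing the file lets the original MASTER reclaim the VIP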

If both Real Servers go down, requests will be answered with the sorry server's content.
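
For example, with httpd stopped on both RSs, a client request returns roughly:

# curl http://192.168.2.110
Sorry, under maintenance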

Besides HTTP_GET, the available health check methods include SSL_GET, TCP_CHECK, SMTP_CHECK, and MISC_CHECK.

To use TCP_CHECK for health checking, we only need to replace the content between the begin and end markers of the HTTP_GET block with a TCP_CHECK block. If no IP and port are specified, the Real Server's IP and port are used by default.

TCP_CHECK {  # TCP_CHECK is simple to configure, but its checks are less precise than HTTP_GET's
  connect_timeout 3  # only the connect timeout needs to be set
}
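
For example, the first real_server block would then look like this (a sketch; the check connects to 192.168.2.50:80 by default):

real_server 192.168.2.50 80 {
    weight 1
    TCP_CHECK {
        connect_timeout 3
    }
}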

The above implements MASTER/BACKUP (active/standby) high availability.

Next we extend this: provide two VIPs and resolve them via DNS, which to some extent also achieves load balancing.
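
A minimal sketch of the DNS side, assuming a BIND-style zone for ckh.com with a www record pointing at both VIPs (multiple A records for one name are rotated round-robin by default):

www    IN    A    192.168.2.110
www    IN    A    192.168.2.244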

Add a second VRRP instance on node1 (its virtual server points at the same two Real Servers):

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1122
    }
    virtual_ipaddress {
        192.168.2.244/32 dev eth0 label eth0:2
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.2.244 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.2.50 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.2.80 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Also add a second instance on node2 (node2 is the MASTER for this one):

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1122
    }
    virtual_ipaddress {
        192.168.2.244/32 dev eth0 label eth0:2
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.2.244 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.2.50 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.2.80 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
} 

Also, and more importantly, don't forget to configure the second VIP on our RSs:

ifconfig lo:2 192.168.2.244/32 broadcast 192.168.2.244 up
route add -host 192.168.2.244 dev lo:2
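
To verify, for example:

# ifconfig lo:2               # 192.168.2.244 should be bound with a /32 mask
# curl http://192.168.2.244   # from a client machine; both VIPs should now serve the site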

With that, the LVS-DR double-master (active/active) high availability setup is complete.

Original article: https://www.cnblogs.com/ckh2014/p/15777023.html