Flume Data Flow Monitoring -- Ganglia

1. Installing and Deploying Ganglia

1.1 Install the EPEL repository on all three machines

  sudo yum install -y epel-release

1.2 Install the web, meta, and monitor packages on hadoop102

  sudo yum -y install ganglia-gmetad ganglia-web ganglia-gmond

1.3 Install the monitor package on hadoop103 and hadoop104

  sudo yum -y install ganglia-gmond
      Ganglia consists of three components: gmond, gmetad, and gweb.
      gmond (Ganglia Monitoring Daemon) is a lightweight service installed on every node host whose metrics you want to collect. With gmond you can easily gather many system metrics, such as CPU, memory, disk, network, and active-process data.
      gmetad (Ganglia Meta Daemon) is the service that aggregates all of this information and stores it on disk in RRD format.
      gweb (Ganglia Web) is Ganglia's visualization tool: a PHP front end that displays the data stored by gmetad in a browser, presenting the various metrics collected from the running cluster as charts.
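gmond serves the metrics it has collected as an XML dump on TCP port 8649, which gmetad polls periodically. On a live node you could inspect it with, say, nc hadoop102 8649; the sketch below instead works against a small hand-written sample of that XML (embedded so the snippet is self-contained -- the element structure is illustrative, not a capture from a real cluster) and greps out the reported hosts and metric names.

```shell
# Sample of the XML a live gmond serves on TCP 8649
# (on a real cluster: nc hadoop102 8649 > gmond-dump.sample.xml).
# Hand-written illustrative sample, not real cluster output.
cat > gmond-dump.sample.xml <<'EOF'
<GANGLIA_XML VERSION="3.7.2" SOURCE="gmond">
<CLUSTER NAME="my cluster" OWNER="unspecified">
<HOST NAME="hadoop102" IP="192.168.44.102">
<METRIC NAME="cpu_idle" VAL="97.2" UNITS="%"/>
<METRIC NAME="mem_free" VAL="812340" UNITS="KB"/>
</HOST>
</CLUSTER>
</GANGLIA_XML>
EOF

# List the hosts and metrics reported in the dump
grep -o 'HOST NAME="[^"]*"' gmond-dump.sample.xml
grep -o 'METRIC NAME="[^"]*"' gmond-dump.sample.xml
```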

1.4 Edit the configuration file /etc/httpd/conf.d/ganglia.conf (front-end configuration: who may access Ganglia)

   sudo vim /etc/httpd/conf.d/ganglia.conf   # configures who can access the front-end page
   Here, just allow the address of the host machine's VMnet8 adapter (192.168.44.1 below).
#
# Ganglia monitoring system php web frontend
#

Alias /ganglia /usr/share/ganglia

<Location /ganglia>
  #Require local
  Require ip 192.168.44.1
  Require all granted
  # Require ip 10.1.2.3
  # Require host example.org
</Location>

1.5 Edit the configuration file /etc/ganglia/gmetad.conf (data-source configuration)

   sudo vim /etc/ganglia/gmetad.conf
   Change the data source at the end of the line: our collector node is hadoop102, so set it to hadoop102.
   data_source "my cluster" hadoop102
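This one-line change can also be scripted. A minimal sketch, run here against a local copy containing a stock-style default line (on a real node the file is /etc/ganglia/gmetad.conf, and its shipped default varies by version):

```shell
# Local stand-in for /etc/ganglia/gmetad.conf
cat > gmetad.conf.sample <<'EOF'
data_source "my cluster" localhost
EOF

# Point the data source at the collector node, hadoop102
sed -i 's/^data_source .*/data_source "my cluster" hadoop102/' gmetad.conf.sample
grep '^data_source' gmetad.conf.sample
```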

1.6 Edit the configuration file /etc/ganglia/gmond.conf

   Changes to make:
   1. cluster name: must match the data_source name above
   2. udp_send_channel: where the collected (received) metric data is sent -- send everything to hadoop102, and put a # in front of the line mcast_join = 239.2.11.71
   3. udp_recv_channel: receiving data -- each node listens on its own local port, with bind = 0.0.0.0
   4. Sync the modified file to hadoop103 and hadoop104
cluster {
  name = "my cluster"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
udp_send_channel {
  #bind_hostname = yes # Highly recommended, soon to be default.
                       # This option tells gmond to use a source address
                       # that resolves to the machine's hostname.  Without
                       # this, the metrics may appear to come from any
                       # interface and the DNS names associated with
                       # those IPs will be used to create the RRDs.
  # mcast_join = 239.2.11.71
  host = hadoop102
  port = 8649
  ttl = 1
}
udp_recv_channel {
  # mcast_join = 239.2.11.71
  port = 8649
  bind = 0.0.0.0
  retry_bind = true
  # Size of the UDP buffer. If you are handling lots of metrics you really
  # should bump it up to e.g. 10MB or even higher.
  # buffer = 10485760
}
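The edits above (cluster name, unicast send to hadoop102, receive on all interfaces) can be scripted with sed and then synced out. A sketch against a stripped-down stand-in for the stock file -- the real /etc/ganglia/gmond.conf has many more sections, and exact defaults vary by version:

```shell
# Stripped-down stand-in for /etc/ganglia/gmond.conf with stock-style defaults
cat > gmond.conf.sample <<'EOF'
cluster {
  name = "unspecified"
}
udp_send_channel {
  mcast_join = 239.2.11.71
  port = 8649
  ttl = 1
}
udp_recv_channel {
  mcast_join = 239.2.11.71
  port = 8649
  retry_bind = true
}
EOF

# 1. cluster name must match the data_source name in gmetad.conf
sed -i 's/name = "unspecified"/name = "my cluster"/' gmond.conf.sample
# 2+3. switch from multicast to unicast: comment out mcast_join everywhere,
#      send to hadoop102, and listen on all local interfaces
sed -i 's/^\(  \)mcast_join/\1# mcast_join/' gmond.conf.sample
sed -i 's/^udp_send_channel {/&\n  host = hadoop102/' gmond.conf.sample
sed -i 's/^udp_recv_channel {/&\n  bind = 0.0.0.0/' gmond.conf.sample

# 4. sync the real file to the other monitor nodes (not run here):
#    for h in hadoop103 hadoop104; do
#      sudo rsync /etc/ganglia/gmond.conf $h:/etc/ganglia/gmond.conf
#    done
```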

1.7 Edit the configuration file /etc/selinux/config

   sudo vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

  Disabling SELinux here only takes effect after a reboot. To turn it off immediately without rebooting, run: sudo setenforce 0
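The boot-time change can likewise be scripted. A sketch against a local stand-in for /etc/selinux/config:

```shell
# Local stand-in for /etc/selinux/config
cat > selinux-config.sample <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Disable SELinux at boot (takes effect after a reboot);
# for the running system, additionally run: sudo setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' selinux-config.sample
grep '^SELINUX=' selinux-config.sample
```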

1.8 Start the three Ganglia daemons on hadoop102; start gmond on hadoop103 and hadoop104

  hadoop102:
            sudo systemctl start httpd
            sudo systemctl start gmetad
            sudo systemctl start gmond
  hadoop103:
            sudo systemctl start gmond
  hadoop104:
            sudo systemctl start gmond

1.9 Open the Ganglia page in a browser

   http://hadoop102/ganglia

2. Run a Flume Job to Test the Monitoring

2.1 Start the Flume agents (start the downstream agents first, then the upstream agent)

  bin/flume-ng agent -n a2 -c conf -f job/group1/a2.conf -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=hadoop102:8649
  bin/flume-ng agent -n a3 -c conf -f job/group1/a3.conf -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=hadoop102:8649
  bin/flume-ng agent -n a1 -c conf -f job/group1/a1.conf -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=hadoop102:8649
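The three launch commands differ only in the agent name and job file, so the Ganglia reporting flags can be factored out. A sketch that echoes the commands rather than executing them, since flume-ng and the job files only exist on the cluster nodes (agent names and paths as above):

```shell
# Ganglia reporting flags shared by every agent
MON_OPTS="-Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=hadoop102:8649"

# Downstream agents (a2, a3) first, then the upstream agent (a1).
# Echoed here instead of executed; also saved for inspection.
for agent in a2 a3 a1; do
  echo "bin/flume-ng agent -n $agent -c conf -f job/group1/$agent.conf $MON_OPTS"
done | tee flume-launch.sample.sh
```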

2.2 Send data and watch the Ganglia monitoring graphs

Original post: https://www.cnblogs.com/xiao-bu/p/14359324.html