Using Redis as a cache for log collection

1. Deployment environment

OS version    Hostname    IP address     Services
CentOS 7.5    node        192.168.1.1    ES, Kibana
CentOS 7.5    logstash    192.168.1.2    Logstash
CentOS 7.5    redis       192.168.1.3    Redis
CentOS 7.5    app         192.168.1.4    Nginx, Filebeat

2. Setting up Redis

$ wget http://download.redis.io/releases/redis-4.0.14.tar.gz
$ tar zxf redis-4.0.14.tar.gz -C /usr/src
$ cd /usr/src/redis-4.0.14/
$ make && make install PREFIX=/usr/local/
$ cd utils/
$ ./install_server.sh 
$ ss -lnt | grep 6379
LISTEN     0      128    127.0.0.1:6379                     *:*    
$ vim /etc/redis/6379.conf
bind 192.168.1.3
port 6379
daemonize yes
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /var/log/redis_6379.log
$ /etc/init.d/redis_6379 restart
$ ss -lnt | grep 6379
LISTEN     0      128    192.168.1.3:6379                     *:*          
$ redis-cli -h 192.168.1.3
192.168.1.3:6379> 
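As a quick sanity check (an extra step, not part of the original write-up), confirm that the instance answers commands on its new bind address; also make sure TCP port 6379 is reachable from the Filebeat and Logstash hosts (firewalld/iptables).

192.168.1.3:6379> ping
PONG
192.168.1.3:6379> set test ok
OK
192.168.1.3:6379> get test
"ok"
192.168.1.3:6379> del test
(integer) 1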

3. Setting up Nginx and Filebeat

$ vim /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
$ yum -y install nginx httpd-tools
$ vim /etc/nginx/nginx.conf
# Add the following to convert the Nginx access log format to JSON
    log_format json '{ "@time_local": "$time_local", '
                        '"remote_addr": "$remote_addr", '
                        '"referer": "$http_referer", '
                        '"request": "$request", '
                        '"status": $status, '
                        '"bytes": $body_bytes_sent, '
                        '"agent": "$http_user_agent", '
                        '"x_forwarded": "$http_x_forwarded_for", '
                        '"up_addr": "$upstream_addr",'
                        '"up_host": "$upstream_http_host",'
                        '"up_resp_time": "$upstream_response_time",'
                        '"request_time": "$request_time"'
' }';  

    access_log  /var/log/nginx/access.log  json;
$ nginx	
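To double-check that the JSON format took effect (an extra verification step, assuming the default log path), validate the configuration, send one request, and look at the newest access-log line; it should be a single JSON object. If Nginx was already running when nginx.conf was edited, reload it first with nginx -s reload.

$ nginx -t
$ curl -s http://192.168.1.4/ > /dev/null
$ tail -1 /var/log/nginx/access.log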
$ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.6.0-x86_64.rpm
$ yum -y install filebeat-6.6.0-x86_64.rpm 
$ vim /etc/filebeat/filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

output.redis:
  hosts: ["192.168.1.3"]
  keys:
    - key: "nginx_access"
      when.contains:
        tags: "access"
    - key: "nginx_error"
      when.contains:
        tags: "error"
$ systemctl start filebeat
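An optional extra step: before sending traffic, let Filebeat validate its configuration file (the -c path is the default one edited above):

$ filebeat test config -c /etc/filebeat/filebeat.yml
Config OK

Then generate some test requests against Nginx: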
$ ab -c 100 -n 100 http://192.168.1.4/

4. Confirm that Redis has received the data

$ redis-cli -h 192.168.1.3
192.168.1.3:6379> keys *
1) "nginx_error"
2) "nginx_access"
192.168.1.3:6379> type nginx_error
list
192.168.1.3:6379> type nginx_access
list
192.168.1.3:6379> llen nginx_error
(integer) 100
192.168.1.3:6379> llen nginx_access
(integer) 200
192.168.1.3:6379> lrange nginx_error 0 -1
192.168.1.3:6379> lrange nginx_access 0 -1
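To inspect a single queued event (optional), fetch one list element. Each entry is a JSON document produced by Filebeat; because json.keys_under_root is enabled, the Nginx fields sit at the top level next to Filebeat's own metadata (the exact metadata fields depend on the Filebeat version):

192.168.1.3:6379> lindex nginx_access 0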

5. Setting up ES and Kibana

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.0.rpm
$ yum -y install elasticsearch-6.6.0.rpm
$ egrep -v '#|^$' /etc/elasticsearch/elasticsearch.yml 
node.name: node
path.data: /elk/data
path.logs: /elk/log
network.host: 192.168.1.1
http.port: 9200
$ mkdir -p /elk/{data,log}
$ chown elasticsearch.elasticsearch /elk -R
$ systemctl start elasticsearch
$ ss -lnt | grep 9200
LISTEN     0      128     ::ffff:192.168.1.1:9200                    :::*    
$ wget https://artifacts.elastic.co/downloads/kibana/kibana-6.6.0-x86_64.rpm
$ yum -y install kibana-6.6.0-x86_64.rpm
$ egrep -v '#|^$' /etc/kibana/kibana.yml 
server.port: 5601
server.host: "192.168.1.1"
server.name: "node"
elasticsearch.hosts: ["http://192.168.1.1:9200"]
kibana.index: ".kibana"
$ systemctl start kibana
$ ss -lnt | grep 5601
LISTEN     0      128    192.168.1.1:5601                     *:*         
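It also doesn't hurt to confirm that Elasticsearch itself answers over HTTP (an extra check, not part of the original steps); on a single-node cluster the status is typically yellow:

$ curl -s 'http://192.168.1.1:9200/_cluster/health?pretty'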

Open the Kibana page in a browser at http://192.168.1.1:5601 to confirm it is up.

6. Setting up Logstash

$ wget https://artifacts.elastic.co/downloads/logstash/logstash-6.6.0.rpm
$ yum -y install logstash-6.6.0.rpm
$ vim /etc/logstash/conf.d/redis.conf
# The file name can be anything, as long as it lives in this directory
input{
        redis {
                host => "192.168.1.3"
                port => "6379"
                db => "0"
                key => "nginx_access"
                data_type => "list"
        }

        redis {
                host => "192.168.1.3"
                port => "6379"
                db => "0"
                key => "nginx_error"
                data_type => "list"
        }
}

filter {
        mutate {
                convert => ["up_resp_time","float"]
                convert => ["request_time","float"]
        }
}

output {
        stdout {}
        if "access" in [tags] {
                elasticsearch {
                        hosts => "http://192.168.1.1:9200"
                        manage_template => false
                        index => "nginx_access-%{+yyyy.MM}"
                }
        }
        if "error" in [tags] {
                elasticsearch {
                        hosts => "http://192.168.1.1:9200"
                        manage_template => false
                        index => "nginx_error-%{+yyyy.MM}"
                }
        }
}
$ /usr/share/logstash/bin/logstash  -f /etc/logstash/conf.d/redis.conf
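Two optional checks that are not in the original write-up: syntax-check the pipeline before starting it, and, once Logstash has been running for a while, confirm that the monthly indices were created in Elasticsearch:

$ /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf --config.test_and_exit
$ curl -s 'http://192.168.1.1:9200/_cat/indices?v' | grep nginx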

Once Logstash is running, you can watch the data in Redis shrink: entries are read out by Logstash, formatted, sent on to ES, and displayed by Kibana.
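A simple way to watch the queues drain (assuming the watch utility is available) is to poll the list length:

$ watch -n 1 'redis-cli -h 192.168.1.3 llen nginx_access'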


After creating index patterns for the two indices in Kibana, the collected Nginx logs can be viewed and searched there.


Original article: https://www.cnblogs.com/lvzhenjiang/p/14199367.html