ELK log system

Elasticsearch installation (the search and storage component of ELK):

  yum install elasticsearch-5.4.0.rpm

 hostnamectl set-hostname linux-host1.example.com && reboot

vi /etc/security/limits.conf   # raise the maximum number of open files

*       soft    nofile  65536   # add

*       hard    nofile  65536   # add
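
The new limit takes effect on the next login session; a quick way to verify it after logging back in (a simple check for the current user):

ulimit -n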

vi /etc/sysctl.conf 

vm.max_map_count=655360  # add

Apply it: sysctl -p

Download: the 5.4.0 packages are available under "Past Releases" on the Elastic download page.

Elasticsearch configuration:

[root@linux-host1 ~]# grep "^[a-Z]" /etc/elasticsearch/elasticsearch.yml

cluster.name: elk-cluster   # cluster name; must be identical on every node of the cluster

node.name: elk-node1    # node name

path.data: /data/elkdata   # data path

path.logs: /data/logs   # log path

bootstrap.memory_lock: true   # lock the heap in memory (requires the LimitMEMLOCK change below)

network.host: 192.168.122.1   # listen address

http.port: 9200   # REST/query port; 9300 is used for communication between Elasticsearch nodes

discovery.zen.ping.unicast.hosts: ["192.168.122.1"]   # unicast discovery host list

mkdir -pv /data/{elkdata,logs}

 chown elasticsearch.elasticsearch /data/ -R


vi /etc/elasticsearch/jvm.options

-Xms2g

-Xmx2g

Set the minimum and maximum heap sizes to the same value so the JVM does not repeatedly grow and release memory.

Remove the memory-lock limit for the service:

vi /usr/lib/systemd/system/elasticsearch.service

LimitMEMLOCK=infinity   # add under the [Service] section

systemctl daemon-reload

systemctl start elasticsearch.service

ss -tnl   # check that the ports are listening

ps -ef | grep java   # check the running JVM and its heap settings

tail /data/logs/elk-cluster.log   # check the log to confirm the service started

[2019-07-13T09:47:03,528][INFO ][o.e.n.Node ] [elk-node1]   started

[2019-07-13T09:47:03,541][INFO ][o.e.g.GatewayService] [elk-node1] recovered [0] indices into cluster_state

LISTEN      0      128         ::ffff:192.168.122.1:9200                                      :::*                  

LISTEN      0      128         ::ffff:192.168.122.1:9300                                      :::*
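
To confirm that memory locking actually took effect, the node info API reports an mlockall flag (a quick check; adjust the address to the one your node listens on):

curl "http://192.168.122.1:9200/_nodes?pretty" | grep mlockall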

Check the cluster status

 curl http://192.168.81.100:9200/_cluster/health?pretty=true

  "cluster_name" : "elk-cluster",  集群名称

  "status" : "green",    green表示状态正常

  "timed_out" : false,

  "number_of_nodes" : 1,

  "number_of_data_nodes" : 1,

  "active_primary_shards" : 0,

  "active_shards" : 0,

  "relocating_shards" : 0,

  "initializing_shards" : 0,

  "unassigned_shards" : 0,

  "delayed_unassigned_shards" : 0,

  "number_of_pending_tasks" : 0,

  "number_of_in_flight_fetch" : 0,

  "task_max_waiting_in_queue_millis" : 0,

  "active_shards_percent_as_number" : 100.0

Install the head plugin to monitor the cluster

yum install git npm  -y

  • git clone git://github.com/mobz/elasticsearch-head.git
  • cd elasticsearch-head
  • npm install
  • npm run start &  
  • open http://192.168.81.100:9100/

Compound query (复合查询) in the head UI:

POST index/test1 with the document body below to create a test document:

{"name":"hello","job":"boss"}

Then check the result on the Overview (概览) tab.

 

Enable CORS in Elasticsearch so that head is allowed to connect:

vi /etc/elasticsearch/elasticsearch.yml

http.cors.enabled: true

http.cors.allow-origin: "*"   # allowed origins; required for head to connect
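
Restart Elasticsearch after changing the file. A quick way to confirm CORS is enabled (a sketch, assuming the node answers on 192.168.81.100:9200 as in the later examples) is to send a request with an Origin header and look for Access-Control-Allow-Origin in the response:

curl -I -H "Origin: http://192.168.81.100:9100" http://192.168.81.100:9200/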

 

Installing Logstash (log collection)

yum install logstash-5.4.0.rpm -y

List the files shipped in the package (to find the config paths):

rpm -qpl logstash-5.4.0.rpm

Configuration files are stored in:

/etc/logstash/conf.d/

Test:

/usr/share/logstash/bin/logstash -e 'input{ stdin {} } output {stdout {codec => rubydebug }}'

input{ stdin {} } output {stdout {codec => rubydebug }}

input: where events come from; stdin reads from standard input

output: where events go; stdout writes to standard output (prints to the screen)

codec => rubydebug: the output format/encoding

Each plugin's parameters are given inside its curly braces.

    "@timestamp" => 2019-07-13T05:28:31.763Z,  当前事件发生时间

      "@version" => "1",    事件版本号

          "host" => "linux-host1.example.com" ,  发生的主机

       "message" => "hrello" 日志内容

/usr/share/logstash/bin/logstash -e 'input{ stdin {} } output {file {path => "/tmp/test-%{+YYYY.MM.dd}.log.tar.gz" gzip=>true }}'

Write the output to a file, with gzip compression enabled.
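
To verify what was written, the file can be read back with zcat (it is plain gzip data even though the example name ends in .tar.gz):

ls -l /tmp/test-*.log.tar.gz

zcat /tmp/test-*.log.tar.gz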

/usr/share/logstash/bin/logstash -e 'input{ stdin {} } output { elasticsearch { hosts=> ["192.168.81.100:9200"] index=> "logstash-test-%{+YYYY.MM.dd}" }}'  

Send the output to Elasticsearch.

On the Elasticsearch node the index data ends up under /data/elkdata/nodes/0/indices/.
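
The new index can also be confirmed with the _cat API:

curl http://192.168.81.100:9200/_cat/indices?v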

Install Kibana (the web UI)

 yum install kibana-5.4.0-x86_64.rpm -y

Configuration file:

 vim /etc/kibana/kibana.yml

server.port: 5601

server.host: "192.168.81.100"

elasticsearch.url: "http://192.168.81.100:9200"

systemctl restart kibana.service

systemctl enable kibana.service

systemctl enable elasticsearch.service

systemctl enable logstash.service

Check the status:

ss -ntl

LISTEN     0     128     192.168.81.100:5601

http://192.168.81.100:5601/status

Add the index that was just sent to Elasticsearch as a Kibana index pattern:

Index name or pattern

[logstash-test-]YYYY.MM.DD

Select "Use event times to create index names".

For "Index pattern interval",

choose Daily, Hourly, or Monthly to match how the index name rolls over.

Finally, click Discover in the left-hand menu.

Collecting system log files

 vi /etc/logstash/conf.d/system_log.conf

     

input {   # where events come from

  file {

        path => "/var/log/messages"   # log file path

        type => "systemlog"   # define the event type

        start_position => "beginning"   # where to start reading: the beginning or the end of the file

        stat_interval => "2"   # how often (in seconds) to check the file for new data

        }

}

output {   # where events are sent

        elasticsearch {

                        hosts => ["192.168.81.100:9200"]   # Elasticsearch host

                        index => "logstash-systemlog-%{+YYYY.MM.dd}"   # index name, later matched by the Kibana index pattern

                        }

        }

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system_log.conf -t   # check the configuration syntax

chmod 644 /var/log/messages   # make the file readable by the logstash user

systemctl restart logstash.service   # after the restart, the logs defined in the config file start being collected

Add it in the Kibana UI:

[logstash-systemlog-]YYYY.MM.DD   # index pattern

Use the Daily interval, then open Discover and select the time range on the timeline.

system_log.conf with conditionals: one config file can collect several inputs and write to several outputs:

input {

  file {

        path => "/var/log/messages"

        type => "systemlog"

        start_position => "beginning"

        stat_interval => "2"

        }

  file {

        path => "/var/log/lastlog"

        type => "system-last"        

        start_position => "beginning"

        stat_interval => "2"

        }

}

output {

  if [type] == "systemlog" {

        elasticsearch {

                hosts => ["192.168.81.100:9200"]

                index => "logstash-systemlog-%{+YYYY.MM.dd}"

        }

        file {

                path => "/tmp/system.log"

        }

        }

  if [type] == "system-last" {

        elasticsearch {

                hosts => ["192.168.81.100:9200"]

                index => "logstash-lastlog-%{+YYYY.MM.dd}"

                }}

}

The input reads two files; the output uses conditionals, and when the first condition matches, the event is written to both Elasticsearch and a local file.

The Kibana index patterns are:

[logstash-systemlog-]YYYY.MM.DD

[logstash-lastlog-]YYYY.MM.DD

Test the syntax again:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system_log.conf -t
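
Once the syntax check passes and Logstash is restarted, both indices should appear in Elasticsearch and systemlog events should also land in the local file; for example:

curl "http://192.168.81.100:9200/_cat/indices?v" | grep -E "systemlog|lastlog"

tail -f /tmp/system.log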

Collecting Nginx logs

yum install gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel -y

tar xf nginx-1.10.3.tar.gz

cd nginx-1.10.3/

./configure --prefix=/usr/local/nginx-1.10.3

make && make install

Create a symlink:  ln -sv /usr/local/nginx-1.10.3/ /usr/local/nginx

vi conf/nginx.conf

Add:

         location /nginxweb {

            root   html;

            index  index.html index.htm;

        }

cd html/

mkdir nginxweb

cd nginxweb

echo "Nginx web" > index.html

/usr/local/nginx/sbin/nginx

ss -ntl

LISTEN     0      128     *:80

Enable JSON-format access logs

vi conf/nginx.conf

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '

    #                  '$status $body_bytes_sent "$http_referer" '

    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

   log_format access_json '{"@timestamp":"$time_iso8601",'

                '"host":"$server_addr",'

                '"clientip":"$remote_addr",'

                '"size":"$body_bytes_sent",'

                '"responsetime":"$request_time",'

                '"upstreamtime":"$upstream_response_time",'

                '"upstreamhost":"$upstream_addr",'

                '"http_host":"$host",'

                '"url":"$uri",'

                '"domain":"$host",'

                '"xff":"$http_x_forwarded_for",'

                '"referer":"$http_referer",'

                '"status":"$status"}';

        access_log /var/log/nginx/access.log access_json;
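
The /var/log/nginx/ directory does not exist by default when nginx is built from source, so create it and reload nginx before testing (adjust the paths if your layout differs):

mkdir -p /var/log/nginx

/usr/local/nginx/sbin/nginx -s reload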

Install httpd-tools and simulate requests against Nginx:

 yum install httpd-tools -y

ab -n1000 -c100 http://192.168.81.100/nginxweb/index.html

1000 requests with a concurrency of 100.
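
Each request should now produce one JSON object per line in the access log; a quick check:

tail -1 /var/log/nginx/access.log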

Logstash config file:

/etc/logstash/conf.d/nginx_accesslog.conf

input {

  file {

    path => "/var/log/nginx/acees.log"

    type => "nginx-access-log"

    start_position => "beginning"

    stat_interval => "2"

  }

}

output {

  elasticsearch {

    hosts => ["192.168.81.100:9200"]

    index => "logstash-nginx-access-log-%{+YYYY.MM.dd}"

}

}

Test the config file:

 /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx_accesslog.conf  -t

Run Logstash with the config file to start collecting:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx_accesslog.conf

With the log in JSON format it is easy to pull individual fields out of each record, for example in Python:

log={"@timestamp":"2019-07-14T15:17:51+08:00","host":"192.168.81.100","clientip":"192.168.81.100",
     "size":"10","responsetime":"0.010","upstreamtime":"-","upstreamhost":"-",
      "http_host":"192.168.81.100","url":"/nginxweb/index.html",
     "domain":"192.168.81.100","xff":"-","referer":"-","status":"200"}

# fields can be read directly from the dict
ip = log.get('clientip')
print(ip)

# or iterate over all key/value pairs
for k, v in log.items():
    print(k, ":", v)

# a real line read from access.log would first be parsed with json.loads(line)

Kibana index pattern:

[logstash-nginx-access-log-]YYYY.MM.DD

Collecting Tomcat logs

tar xf apache-tomcat-8.0.36.tar.gz -C /usr/local/src/

ln -sv /usr/local/src/apache-tomcat-8.0.36/  /usr/local/tomcat

cd tomcat/conf/ 

cd /usr/local/tomcat/webapps/

mkdir webdir

echo "<h1> Tomcat Page</h1>" > webdir/index.html

cp webdir/ webdir2 -r

cd /usr/local/tomcat

bin/startup.sh &

http://192.168.81.100:8080/webdir/

Configure the Tomcat access log

Modify the AccessLogValve entry near the end of the file:

vi conf/server.xml

        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"

               prefix="tomcat_access_log" suffix=".log"

pattern="%h %l %u %t &quot;%r&quot; %s %b" />

Then restart Tomcat:

bin/shutdown.sh

bin/startup.sh

Logstash config file (/etc/logstash/conf.d/tomcat_access.conf):

input {

  file{

    path => "/usr/local/tomcat/logs/tomcat_access_log.*.log"

    type => "tomcat_access_log"

    start_position => "beginning"

    stat_interval => "2"    

  }

}

output {

  if [type] == "tomcat_access_log" {

  elasticsearch {

    hosts => ["192.168.81.100:9200"]

    index => "logstash-tomcat-accesslog-%{+YYYY.MM.dd}"

 }

}

}

Check the syntax:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tomcat_access.conf -t

Start head (if it is not already running):

cd elasticsearch-head/

npm run start &  

If the syntax is OK, restart Logstash:

systemctl restart logstash.service

Add the index pattern in Kibana:

[logstash-tomcat-accesslog-]YYYY.MM.DD

Generate test traffic:

 ab -n1000 -c100  http://192.168.81.100:8080/webdir2/index.html
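
The requests should show up in the Tomcat access log (the file name includes the date) and then in the new index:

tail -1 /usr/local/tomcat/logs/tomcat_access_log.*.log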

Collecting Java logs: multiline (multi-line matching)

https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html

input {

  stdin {

    codec => multiline {   # multi-line matching

      pattern => "pattern, a regexp"   # regular expression that identifies the start of a log entry

      negate => "true" or "false"   # if true, lines that do NOT match the pattern are merged

      what => "previous" or "next"   # merge them into the previous event or the next one

    }

  }

}

/usr/share/logstash/bin/logstash -e 'input { stdin {codec => multiline {pattern => "^\[" negate => true what => "previous" } }} output{ stdout { codec => rubydebug }}'

input { stdin { codec => multiline { pattern => "^\[" negate => true what => "previous" } } }

Multi-line matching: the pattern matches lines that begin with "[", negate => true means lines that do NOT begin with "[" are merged, and what => "previous" appends them to the line above. Everything up to the next line starting with "[" therefore becomes one event, which keeps a Java log entry together with its stack trace.

Example: collecting the Elasticsearch log itself

cat java.conf

input {

  file {

    path => "/data/logs/elk-cluster.log"

    type => "elasticsearch-java-log"

    start_position => "beginning"

    stat_interval => "2"

    codec => multiline

    {pattern => "^["

     negate => true

     what => "previous" }

   }}

output{

  if [type] == "elasticsearch-java-log" {

  elasticsearch {

    hosts => ["192.168.81.100:9200"]

    index => "elasticsearch-java-log-%{+YYYY.MM.dd}"

   }}

}

Check the syntax:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf -t

systemctl restart logstash.service

tail -f /var/log/logstash/logstash-plain.log   # check whether Logstash started successfully

Kibana index pattern:

[elasticsearch-java-log-]YYYY.MM.DD

Collecting TCP logs

vi /etc/logstash/conf.d/tcp.conf

input {

  tcp {

    port => 5600

    mode => "server"

    type => "tcplog"

 }

}

output {

  stdout {

    codec => rubydebug

}

}

Test the config file:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf -t

Install the nc (netcat) network tool:

yum install nc -y

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf   # start Logstash in the foreground

echo "tscpdata" | nc 192.168.81.100 5600

The echoed string is the payload; nc sends it to the given remote address and port.

nc 192.168.81.100 5600 < /etc/passwd

Using the bash pseudo-device:

echo "ssssssssddddddddddddddd" > /dev/tcp/192.168.81.100/5600

Config file to get the TCP input into Kibana, combined with the Tomcat input:

tomcat_tcp.conf

input {

  file{

    path => "/usr/local/tomcat/logs/tomcat_access_log.*.log"

    type => "tomcat_access_log"

    start_position => "beginning"

    stat_interval => "2"

  }

  tcp {

    port => 7800

    mode => "server"

    type => "tcplog"

  }

}

output {

  if [type] == "tomcat_access_log" {

  elasticsearch {

    hosts => ["192.168.81.100:9200"]

    index => "logstash-tomcat-accesslog-%{+YYYY.MM.dd}"

 }}

  

  if [type] == "tcplog" {

  elasticsearch {

    hosts => ["192.168.81.100:9200"]

    index => "tcp-81100-%{+YYYY.MM.dd}" 

 }}     

}

Test the config file:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tomcat_tcp.conf -t

systemctl restart logstash

ss -ntl   # confirm that port 7800 is listening

Kibana index pattern:

[tcp-81100-]YYYY.MM.DD

Original post: https://www.cnblogs.com/leiwenbin627/p/11190095.html