Troubleshooting empty geoip values in ELK Logstash

Our stack is Elasticsearch + Kibana + Logstash + Filebeat.

Filebeat on each client collects logs and ships them to Logstash on the server, where filter rules process them before they are stored in Elasticsearch and displayed in Kibana.

Nginx logs are used as the example throughout.

1. The problem I ran into: the filter rules in Logstash did not seem to take effect, and the newly created index in Kibana never contained any geoip fields.

The Logstash config file:

input {
  beats {
    port => 5044
    codec => json {
      charset => "UTF-8"
    }
  }
}

filter {
  grok {
    match => { "message" => '%{DATA:http_x_forwarded_for} - %{DATA:remote_user} \[%{HTTPDATE:time_local}\] "%{DATA:request_uri}"%{NUMBER:status:int} %{NUMBER:body_bytes_sent:int} %{DATA:http_referer} "%{DATA:http_user_agent}"' }
  }
  if "63nginx_access" in [tags] {
    json {
      source => "message"
    }
    if [user_ua] != "-" {
      useragent {
        target => "agent"   # put the parsed user-agent details into a separate field
        source => "user_ua" # which field of the event to analyze
      }
    }
    if [http_x_forwarded_for] != "-" {
      geoip {
        source => "http_x_forwarded_for"
        target => "geoip"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
      }
      mutate {
        convert => [ "[geoip][coordinates]", "float" ]
      }
    }
  }
}
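The two add_field calls above may look redundant, but in the Logstash config DSL adding the same field twice turns it into an array, so [geoip][coordinates] ends up as [longitude, latitude], which Elasticsearch can index as an array-style geo_point once the values are floats. A plain-Python sketch of that transformation (the coordinate values below are made up for illustration, not real geoip output):

```python
# Made-up lookup result standing in for what the geoip filter
# would place under the "geoip" target field.
geoip = {"longitude": "116.3883", "latitude": "39.9289"}

# Two add_field calls on the same key build an array: [lon, lat],
# the element order Elasticsearch expects for an array geo_point.
coordinates = [geoip["longitude"], geoip["latitude"]]

# mutate { convert => [ "[geoip][coordinates]", "float" ] }
coordinates = [float(v) for v in coordinates]

print(coordinates)  # [116.3883, 39.9289]
```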

output {
  if [type] == "63nginx_access" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "logstash-63nginx_access-%{+YYYY.MM.dd}"
    }
  }
}

1.1 Create a Logstash test file for debugging: vim logstash.test.conf

input {
  stdin {}
}

filter {
  grok {
    match => { "message" => '%{DATA:http_x_forwarded_for} - %{DATA:remote_user} \[%{HTTPDATE:time_local}\] "%{DATA:request_uri}"%{NUMBER:status:int} %{NUMBER:body_bytes_sent:int} %{DATA:http_referer} "%{DATA:http_user_agent}"' }
  }

  if [http_x_forwarded_for] != "-" {
    geoip {
      source => "http_x_forwarded_for"
      target => "geoip"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

Start Logstash:

./bin/logstash -f logstash.test.conf

Once it is up, paste in one line of nginx log.

geoip came out empty, because our nginx http_x_forwarded_for captured two IPs. Next I tested with a single IP; it must be a public IP, since private addresses are filtered out by the rules (they do not exist in the GeoIP database).
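A quick way to check whether an address can be geolocated at all is Python's standard ipaddress module: private and reserved ranges are simply absent from the GeoIP database, so lookups on them fail.

```python
import ipaddress

# Public addresses can be found in the GeoIP database;
# private ranges (RFC 1918) cannot, so geoip lookups on them fail.
for ip in ("211.154.222.21", "192.168.1.10", "10.0.0.1"):
    kind = "private" if ipaddress.ip_address(ip).is_private else "public"
    print(ip, kind)
```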

Restart Logstash:

./bin/logstash -f logstash.test.conf

Input:

211.154.222.21 - - [26/Oct/2018:15:07:20 +0800] "GET /pp/index.php?/categories/posted-monthly-list-any-any/start-111210 HTTP/1.0"200 21761  "-""Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"

This time the geoip information is clearly populated, so the next step is to deal with the nginx log itself.
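To sanity-check the field extraction without restarting Logstash every time, the grok pattern can be sketched as an ordinary regular expression. This is a rough Python equivalent of the pattern above (not what grok compiles internally, and it strips the quotes around the referer where the original %{DATA} keeps them), run against the exact sample line:

```python
import re

# The pasted sample line from above (note the missing spaces around quotes).
LOG = ('211.154.222.21 - - [26/Oct/2018:15:07:20 +0800] '
       '"GET /pp/index.php?/categories/posted-monthly-list-any-any/start-111210 HTTP/1.0"'
       '200 21761  "-""Sogou web spider/4.0'
       '(+http://www.sogou.com/docs/help/webmasters.htm#07)"')

# Rough regex counterpart of the grok pattern used in the filter.
PATTERN = re.compile(
    r'^(?P<http_x_forwarded_for>.*?) - (?P<remote_user>.*?) '
    r'\[(?P<time_local>[^\]]+)\] '
    r'"(?P<request_uri>[^"]*)"(?P<status>\d+) (?P<body_bytes_sent>\d+)\s+'
    r'"(?P<http_referer>[^"]*)""(?P<http_user_agent>[^"]*)"$'
)

m = PATTERN.match(LOG)
print(m.group("http_x_forwarded_for"))  # 211.154.222.21
print(m.group("status"))                # 200
```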

================

Changing the nginx log format would touch too many other things, so it is better to solve this inside Logstash.

mutate {
  split => ["http_x_forwarded_for", ","]
  add_field => ["real_remote_addr", "%{[http_x_forwarded_for][0]}"]
}

When http_x_forwarded_for contains multiple IPs, the approach above extracts the first one, which is the real client address.
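What that mutate block does, sketched in plain Python (the two-IP value is a made-up example of what a proxy chain typically writes into X-Forwarded-For; the strip is a nicety here, Logstash's split keeps any surrounding spaces):

```python
# Made-up example: client IP first, then an intermediate proxy,
# as a proxy chain appends addresses to X-Forwarded-For.
xff = "211.154.222.21, 10.10.1.5"

# mutate { split => ["http_x_forwarded_for", ","] }
parts = [p.strip() for p in xff.split(",")]

# add_field => ["real_remote_addr", "%{[http_x_forwarded_for][0]}"]
real_remote_addr = parts[0]

print(real_remote_addr)  # 211.154.222.21
```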

So my final Logstash filter configuration is:

filter {
  grok {
    match => { "message" => '%{DATA:http_x_forwarded_for} - %{DATA:remote_user} \[%{HTTPDATE:time_local}\] "%{DATA:request_uri}"%{NUMBER:status:int} %{NUMBER:body_bytes_sent:int} %{DATA:http_referer} "%{DATA:http_user_agent}"' }
  }

  mutate {
    split => ["http_x_forwarded_for", ","]
    add_field => ["real_remote_addr", "%{[http_x_forwarded_for][0]}"]
  }

  geoip {
    source => "real_remote_addr"
    target => "geoip"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
}

One last note:

The index name must start with logstash- (and the Kibana index pattern with logstash-*), otherwise Kibana cannot build the map: the default Elasticsearch index template that maps [geoip][location] to the geo_point type only applies to indices matching logstash-*.

 

Original post: https://www.cnblogs.com/rutor/p/10000169.html