2 Plugin Management

input {
    stdin {
        add_field => { "@timestamp" => "2016-08-31T06:35:18.536Z" }
        codec => "plain"
        tags  => ["add"]
        type  => "std"
    }
}

output {
    stdout {
        codec => rubydebug
    }
}

zjtest7-frontend:/usr/local/logstash-2.3.4/config# ../bin/logstash -f stdin.conf  
Settings: Default pipeline workers: 1
Pipeline main started
Hello World
A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: <LogStash::Inputs::Stdin add_field=>{"@timestamp"=>"2016-08-31T06:35:18.536Z"}, codec=><LogStash::Codecs::Plain charset=>"UTF-8">, tags=>["add"], type=>"std">
  Error: The field '@timestamp' must be a (LogStash::Timestamp, not a Array (["2016-08-31T07:58:54.464Z", "2016-08-31T06:35:18.536Z"]) {:level=>:error}

After the fix:

zjtest7-frontend:/usr/local/logstash-2.3.4/config# cat stdin.conf 
input {
    stdin {
        add_field => { "@timestamp1" => "2016-08-31T06:35:18.536Z" }
        codec => "plain"
        tags  => ["add"]
        type  => "std"
    }
}

output {
    stdout {
        codec => rubydebug
    }
}

zjtest7-frontend:/usr/local/logstash-2.3.4/config# ../bin/logstash -f stdin.conf  
Settings: Default pipeline workers: 1
Pipeline main started
Hello World
{
        "message" => "Hello World",
       "@version" => "1",
     "@timestamp" => "2016-08-31T08:01:09.018Z",
           "type" => "std",
    "@timestamp1" => "2016-08-31T06:35:18.536Z",
           "tags" => [
        [0] "add"
    ],
           "host" => "0.0.0.0"
}

2.1.3 TCP Input:

In the future you may well use a Redis server or some other message queue system to play the role of the Logstash broker.

However, Logstash also ships with its own TCP/UDP input plugins.
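As an illustrative sketch (not from the original post), assuming the standard port / mode / ssl_enable options of the tcp input plugin, a minimal TCP input configuration might look like this:

input {
    tcp {
        # listen on TCP port 8888 as a server; 8888 is an arbitrary example port
        port       => 8888
        mode       => "server"
        ssl_enable => false
    }
}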


2.2 Codec Configuration:

In fact, our very first "Hello World" example already used a codec: rubydebug is itself a codec, although it is generally only used within the stdout plugin, as a configuration-testing or debugging tool.


2.2.2 Multiline Event Encoding:


Logstash provides the codec/multiline plugin precisely for this purpose! Of course, the multiline codec can also be used for other, similar stack-style output, such as Linux kernel logs.
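The m.conf used in the run below is not shown in the original post. A minimal sketch that is consistent with the output that follows, using the multiline codec's standard pattern / negate / what options, could be:

input {
    stdin {
        codec => multiline {
            # any line NOT starting with "[" is merged into the previous event
            pattern => "^\["
            negate  => true
            what    => "previous"
        }
    }
}

output {
    stdout {
        codec => rubydebug
    }
}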


zjtest7-frontend:/usr/local/logstash-2.3.4/config# ../bin/logstash -f m.conf 
Settings: Default pipeline workers: 1
Pipeline main started
[Aug/08/08 14:54:03] hello world

[Aug/08/08 14:54:03] hello world
{
    "@timestamp" => "2016-08-31T09:00:45.163Z",
       "message" => "[Aug/08/08 14:54:03] hello world",
      "@version" => "1",
          "host" => "0.0.0.0"
}
he[Aug/08/08 14:54:03] hello logstash   
best practice
hello scan
[Aug/08/08 14:54:03] end
{
    "@timestamp" => "2016-08-31T09:01:18.622Z",
       "message" => "[Aug/08/08 14:54:03] hello world
he[Aug/08/08 14:54:03] hello logstash
best practice
hello scan",
      "@version" => "1",
          "tags" => [
        [0] "multiline"
    ],
          "host" => "0.0.0.0"
}


The principle behind this plugin is actually quite simple: it keeps appending each incoming line to the previous one, until a newly arrived line matches the ^\[ regular expression.


2.3.2 grok Regular Expression Capture:


1. A named capture group is written as (?<grp_name>...); it can be back-referenced with \k<grp_name>.

2. Named-capture results are stored in the %+ hash; a group's value is retrieved with $+{grp_name}.


zjtest7-frontend:/root/test# cat a2.pl 
my $str = "begin 123.456 end";
if ($str =~ /\s+(?<request_time>\d+(?:\.\d+)?)\s+/) {
    my ($request_time) = ($+{request_time});
    print "$request_time\n";
}
zjtest7-frontend:/root/test# perl a2.pl 
123.456
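The same named-capture syntax works inside Logstash's grok filter. As an illustrative sketch (not from the original post; the field name request_time is just an example), the inline pattern below extracts the number between "begin" and "end" into a request_time field:

filter {
    grok {
        # (?<request_time>...) is an inline named capture, just as in the Perl example
        match => { "message" => "begin (?<request_time>\d+(?:\.\d+)?) end" }
    }
}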




2.4.1 Output Plugins


1. Configuration example

output {
    elasticsearch {
        host               => "192.168.0.2"
        protocol           => "http"
        index              => "logstash-%{type}-%{+YYYY.MM.dd}"
        index_type         => "%{type}"
        workers            => 5
        template_overwrite => true
    }
}
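Note that this example uses option names from older Logstash releases; in the logstash-output-elasticsearch plugin shipped with the 2.x series used elsewhere in this post, host/protocol were replaced by hosts, and index_type by document_type. A hedged sketch of the equivalent 2.x-style configuration:

output {
    elasticsearch {
        # hosts takes an array of "address:port" strings in 2.x
        hosts              => ["192.168.0.2:9200"]
        index              => "logstash-%{type}-%{+YYYY.MM.dd}"
        document_type      => "%{type}"
        workers            => 5
        template_overwrite => true
    }
}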

Original article: https://www.cnblogs.com/hzcya1995/p/13350318.html