Collecting Microservice Logs with ELK

Prerequisite: a CentOS 7 host with a working Docker environment.

Installing Docker Compose

1. Download a stable Docker Compose release:

curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

2. Make the binary executable:

chmod +x /usr/local/bin/docker-compose

3. Create a symlink so the command is on the default PATH:

ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

Verify the installation:

docker-compose -v


Setting up ELK with Docker Compose

By default Elasticsearch stores its indices in mmapfs directories. The operating system's default mmap count is too low and can cause out-of-memory errors, so raise the limit with:

sysctl -w vm.max_map_count=262144
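Note that sysctl -w only changes the running kernel, so the setting is lost on reboot. To make it permanent, the standard approach (an extra step beyond the original instructions) is to append it to /etc/sysctl.conf:

```shell
# Persist the mmap count limit across reboots
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
# Reload the file so the running kernel picks it up immediately
sysctl -p
```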

Create the Elasticsearch data mount path:

mkdir -p /app/elasticsearch/data

Grant 777 permissions on that path so the elasticsearch user inside the container can write to it:

chmod 777 /app/elasticsearch/data

Create the Elasticsearch plugin mount path:

mkdir -p /app/elasticsearch/plugins

Create the Logstash configuration directory:

mkdir -p /app/logstash

Create the logstash-app.conf configuration file in that directory:

vi /app/logstash/logstash-app.conf

input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => "es:9200"
    index => "app-logstash-%{+YYYY.MM.dd}"
  }
}
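The json_lines codec reads one JSON object per line, each terminated by a newline. As a sanity check of that wire format (the host IP is the one used later in this article, and nc is assumed to be available), a hand-written event can be pushed into the TCP input once the stack is running:

```shell
# A json_lines event is a single-line JSON object followed by a newline
EVENT='{"message":"hello from shell","level":"INFO"}'
printf '%s\n' "$EVENT"
# Send it to the Logstash TCP input (requires the stack from this article):
# printf '%s\n' "$EVENT" | nc 192.168.1.75 4560
```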

Create a directory for the ELK Docker Compose file:

mkdir -p /app/elk

Create docker-compose.yml in that directory:

vim /app/elk/docker-compose.yml

version: '3'
services:
  elasticsearch:
    image: elasticsearch:6.4.1
    container_name: elasticsearch
    environment:
      - "cluster.name=elasticsearch" # cluster name
      - "discovery.type=single-node" # start as a single node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # cap the JVM heap at 512 MB
    volumes:
      - /app/elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - /app/elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: kibana:6.4.1
    container_name: kibana
    links:
      - elasticsearch:es # alias the elasticsearch container as "es"
    depends_on:
      - elasticsearch
    environment:
      - "ELASTICSEARCH_URL=http://es:9200" # Kibana 6.x setting; "es" resolves thanks to the link above
    ports:
      - 5601:5601
  logstash:
    image: logstash:6.4.1
    container_name: logstash
    volumes:
      - /app/logstash/logstash-app.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch
    links:
      - elasticsearch:es
    ports:
      - 4560:4560

Bring the stack up with docker-compose:

cd /app/elk
docker-compose up -d
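After the containers start, the stack can be checked from the shell. These verification commands assume the ports mapped in the compose file above and that they are run on the Docker host:

```shell
# All three containers should report State "Up"
docker-compose ps
# Elasticsearch answers on 9200 with a JSON banner
curl -s http://localhost:9200
# Kibana returns HTTP 200 on 5601 once it has finished starting
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601
```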


Installing the json_lines plugin in Logstash

1. Install RubyGems (the gem command; on CentOS 7 the package is named rubygems):

yum install -y rubygems

2. Check and change the gem source

Check the current sources:

gem sources -l

Switch to the Tsinghua mirror:

gem sources --add https://mirrors.tuna.tsinghua.edu.cn/rubygems/ --remove https://rubygems.org/

3. Install bundler:

gem install bundler -v 1.17.3

4. Point bundler at the mirror as well:

bundle config mirror.https://rubygems.org https://mirrors.tuna.tsinghua.edu.cn/rubygems

5. Change the gem source used inside the Logstash container

docker exec -it logstash /bin/bash

Inside the container, edit the Gemfile in /usr/share/logstash (the container's working directory) and change the default install source to the Tsinghua mirror:

vi Gemfile

source "https://mirrors.tuna.tsinghua.edu.cn/rubygems/"

Install the plugin:

cd /usr/share/logstash/bin
./logstash-plugin install logstash-codec-json_lines
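Still inside the container, the install can be confirmed by listing the plugins:

```shell
# The new codec should appear in the plugin list
./logstash-plugin list | grep json_lines
```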


Then open the Kibana UI on port 5601; in my case that is http://192.168.1.75:5601.


Adding Logstash to the Microservice

Maven dependency:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.1</version>
</dependency>

Create logback-spring.xml under the resources directory:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <contextName>boot</contextName>
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <property name="log.path" value="log/cloud-eureka" />
    <property name="log.maxHistory" value="15" />
    <property name="log.colorPattern" value="%magenta(%d{yyyy-MM-dd HH:mm:ss}) %highlight(%-5level) %boldCyan([${springAppName:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-Span-Export:-}]) %yellow(%thread) %green(%logger) %msg%n"/>
    <property name="log.pattern" value="%d{yyyy-MM-dd HH:mm:ss} %-5level [${springAppName:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-Span-Export:-}] %thread %logger %msg%n"/>

    <!-- console output -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${log.colorPattern}</pattern>
        </encoder>
    </appender>

    <!-- file output -->
    <appender name="file_info" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/info/info.%d{yyyy-MM-dd}.log</fileNamePattern>
            <MaxHistory>${log.maxHistory}</MaxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${log.pattern}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- appender that ships logs to Logstash over TCP -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.1.75:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <appender name="file_error" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/error/error.%d{yyyy-MM-dd}.log</fileNamePattern>
        </rollingPolicy>
        <encoder>
            <pattern>${log.pattern}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- logback allows only one effective root logger; declaring it twice ends up
         with level "info" and all four appenders attached, so merge the two -->
    <root level="info">
        <appender-ref ref="console" />
        <appender-ref ref="file_info" />
        <appender-ref ref="file_error" />
        <appender-ref ref="logstash" />
    </root>
</configuration>

Then start the microservice.
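Before moving on to Kibana, it is possible to confirm that log events are reaching Elasticsearch by querying its REST API directly (host and index pattern taken from this article's setup):

```shell
# List the daily indices created by the Logstash output
curl -s 'http://192.168.1.75:9200/_cat/indices/app-logstash-*?v'
# Fetch one document to confirm events are flowing
curl -s 'http://192.168.1.75:9200/app-logstash-*/_search?size=1&pretty'
```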

In Kibana, click Index Patterns.


Enter app-logstash-* and click Next step.


Select @timestamp as the time filter field and click Create index pattern.


Click Discover to browse the incoming log events.


Original article (Chinese): https://www.cnblogs.com/wwjj4811/p/14563015.html