Collecting logs from Kubernetes containers and nginx

Log collection

The Internet-facing domain name servers run nginx; Filebeat collects their logs in JSON format.

The Kubernetes cluster uses Alibaba's open-source log-pilot to collect container logs.

Collecting nginx logs

In nginx.conf, define a JSON log format:

http {
    log_format json  '{ "time_local": "$time_local", '
                          '"remote_addr": "$remote_addr", '
                          '"referer": "$http_referer", '
                          '"request": "$request", '
                          '"status": $status, '
                          '"bytes": $body_bytes_sent, '
                          '"agent": "$http_user_agent", '
                          '"x_forwarded": "$http_x_forwarded_for", '
                          '"up_addr": "$upstream_addr",'
                          '"up_host": "$upstream_http_host",'
                          '"upstream_time": "$upstream_response_time",'
                          '"request_time": "$request_time"'
                          ' }';
}
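
The log_format above only defines the format; an access_log directive (in the http or server block) has to reference it by name. A minimal example, with the path assumed to match the Filebeat input further down:

    access_log /var/log/nginx/access.log json;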

Installing Elasticsearch

Install a JDK before installing Elasticsearch:

tar xvf jdk-8u211-linux-x64.tar.gz -C /usr/local/
ln -sv /usr/local/jdk1.8.0_211/ /usr/local/jdk
ln -sv /usr/local/jdk1.8.0_211/bin/java /usr/bin/java
echo 'export JAVA_HOME=/usr/local/jdk' >> /etc/profile
echo 'export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH' >> /etc/profile
echo 'export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar' >> /etc/profile
source /etc/profile
java -version

Install the Elasticsearch RPM

rpm -ivh elasticsearch-6.6.0.rpm

Edit /etc/elasticsearch/elasticsearch.yml

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
# enable memory locking
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
# allow cross-origin (CORS) requests
http.cors.enabled: true
http.cors.allow-origin: "*"

Set the JVM heap (the memory to be locked) in /etc/elasticsearch/jvm.options

# Keep Xms and Xmx identical so the heap is allocated up front and not taken by
# other processes; about half of the machine's memory is a reasonable value, adjusted to actual utilization.
-Xms4g
-Xmx4g

The RPM does most of the system tuning for you; one extra setting is needed in /usr/lib/systemd/system/elasticsearch.service:

[Service]
LimitMEMLOCK=infinity

Start Elasticsearch

systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
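
Once it is running, a quick check on the Elasticsearch host confirms that the node responds and that memory locking took effect; the second command uses the standard node-info API with a filter_path:

curl -s http://localhost:9200/
# should report "mlockall": true
curl -s 'http://localhost:9200/_nodes?filter_path=**.mlockall'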

Installing Kibana

rpm -ivh kibana-6.6.0-x86_64.rpm

Edit /etc/kibana/kibana.yml

server.port: 5601
server.host: "kibana"
elasticsearch.hosts: ["http://elasticsearch:9200"]
kibana.index: ".kibana" # do not delete this index from Elasticsearch
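
kibana and elasticsearch are used as hostnames throughout this setup and are assumed to resolve on every host involved, for example via /etc/hosts entries like these (the IP addresses are placeholders):

10.0.0.11  elasticsearch
10.0.0.12  kibana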

Start Kibana

systemctl start kibana
systemctl enable kibana
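
Kibana can take a little while to come up; its status endpoint gives a quick health check (using the hostname above):

curl -s http://kibana:5601/api/status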

Installing Filebeat

rpm -ivh filebeat-6.6.0-x86_64.rpm

Edit /etc/filebeat/filebeat.yml

The JSON parsing options are documented at https://www.elastic.co/guide/en/beats/filebeat/6.6/filebeat-input-log.html

filebeat.inputs:
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/access.log
  # the next two settings enable JSON decoding of each log line
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"] 

- type: log
  enabled: true 
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

setup.kibana:
  host: "kibana:5601"

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  indices:
    - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"

Collecting logs from Kubernetes containers

Alibaba's open-source log collector, deployed as a DaemonSet (log-pilot.yaml):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-pilot
  labels:
    app: log-pilot
  # namespace to deploy into
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-pilot
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: log-pilot
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      # toleration so the DaemonSet can also run on master nodes
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: log-pilot
        # for available versions see https://github.com/AliyunContainerService/log-pilot/releases
        image: registry.cn-hangzhou.aliyuncs.com/acs/log-pilot:0.9.6-filebeat
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 200m
            memory: 200Mi
        env:
          - name: "NODE_NAME"
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: "LOGGING_OUTPUT"
            value: "elasticsearch"
          # make sure the cluster can reach the Elasticsearch endpoint
          - name: "ELASTICSEARCH_HOSTS"
            value: "elasticsearch:9200"
          # Elasticsearch credentials, if authentication is enabled
          # - name: "ELASTICSEARCH_USER"
          #   value: "{es_username}"
          # - name: "ELASTICSEARCH_PASSWORD"
          #   value: "{es_password}"
        volumeMounts:
        - name: sock
          mountPath: /var/run/docker.sock
        - name: root
          mountPath: /host
          readOnly: true
        - name: varlib
          mountPath: /var/lib/filebeat
        - name: varlog
          mountPath: /var/log/filebeat
        - name: localtime
          mountPath: /etc/localtime
          readOnly: true
        livenessProbe:
          failureThreshold: 3
          exec:
            command:
            - /pilot/healthz
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
      terminationGracePeriodSeconds: 30
      volumes:
      - name: sock
        hostPath:
          path: /var/run/docker.sock
      - name: root
        hostPath:
          path: /
      - name: varlib
        hostPath:
          path: /var/lib/filebeat
          type: DirectoryOrCreate
      - name: varlog
        hostPath:
          path: /var/log/filebeat
          type: DirectoryOrCreate
      - name: localtime
        hostPath:
          path: /etc/localtime
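
Apply the manifest and confirm that a log-pilot pod is running on every node:

kubectl apply -f log-pilot.yaml
kubectl -n kube-system get pods -l app=log-pilot -o wide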

Any pod whose logs should be collected adds one entry to its env:

env:
# 1. "stdout" is a reserved keyword: it tells log-pilot to collect the container's standard output.
# 2. This routes the stdout logs into the tos index in Elasticsearch.
- name: aliyun_logs_tos  # the aliyun_logs_ prefix is fixed; the suffix identifies which system the logs belong to (this pod is part of the tos system)
  value: "stdout"

The resulting index name follows the pattern tos-YYYY.MM.DD, for example tos-2021.03.01.
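
For context, a minimal sketch of where this env entry sits in a Deployment spec (the image and names are placeholders, not from the original setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tos-app            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tos-app
  template:
    metadata:
      labels:
        app: tos-app
    spec:
      containers:
      - name: tos-app
        image: nginx:1.19   # placeholder image
        env:
        - name: aliyun_logs_tos   # log-pilot picks this up and ships stdout to ES
          value: "stdout"

Once the pod produces output, the daily index should appear:

curl -s 'http://elasticsearch:9200/_cat/indices/tos-*?v'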

A little progress every day.
Original post: https://www.cnblogs.com/Otiger/p/14469863.html