ELK

Installation


172.16.240.20   es kibana
172.16.240.30    logstash

Install JDK 1.8

Install it on both nodes


  • Install
cd /usr/local/src/

ls
  jdk-8u231-linux-x64.tar.gz
  
tar -zxf jdk-8u231-linux-x64.tar.gz 
ls
  jdk1.8.0_231  jdk-8u231-linux-x64.tar.gz
  
mv jdk1.8.0_231/ /usr/local/

/usr/local/jdk1.8.0_231/bin/java -version
  java version "1.8.0_231"
  Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
  Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)

  • Configure environment variables
vim /etc/profile
  java_home=/usr/local/jdk1.8.0_231/bin
  PATH=$PATH:$HOME/bin:$java_home

Install Kibana

Install on node 172.16.240.20

cd /usr/local/src/
tar -zxf kibana-6.6.0-linux-x86_64.tar.gz
mv kibana-6.6.0-linux-x86_64 /usr/local/kibana-6.6.0
vim /usr/local/kibana-6.6.0/config/kibana.yml
  server.port: 5601
  server.host: "0.0.0.0"
  
/usr/local/kibana-6.6.0/bin/kibana &     # quick background start, logs to the terminal
nohup /usr/local/kibana-6.6.0/bin/kibana >>/tmp/kibana.log 2>&1 &     # background start with logging

Kibana authentication via Nginx

  • Kibana listens on 127.0.0.1
  • Deploy Nginx and use it to proxy requests to Kibana
yum install -y lrzsz wget gcc gcc-c++ make pcre pcre-devel zlib zlib-devel
cd /usr/local/src/
tar -zxf nginx-1.14.2.tar.gz 
cd nginx-1.14.2/
./configure --prefix=/usr/local/nginx && make && make install
vim /etc/profile
  nginx_path=/usr/local/nginx/sbin/
  PATH=$PATH:$HOME/bin:$java_home:$nginx_path

Kibana-side configuration

vim /usr/local/kibana-6.6.0/config/kibana.yml
  server.port: 5601
  server.host: "127.0.0.1"

Nginx access control via an IP whitelist

>>>> Check the local vmnet8 IP address
ip a |grep vmnet8 |awk 'NR==2{print $2}'|cut -d '/' -f1
172.16.240.1


vim /usr/local/nginx/conf/nginx.conf
  worker_processes  1;
  events {
      worker_connections  1024;
  }
  http {
      include       mime.types;
      default_type  application/octet-stream;
      log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
      sendfile        on;
      keepalive_timeout  65;
      server {
          listen       80;
          server_name  localhost;
          location / {
                  allow 127.0.0.1;
                  allow 172.16.240.1;
                  deny all;
                  proxy_pass http://127.0.0.1:5601;
          }
          error_page   500 502 503 504  /50x.html;
          location = /50x.html {
              root   html;
          }
      }
  }
  

>>> Reload nginx
/usr/local/nginx/sbin/nginx -s reload

Nginx authentication with a username and password

vim /usr/local/nginx/conf/nginx.conf
  worker_processes  1;
  events {
      worker_connections  1024;
  }
  http {
      include       mime.types;
      default_type  application/octet-stream;
      log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
      sendfile        on;
      keepalive_timeout  65;
      server {
          listen       80;
          server_name  localhost;
          location / {
                  auth_basic "elk auth";
                  auth_basic_user_file /usr/local/nginx/conf/htpasswd;
                  proxy_pass http://127.0.0.1:5601;
          }
          error_page   500 502 503 504  /50x.html;
          location = /50x.html {
              root   html;
          }
      }
  }

printf "elk:$(openssl passwd -1 elkpassword)\n" > /usr/local/nginx/conf/htpasswd
 
/usr/local/nginx/sbin/nginx -s reload

Installing Elasticsearch

cd /usr/local/src/
tar -zxf elasticsearch-6.6.0.tar.gz
mv elasticsearch-6.6.0 /usr/local/

vim /usr/local/elasticsearch-6.6.0/config/elasticsearch.yml
  path.data: /usr/local/elasticsearch-6.6.0/data
  path.logs: /usr/local/elasticsearch-6.6.0/logs
  network.host: 127.0.0.1
  http.port: 9200
  
vim /usr/local/elasticsearch-6.6.0/config/jvm.options
  -Xms128M
  -Xmx128M
  

>>>> Elasticsearch cannot be started as the root user

useradd -s /sbin/nologin elk
chown -R elk:elk /usr/local/elasticsearch-6.6.0/
su - elk -s /bin/bash
/usr/local/elasticsearch-6.6.0/bin/elasticsearch -d

Notes

Notes on starting Elasticsearch:
If Elasticsearch listens on 127.0.0.1, it starts without any extra tuning.
To communicate across machines it must listen on a real network interface,
and listening on a real interface requires adjusting system parameters before it will start.

When Elasticsearch listens on an address other than 127.0.0.1
(either 0.0.0.0 or an internal address),
both cases require the system parameter adjustments below.

ulimit -a
  core file size          (blocks, -c) 0
  data seg size           (kbytes, -d) unlimited
  scheduling priority             (-e) 0
  file size               (blocks, -f) unlimited
  pending signals                 (-i) 7827
  max locked memory       (kbytes, -l) 64
  max memory size         (kbytes, -m) unlimited
  open files                      (-n) 65536
  pipe size            (512 bytes, -p) 8
  POSIX message queues     (bytes, -q) 819200
  real-time priority              (-r) 0
  stack size              (kbytes, -s) 8192
  cpu time               (seconds, -t) unlimited
  max user processes              (-u) 4096
  virtual memory          (kbytes, -v) unlimited
  file locks                      (-x) unlimited

Handling the three common ES startup errors
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: max number of threads [3829] for user [elk] is too low, increase to at least [4096]
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Raise the max open files limit in /etc/security/limits.conf
* - nofile 65536

Raise the max user processes limit in /etc/security/limits.d/20-nproc.conf
* - nproc 10240

Kernel parameter (add to /etc/sysctl.conf, then apply with sysctl -p)
vm.max_map_count = 262144

Recommendations for the Elasticsearch listen address
For learning, listen on 127.0.0.1.
On a cloud server, be sure to restrict public access to ports 9200 and 9300 in the security group.
In a self-hosted data center, listen on the internal interface; a listener exposed to the public internet will get compromised.

Elasticsearch operations


Structure: index ->> type ->> document (_id)

Index-level operations

PUT /lyysb    create the index lyysb
GET _cat/indices?v    list all indices
DELETE /lyysb    delete the index lyysb

Document-level operations

/index/type/document (_id)

Create and insert data

PUT /lyysb/users/1
{
  "name": "lyysb",
  "age": 38
}

Query by type and id

GET /lyysb/users/1

Query all documents in an index

GET /lyysb/_search?q=*

Modify data (PUT)

The syntax is identical to creating a document.

A PUT replaces the entire document: if the _id does not exist yet, the result is "created";

if the _id already exists, the result is "updated".

PUT /weixinyu/users/2
{
  "name": "wxy",
  "age": 18
}

PUT /weixinyu/users/2
{
  "name": "wxy",
  "age": 128
}

Delete data

DELETE /weixinyu/users/2

Modify data (POST)

The result is always reported as "updated"

GET /weixinyu/_search?q=*

{
  "took" : 5,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "weixinyu",
        "_type" : "users",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "name" : "wxyzaruan33",
          "age" : 1122
        }
      }
    ]
  }
}

POST /weixinyu/users/1
{
  "name": "wxyzarddduan33",
  "age": 112222
}


{
  "_index" : "weixinyu",
  "_type" : "users",
  "_id" : "1",
  "_version" : 6,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 5,
  "_primary_term" : 1
}

Modify all documents

PUT /index/type/1
{
  "name": "wxy",
  "age": 33
}

PUT /index/type/2
{
  "name": "chenjun",
  "age": 31
}

PUT /index/type/3
{
  "name": "sharen",
  "age": 22
}

Batch-update everyone's age to 111

POST /index/type/_update_by_query
{
  "script": {
    "source": "ctx._source['age']=111"
  },
  
  "query": {
    "match_all": {}
  }
}
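Conceptually, _update_by_query runs the script against every document the query matches. A plain-Python sketch of that behavior (an analogy only, not the Elasticsearch API):

```python
# In-memory stand-in for the three documents created above
docs = [
    {"name": "wxy", "age": 33},
    {"name": "chenjun", "age": 31},
    {"name": "sharen", "age": 22},
]

# "query": {"match_all": {}} selects every document;
# "ctx._source['age']=111" then rewrites the age field of each one
for source in docs:
    source["age"] = 111

print([d["age"] for d in docs])   # [111, 111, 111]
```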

Add a field to every document

POST /index/type/_update_by_query
{
  "script": {
    "source": "ctx._source['city']='hangzhou'"
  },
  
  "query": {
    "match_all": {}
  }
}

Installing Logstash

Install on node 172.16.240.30

cd /usr/local/src/
tar -zxf logstash-6.6.0.tar.gz 
mv logstash-6.6.0 /usr/local/
vim /usr/local/logstash-6.6.0/config/jvm.options 
  -Xms200M
  -Xmx200M
  
vim /usr/local/logstash-6.6.0/config/logstash.conf
  input{
    stdin{}
  }
  output{
    stdout{
      codec=>rubydebug
    }
  }

  • Start Logstash
yum install -y epel-release
yum install -y haveged        # haveged supplies entropy so the JVM starts faster
systemctl enable haveged; systemctl start haveged
/usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf

netstat -ntulp |grep java
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      21644/java


Reading the login log

vim  /usr/local/logstash-6.6.0/config/logstash.conf 

input {
  file {
    path => "/var/log/secure"
  }
}
output{
  stdout{
    codec=>rubydebug
  }
}

/usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf


Monitoring nginx logs

172.16.240.10   nginx server

  • Logstash configuration file
vim /usr/local/logstash-6.6.0/config/logstash.conf 
  input {
    file {
      path => "/usr/local/nginx/logs/access.log"
    }
  }
  output {
    elasticsearch {
      hosts => ["http://172.16.240.20:9200"]
    }
  }

  • Restart the service
/usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf

  • View the logs through ES in Kibana
GET /logstash-2019.12.30/_search?q=*


  • Query the logs through Kibana's Discover page

    Discover -> Create index pattern: step 1, enter the index pattern; step 2, select @timestamp

The ELK pipeline

Logstash reads the logs -> ES stores the data -> Kibana presents it


ELK optimization


Basic regular expressions

. 		any single character
* 		the preceding character zero or more times
[abc] 	any one character listed in the brackets
[^abc]	any one character not listed in the brackets
[0-9] 		a digit
[a-z]  		a lowercase letter
[A-Z] 		an uppercase letter
[a-zA-Z] 		a letter, upper or lower case
[a-zA-Z0-9]	a letter or a digit
[^0-9]		any character that is not a digit
^xx 			starts with xx
xx$ 			ends with xx
\d			any digit
\s			any whitespace character

Extended regular expressions

?	the preceding character zero or one time
+	the preceding character one or more times
{n}		the preceding character exactly n times
{a,b} 	the preceding character a to b times
{,b} 		the preceding character zero to b times
{a,} 		the preceding character a or more times
(string1|string2)	string1 or string2
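A quick way to check these quantifiers is Python's re module (an illustration only; the same ERE-style quantifiers are used by grep -E and inside Grok patterns):

```python
import re

# '?' matches the preceding character 0 or 1 time
assert re.fullmatch(r"colou?r", "color")
assert re.fullmatch(r"colou?r", "colour")

# '+' matches the preceding character 1 or more times
assert re.fullmatch(r"go+gle", "gooogle")
assert not re.fullmatch(r"go+gle", "ggle")

# '{n}' and '{a,b}' bound the repetition count
assert re.fullmatch(r"[0-9]{3}", "123")
assert not re.fullmatch(r"[0-9]{3}", "12")
assert re.fullmatch(r"[0-9]{1,3}", "42")

# '(string1|string2)' matches either alternative
assert re.fullmatch(r"(GET|POST)", "POST")

print("all quantifiers behave as described")
```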

Extracting the nginx log with Grok Debugger

172.16.240.1 - - [30/Dec/2019:20:12:14 +0800] "GET /lyysb HTTP/1.1" 404 571 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36"

172.16.240.1 - - [30/Dec/2019:10:23:43 +0800] "GET /favicon.ico HTTP/1.1" 404 571 "http://172.16.240.30/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36"

Log fields

Client IP address
Access time
Request method (GET/POST)
Request URL
Status code
Response body size
Referer
User Agent

Extract the log with Grok Debugger (Dev Tools - Grok Debugger)

  • sample data
172.16.240.1 - - [30/Dec/2019:10:34:04 +0800] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36"

  • Grok Pattern
(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d\.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"

  • Structured Data
{
  "requesttype": "GET",
  "requesturl": "/",
  "clientip": "172.16.240.1",
  "requesttime": "30/Dec/2019:10:34:04 +0800",
  "bodysize": "0",
  "ua": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36",
  "status": "304"
}
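The same extraction can be reproduced with Python's re module. Note that Python spells named groups (?P<name>...), while Grok's Oniguruma engine accepts (?<name>...):

```python
import re

# The sample access-log line from above
line = ('172.16.240.1 - - [30/Dec/2019:10:34:04 +0800] "GET / HTTP/1.1" 304 0 "-" '
        '"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 '
        '(KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36"')

# Same structure as the Grok pattern, using (?P<...>) group syntax
pattern = (r'(?P<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - '
           r'\[(?P<requesttime>[^ ]+ \+[0-9]+)\] '
           r'"(?P<requesttype>[A-Z]+) (?P<requesturl>[^ ]+) HTTP/\d\.\d" '
           r'(?P<status>[0-9]+) (?P<bodysize>[0-9]+) "[^"]+" "(?P<ua>[^"]+)"')

m = re.match(pattern, line)
print(m.group('clientip'), m.group('requesttype'), m.group('status'))
# 172.16.240.1 GET 304
```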

Extracting the nginx log with a Logstash regex

vim /usr/local/logstash-6.6.0/config/logstash.conf 
  input{
    file{
      path=>"/usr/local/nginx/logs/access.log"
    }
  }
  filter{
      grok{
          match=>{
              "message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d\.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
          } 
      }
  }
  output{
    elasticsearch{
      hosts=>["http://172.16.240.20:9200"]
    }
  }


Handling regex extraction failures

vim /usr/local/logstash-6.6.0/config/logstash.conf 
  input{
    file{
      path=>"/usr/local/nginx/logs/access.log"
    }
  }
  filter{
      grok{
          match=>{
              "message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d\.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
          }
      }
  }
  output{
    if "_grokparsefailure" not in [tags] and "_dateparsefailure" not in [tags] {
      elasticsearch{
        hosts=>["http://172.16.240.20:9200"]
      }
    }
  }

Removing unneeded fields in Logstash

Benefits: a smaller ES database and faster searches

vim /usr/local/logstash-6.6.0/config/logstash.conf 
  input{
      file{
        path=>"/usr/local/nginx/logs/access.log"
      }
    }
    filter{
        grok{
            match=>{
                "message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d\.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
            } 
           remove_field => ["message","@version","path"]
        }
    }
    output{
      elasticsearch{
        hosts=>["http://172.16.240.20:9200"]
      }
    }
    
/usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf


Reprocessing the full log from the beginning

vim /usr/local/logstash-6.6.0/config/logstash.conf 
  input{
    file{
      path=>"/usr/local/nginx/logs/access.log"
      start_position=>"beginning"
      sincedb_path=>"/dev/null"
    }
  }
  filter{
      grok{
          match=>{
              "message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d\.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
          }
          remove_field => ["message","@version","path"]
      }
  }
  output{
    elasticsearch{
      hosts=>["http://172.16.240.20:9200"]
    }
  }
     
/usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf

vim /usr/local/logstash-6.6.0/config/logstash.conf 
  input{
    file{
      path=>"/usr/local/nginx/logs/access.log"
      start_position=>"beginning"
      sincedb_path=>"/dev/null"
    }
  }
  filter{
      grok{
          match=>{
              "message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d\.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
          }
          remove_field => ["message","@version","path"]
      }
      date {
          match => ["requesttime", "dd/MMM/yyyy:HH:mm:ss Z"]
          target => "@timestamp"
      }
  }
  output{
    elasticsearch{
      hosts=>["http://172.16.240.20:9200"]
    }
  }
     
/usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf

Setting @timestamp to the request time

vim /usr/local/logstash-6.6.0/config/logstash.conf
  input{
    file{
      path=>"/usr/local/nginx/logs/access.log"
      start_position=>"beginning"
      sincedb_path=>"/dev/null"
    }
  }
  filter{
      grok{
          match=>{
              "message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d\.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
          }
          remove_field => ["message","@version","path"]
      }
      date{
          match => ["requesttime","dd/MMM/yyyy:HH:mm:ss Z"]
          target => "@timestamp"
      }
  }
  output{
    elasticsearch{
      hosts=>["http://172.16.240.20:9200"]
    }
  }
    
/usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf
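The date filter pattern dd/MMM/yyyy:HH:mm:ss Z is Joda-style notation; the equivalent parse in Python (shown only to illustrate how the nginx timestamp maps to @timestamp) is:

```python
from datetime import datetime

# requesttime as captured from the nginx access log
raw = "30/Dec/2019:20:12:14 +0800"

# Joda "dd/MMM/yyyy:HH:mm:ss Z" maps to strptime "%d/%b/%Y:%H:%M:%S %z"
ts = datetime.strptime(raw, "%d/%b/%Y:%H:%M:%S %z")
print(ts.isoformat())   # 2019-12-30T20:12:14+08:00
```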

Filebeat

Lightweight; it does not support regex parsing and cannot remove fields


Installation

cd /usr/local/src
tar -zxf filebeat-6.6.0-linux-x86_64.tar.gz
mv filebeat-6.6.0-linux-x86_64 /usr/local/filebeat-6.6.0

Minimal configuration

vim /usr/local/filebeat-6.6.0/filebeat.yml 
  filebeat.inputs:
  - type: log
    tail_files: true
    backoff: "1s"
    paths: 
        - /usr/local/nginx/logs/access.log

  output:
    elasticsearch:
      hosts: ["172.16.240.20:9200"]

  • Start the service
nohup /usr/local/filebeat-6.6.0/filebeat  -e -c /usr/local/filebeat-6.6.0/filebeat.yml >/tmp/filebeat.log 2>&1 &

Filebeat + Logstash

Filebeat -> Logstash -> Elasticsearch -> Kibana

Deploying Filebeat to many machines is far easier than deploying Logstash

Logstash listens on the internal network; each Filebeat ships to the internal Logstash


Recommended architecture

Filebeat (many nodes)
Filebeat (many nodes)  -> Logstash (regex parsing) -> Elasticsearch (storage) -> Kibana (presentation)
Filebeat (many nodes)

  • Filebeat configuration
vim /usr/local/filebeat-6.6.0/filebeat.yml 
filebeat.inputs:
- type: log
  tail_files: true
  backoff: "1s"
  paths:
      - /usr/local/nginx/logs/access.log

output:
  logstash:
    hosts: ["172.16.240.30:5044"]

  • Logstash configuration
vim /usr/local/logstash-6.6.0/config/logstash.conf 
  input {
    beats {
      host => '0.0.0.0'
      port => 5044
    }
  }
  filter{
      grok{
          match=>{
              "message" => '(?<clientip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - - \[(?<requesttime>[^ ]+ \+[0-9]+)\] "(?<requesttype>[A-Z]+) (?<requesturl>[^ ]+) HTTP/\d\.\d" (?<status>[0-9]+) (?<bodysize>[0-9]+) "[^"]+" "(?<ua>[^"]+)"'
          }
          remove_field => ["message","@version","path","beat","input","log","offset","prospector","source","tags"]
      }
      date{
          match => ["requesttime","dd/MMM/yyyy:HH:mm:ss Z"]
          target => "@timestamp"
      }
  }
  output{
    elasticsearch{
      hosts=>["http://172.16.240.20:9200"]
    }
  }

  • Restart Logstash and Filebeat
/usr/local/logstash-6.6.0/bin/logstash  -f /usr/local/logstash-6.6.0/config/logstash.conf
nohup /usr/local/filebeat-6.6.0/filebeat  -e -c /usr/local/filebeat-6.6.0/filebeat.yml >/tmp/filebeat.log 2>&1 &

Collecting JSON-format logs


  • nginx configuration
vim /usr/local/nginx/conf/nginx.conf
  worker_processes  1;
  events {
      worker_connections  1024;
  }
  http {
      include       mime.types;
      default_type  application/octet-stream;
      log_format json '{"@timestamp":"$time_iso8601",'
                   '"clientip":"$remote_addr",'
                   '"status":$status,'
                   '"bodysize":$body_bytes_sent,'
                   '"referer":"$http_referer",'
                   '"ua":"$http_user_agent",'
                   '"handletime":$request_time,'
                   '"url":"$uri"}';
      access_log  logs/access.log;
      access_log  logs/access.json.log  json;
      sendfile        on;
      keepalive_timeout  65;
      server {
          listen       80;
          server_name  localhost;
          location / {
              root   html;
              index  index.html index.htm;
          }
          error_page   500 502 503 504  /50x.html;
          location = /50x.html {
              root   html;
          }
      }
  }
  
/usr/local/nginx/sbin/nginx -s reload
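Each line written by the json log_format above is a complete JSON object, so it parses without any regex. A quick check with a hypothetical sample line:

```python
import json

# Hypothetical line matching the log_format json defined above
line = ('{"@timestamp":"2019-12-30T20:12:14+08:00",'
        '"clientip":"172.16.240.1",'
        '"status":304,'
        '"bodysize":0,'
        '"referer":"-",'
        '"ua":"Mozilla/5.0",'
        '"handletime":0.002,'
        '"url":"/"}')

event = json.loads(line)
print(event["clientip"], event["status"], event["url"])
# 172.16.240.1 304 /
```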

  • Filebeat configuration for collecting the JSON-format log
vim /usr/local/filebeat-6.6.0/filebeat.yml 
  filebeat.inputs:
  - type: log
    tail_files: true
    backoff: "1s"
    paths:
        - /usr/local/nginx/logs/access.json.log 

  output:
    logstash:
      hosts: ["172.16.240.30:5044"]
      
nohup /usr/local/filebeat-6.6.0/filebeat -e -c /usr/local/filebeat-6.6.0/filebeat.yml >/tmp/filebeat.log 2>&1 &

  • Logstash configuration
vim /usr/local/logstash-6.6.0/config/logstash.conf 
  input {
    beats {
      host => '0.0.0.0'
      port => 5044
    }
  }
  filter{
    json{
      source=>"message"
    }
    mutate{
      remove_field => ["@version","beat","message","offset"]
    }
  }
  output{
    elasticsearch{
      hosts=>["http://172.16.240.20:9200"]
    }
  }
      
nohup /usr/local/logstash-6.6.0/bin/logstash -f /usr/local/logstash-6.6.0/config/logstash.conf > /tmp/logstash.log 2>&1 &

Source: https://www.cnblogs.com/cjwnb/p/12105319.html