Microservice Deployment in Practice

Building a microservice stack

  1. Network planning
  2. Base services
  3. Monitoring services
  4. Database provisioning
  5. Log collection
  6. Distributed file storage

Components

  • docker
  • docker-compose
  • docker swarm (Docker cluster management)
  • portainer.io (web UI for managing Docker clusters)
  • docker registry
  • eureka
  • zuul
  • auth
  • spring-cloud
  • elasticsearch
  • logstash
  • kibana
  • fluentd
  • zookeeper
  • kafka
  • skywalking
  • apollo
  • apps

docker

Environment:
CentOS 7.5.x
Install method: rpm

$ sudo curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
$ sudo yum makecache
$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine

$ sudo yum -y install /path/to/package.rpm
$ sudo systemctl start docker
$ sudo systemctl status docker
$ docker --version
$ docker info
...
$ sudo yum remove docker-ce
$ sudo rm -rf /var/lib/docker

docker-compose

Environment:
CentOS 7.5.x
Install method: pip

$ sudo yum -y install epel-release
$ sudo yum -y install python-pip
$ sudo pip install --upgrade pip
$ sudo pip --default-timeout=200 install -U docker-compose
$ docker-compose --version

docker registry

$ sudo docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

P.S.: remember to add the registry to Docker's trusted (insecure-registries) list on every node that pulls from it.
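A minimal sketch of that, assuming the registry runs at 172.16.3.193:5000 as in the stack file below (daemon.json and the restart are standard Docker; the test tag is illustrative):

# /etc/docker/daemon.json on every node that pulls from the registry
{
  "insecure-registries": ["172.16.3.193:5000"]
}

$ sudo systemctl restart docker
# quick push test
$ docker tag registry:latest 172.16.3.193:5000/test/registry:latest
$ docker push 172.16.3.193:5000/test/registry:latest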

The three core microservice components (eureka & zuul & auth)

microservice-stack.yml

version: "3"
services:
  auth:
    image: 172.16.3.193:5000/microservice/auth:8696
    environment:
      - ENV=FAT
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 2s
      restart_policy:
        condition: on-failure
    networks:
      - microservice-network
    ports:
      - 7250:7250
    logging:
      driver: fluentd
      options:
        tag: app.log
        fluentd-address: 172.16.3.185:24224
        fluentd-async-connect: "true"
      
  eureka-peer1:
    image: 172.16.3.193:5000/microservice/eureka:8661
    environment:
      - spring.profiles.active=peer1
      - ENV=FAT
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 2s
      restart_policy:
        condition: on-failure
    networks:
      - microservice-network
    ports:
      - 7000:7000
    logging:
      driver: fluentd
      options:
        tag: app.log
        fluentd-address: 172.16.3.185:24224
        fluentd-async-connect: "true"
    
  eureka-peer2:
    image: 172.16.3.193:5000/microservice/eureka:8661
    environment:
      - spring.profiles.active=peer2
      - ENV=FAT
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 2s
      restart_policy:
        condition: on-failure
    networks:
      - microservice-network
    ports:
      - 7010:7010
    logging:
      driver: fluentd
      options:
        tag: app.log
        fluentd-address: 172.16.3.185:24224
        fluentd-async-connect: "true"

  eureka-peer3:
    image: 172.16.3.193:5000/microservice/eureka:8661
    environment:
      - spring.profiles.active=peer3
      - ENV=FAT
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 2s
      restart_policy:
        condition: on-failure
    networks:
      - microservice-network
    ports:
      - 7020:7020
    logging:
      driver: fluentd
      options:
        tag: app.log
        fluentd-address: 172.16.3.185:24224
        fluentd-async-connect: "true"
  
  zuul:
    image: 172.16.3.193:5000/microservice/zuul:8699
    environment:
      - ENV=FAT
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 2s
      restart_policy:
        condition: on-failure
    # zuul must also join the application network
    networks:
      - microservice-network
      - app-network
    ports:
      - 7300:7300
    logging:
      driver: fluentd
      options:
        tag: app.log
        fluentd-address: 172.16.3.185:24224
        fluentd-async-connect: "true"

networks:
  microservice-network:
    external: true
  app-network:
    external: true
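Both networks are declared external, so they must exist before the stack is deployed; on swarm that means overlay networks. A sketch of the full rollout:

$ docker network create --driver overlay microservice-network
$ docker network create --driver overlay app-network
$ docker stack deploy -c microservice-stack.yml microservice
$ docker stack services microservice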

elasticsearch

Basics
Elasticsearch is a distributed database. A single instance is called a node; nodes (at least two here) form a cluster.
Index ≈ a database in the usual sense; names must be lowercase
Document ≈ a row (one record); documents need not share the same structure, but a uniform structure improves search efficiency
Type ≈ Table (the Type concept is dropped after 7.x)
Shard ≈ sharding/partitioning (handled automatically by ES)
Replica = a copy of a shard

Environment:
Note: the SkyWalking release used here only supports Elasticsearch 5.x, so 5.6.10 is installed.
Setup: three servers (172.16.2.137, 172.16.2.138, 172.16.2.139)
Installing from rpm is recommended: simple and fast.
Use the Elasticsearch Head Chrome extension for a visual overview of the cluster.

ES vs. MySQL concept mapping (diagram)

elasticsearch.yml

// 137 master
cluster.name: caad-es-dev
node.name: es-node1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: zoo1
http.port: 5020

discovery.zen.ping.unicast.hosts: ["zoo1", "zoo2", "zoo3"]
discovery.zen.minimum_master_nodes: 2

// 138
cluster.name: caad-es-dev
node.name: es-node2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: zoo2
http.port: 5020
discovery.zen.ping.unicast.hosts: ["zoo1", "zoo2", "zoo3"]
discovery.zen.minimum_master_nodes: 2 

// 139
cluster.name: caad-es-dev
node.name: es-node3
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: zoo3
http.port: 5020
discovery.zen.ping.unicast.hosts: ["zoo1", "zoo2", "zoo3"]
discovery.zen.minimum_master_nodes: 2 
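The configs above bind to the hostnames zoo1/zoo2/zoo3, so they must resolve on every node. The assumed /etc/hosts mapping, inferred from the test URLs below:

# /etc/hosts on each ES node
172.16.2.137 zoo1
172.16.2.138 zoo2
172.16.2.139 zoo3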

// test: http://172.16.2.137:5020, http://172.16.2.138:5020, http://172.16.2.139:5020
// chrome extension: Elasticsearch Head
// elasticsearch-plugin install head
// install the Chinese (IK) analyzer: ./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v5.6.10/elasticsearch-analysis-ik-5.6.10.zip

logstash.conf

input {
  file {
    path => [
      "C:/Workspace/logdemo/logdemo/App_Data/Logs/*.log"
    ]
    # type => "error"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      # next 
      what => "previous"
    }
  }
}

filter {
  # define the expected log line format
  grok {
    match => {
      "message" => "\[(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\] \[(?<thread>\d+)\] \[(?<level>\S+\s?)\] (?<message>(.|\n)*)"
    }
  }
# the date filter below mis-parses the timestamp (possibly a version or missing-plugin issue), so it stays disabled
#  date {
#    match => {
#      "time" => "yyyy-MM-dd HH:mm:ss,SSS"
#    }
#    locale => "en"
#    target => "@timestamp"
#  }
}

output {
  elasticsearch {
    hosts => [ "172.16.2.137:5020", "172.16.2.138:5020", "172.16.2.139:5020" ]
    index => "filebeat-%{+yyyy.MM.dd}"
    template_overwrite => true
  }
# for debugging
#  stdout { codec => rubydebug }
}
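To sanity-check the pipeline before pointing real logs at the cluster (standard Logstash flags):

# validate the config, then run in the foreground
$ bin/logstash -f logstash.conf --config.test_and_exit
$ bin/logstash -f logstash.conf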


Basic searches

// list all indices; every node should return the same data
$ curl -X GET 'http://172.16.2.137:5020/_cat/indices?v'
// create an index named weather
$ curl -X PUT 'http://172.16.2.137:5020/weather'
// delete an index; wildcards are supported, e.g. http://172.16.2.137:5020/*
$ curl -XDELETE "http://172.16.2.137:5020/weather"
// create mappings that use the Chinese analyzer ik_max_word
$ curl -X PUT 'http://172.16.2.137:5020/accounts' -d '
{
  "mappings": {
    "person": {
      "properties": {
        "user": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_max_word"
        },
        "title": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_max_word"
        },
        "desc": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_max_word"
        }
      }
    }
  }
}'
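To verify the IK plugin is actually segmenting Chinese text, the standard _analyze API can be used (the sample text is illustrative):

$ curl -X GET 'http://172.16.2.137:5020/_analyze' -d '
{
  "analyzer": "ik_max_word",
  "text": "数据库管理"
}'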

// insert a record
// Note: if the index (accounts here) does not exist yet, Elastic will not complain; it silently creates it. Be careful not to mistype the index name.
$ curl -X PUT 'http://172.16.2.137:5020/accounts/person/1' -d '
{
  "user": "张三",
  "title": "工程师",
  "desc": "数据库管理"
}' 

// fetch a record
$ curl 'http://172.16.2.137:5020/accounts/person/1?pretty=true'
// delete a record
$ curl -X DELETE 'http://172.16.2.137:5020/accounts/person/1'
// update a record
$ curl -X PUT 'http://172.16.2.137:5020/accounts/person/1' -d '
{
    "user" : "张三",
    "title" : "工程师",
    "desc" : "数据库管理,软件开发"
}' 

// query all documents
$ curl 'http://172.16.2.137:5020/accounts/person/_search'

{
  "took":2,
  "timed_out":false,
  "_shards":{"total":5,"successful":5,"failed":0},
  "hits":{
    "total":2,
    "max_score":1.0,
    "hits":[
      {
        "_index":"accounts",
        "_type":"person",
        "_id":"AV3qGfrC6jMbsbXb6k1p",
        "_score":1.0,
        "_source": {
          "user": "李四",
          "title": "工程师",
          "desc": "系统管理"
        }
      },
      {
        "_index":"accounts",
        "_type":"person",
        "_id":"1",
        "_score":1.0,
        "_source": {
          "user" : "张三",
          "title" : "工程师",
          "desc" : "数据库管理,软件开发"
        }
      }
    ]
  }
}

// match query
$ curl 'http://172.16.2.137:5020/accounts/person/_search'  -d '
{
  // "软件 管理" is two terms; ES treats them as OR
  "query" : { "match" : { "desc" : "软件 管理" }},
  "from": 1,  // offset into the result set
  "size": 1   // hits per page (default: all)
}'
// AND query (bool must)
$ curl 'http://172.16.2.137:5020/accounts/person/_search'  -d '
{
  "query": {
    "bool": {
      "must": [
        { "match": { "desc": "软件" } },
        { "match": { "desc": "系统" } }
      ]
    }
  }
}'

kafka

Debugging

bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic mykafka

$KAFKA_HOME/bin/kafka-topics.sh --create --topic topic \
--partitions 4 --zookeeper 172.25.0.3 --replication-factor 2

$KAFKA_HOME/bin/kafka-topics.sh --describe --topic topic --zookeeper 172.25.0.3

$KAFKA_HOME/bin/kafka-console-producer.sh --topic=topic \
--broker-list=`broker-list.sh`

$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
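
To confirm what already exists on the broker (the zookeeper address matches the create command above):

$KAFKA_HOME/bin/kafka-topics.sh --list --zookeeper zookeeper:2181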

skywalking

Environment:
Host: 172.16.2.141
Default ports: 10800, 11800, 12800

5.0.x architecture (diagram)

3.2.5+ architecture (the version used here; diagram)

UI after deployment (screenshot)

// download 
$ curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
$ su -c "yum install java-1.8.0-openjdk"
$ systemctl status firewalld
$ systemctl stop firewalld
$ systemctl disable firewalld
$ systemctl start elasticsearch
$ sudo yum install -y net-tools

# useradd name
# tar -zxf apache-skywalking-apm-incubating-5.0.0-RC2.tar.gz  
# rpm -ih elasticsearch-5.6.10.rpm
# vi config/application.yml
# vi webapp/webapp.yml
# ./bin/startup.sh

// test: open http://172.16.2.141:8080 and log in with admin/admin

# wipe all index data (equivalent to dropping every database)
$ curl -XDELETE "http://172.16.2.137:5020/*"

application.yml

naming:
  jetty:
    #OS real network IP(binding required), for agent to find collector cluster
    host: 172.16.2.141
    port: 10800
    contextPath: /
cache:
  caffeine:
remote:
  gRPC:
    # OS real network IP(binding required), for collector nodes communicate with each other in cluster. collectorN --(gRPC) --> collectorM
    host: 172.16.2.141
    port: 11800
agent_gRPC:
  gRPC:
    #OS real network IP(binding required), for agent to uplink data(trace/metrics) to collector. agent--(gRPC)--> collector
    host: 172.16.2.141
    port: 11800
    # Set these two setting to open ssl
    #sslCertChainFile: $path
    #sslPrivateKeyFile: $path

    # Set your own token to active auth
    #authentication: xxxxxx
agent_jetty:
  jetty:
    # OS real network IP(binding required), for agent to uplink data(trace/metrics) to collector through HTTP. agent--(HTTP)--> collector
    # SkyWalking native Java/.Net/node.js agents don't use this.
    # Open this for other implementor.
    host: localhost
    port: 12800
    contextPath: /
analysis_register:
  default:
analysis_jvm:
  default:
analysis_segment_parser:
  default:
    bufferFilePath: ../buffer/
    bufferOffsetMaxFileSize: 10M
    bufferSegmentMaxFileSize: 500M
    bufferFileCleanWhenRestart: true
ui:
  jetty:
    # Stay in `localhost` if UI starts up in default mode.
    # Change it to OS real network IP(binding required), if deploy collector in different machine.
    host: localhost
    port: 12800
    contextPath: /
storage:
  elasticsearch:
    clusterName: caad-es-dev
    clusterTransportSniffer: true
    clusterNodes: 172.16.2.137:9300
    indexShardsNumber: 2
    indexReplicasNumber: 0
    highPerformanceMode: true
    # Batch process setting, refer to https://www.elastic.co/guide/en/elasticsearch/client/java-api/5.5/java-docs-bulk-processor.html
    bulkActions: 2000 # Execute the bulk every 2000 requests
    bulkSize: 20 # flush the bulk every 20mb
    flushInterval: 10 # flush the bulk every 10 seconds whatever the number of requests
    concurrentRequests: 2 # the number of concurrent requests
    # Set a timeout on metric data. After the timeout has expired, the metric data will automatically be deleted.
    traceDataTTL: 90 # Unit is minute
    minuteMetricDataTTL: 90 # Unit is minute
    hourMetricDataTTL: 36 # Unit is hour
    dayMetricDataTTL: 45 # Unit is day
    monthMetricDataTTL: 18 # Unit is month
configuration:
  default:
    applicationApdexThreshold: 2000
    serviceErrorRateThreshold: 10.00
    serviceAverageResponseTimeThreshold: 2000
    instanceErrorRateThreshold: 10.00
    instanceAverageResponseTimeThreshold: 2000
    applicationErrorRateThreshold: 10.00
    applicationAverageResponseTimeThreshold: 2000
    # thermodynamic
    thermodynamicResponseTimeStep: 50
    thermodynamicCountOfResponseTimeSteps: 40
    # max collection's size of worker cache collection, setting it smaller when collector OutOfMemory crashed.
    workerCacheMaxSize: 10000

webapp.yml

server:
  port: 8080

collector:
  path: /graphql
  ribbon:
    ReadTimeout: 10000
    listOfServers: 172.16.2.141:10800

security:
  user:
    admin:
      password: admin
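
Applications report to the collector through the SkyWalking Java agent shipped in the tarball above. A minimal sketch of wiring one app up, assuming the distribution is unpacked at /opt/skywalking and an illustrative app name "auth"; collector.servers must point at the naming port (10800):

# agent/config/agent.config (SkyWalking 5.x keys)
agent.application_code=auth
collector.servers=172.16.2.141:10800

# attach the agent at JVM startup
$ java -javaagent:/opt/skywalking/agent/skywalking-agent.jar -jar auth.jar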

filebeat

Environment: Windows
Setup: runs as a Windows service

filebeat.prospectors:
- input_type: log
  enabled: true
  encoding: utf-8
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # - /var/log/*.log
    - C:\Publish\Web\Caad.Client.Acl\Log\*.error*
  fields:
    level: error
    appName: Caad.Client.Acl
  multiline:
    pattern: '^\[\d{4}-\d{2}-\d{2}\s\d+:\d+:\d+\,\d+\]\s'
    negate: true
    match:  after
    max_lines: 500
    timeout: 5s
    
- input_type: log
  enabled: true
  encoding: utf-8
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # - /var/log/*.log
    - C:\Publish\Web\Caad.Client.Data\Log\*.error*
  fields:
    level: error
    appName: Caad.Client.Data
  multiline:
    pattern: '^\[\d{4}-\d{2}-\d{2}\s\d+:\d+:\d+\,\d+\]\s'
    negate: true
    match:  after
    max_lines: 500
    timeout: 5s

- input_type: log
  enabled: true
  encoding: utf-8
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # - /var/log/*.log
    - C:\Publish\Web\Caad.Client.Job\Log\*.error*
  fields:
    level: error
    appName: Caad.Client.Job
  multiline:
    pattern: '^\[\d{4}-\d{2}-\d{2}\s\d+:\d+:\d+\,\d+\]\s'
    negate: true
    match:  after
    max_lines: 500
    timeout: 5s
    
- input_type: log
  enabled: true
  encoding: utf-8
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # - /var/log/*.log
    - C:\Publish\Web\Caad.Client.Setting\Log\*.error*
  fields:
    level: error
    appName: Caad.Client.Setting
  multiline:
    pattern: '^\[\d{4}-\d{2}-\d{2}\s\d+:\d+:\d+\,\d+\]\s'
    negate: true
    match:  after
    max_lines: 500
    timeout: 5s
    
- input_type: log
  enabled: true
  encoding: utf-8
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - C:\Publish\Web\Caad.Service.WebApi\Log\*.error*
  fields:
    level: error
    appName: Caad.Service.WebApi
  multiline:
    pattern: '^\[\d{4}-\d{2}-\d{2}\s\d+:\d+:\d+\,\d+\]\s'
    negate: true
    match:  after
    max_lines: 500
    timeout: 5s
    
- input_type: log
  enabled: true
  encoding: utf-8
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - C:\Publish\Web\Viss.Server.Host\Log\*.error*
  fields:
    level: error
    appName: Viss.Server.Host
  multiline:
    pattern: '^\[\d{4}-\d{2}-\d{2}\s\d+:\d+:\d+\,\d+\]\s'
    negate: true
    match:  after
    max_lines: 500
    timeout: 5s
    
output.elasticsearch:
  hosts: ["172.16.2.137:5020","172.16.2.138:5020","172.16.2.139:5020"]
  index: "filebeat-%{+yyyy.MM.dd}"
  

fluentd

Environment:
Setup: runs in Docker
What fluentd does (diagram)

Before fluentd (diagram)

After fluentd (diagram)

fluent.conf

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<filter **>
  @type concat
  key log
  multiline_start_regexp /^\[\d{4}-\d{1,2}-\d{1,2} \d{2}:\d{2}:\d{2},\d{3}\]/
  flush_interval 5s
</filter>
<match **>
  @type copy
  <store>
    @type elasticsearch
    hosts 172.16.2.137:5020,172.16.2.138:5020,172.16.2.139:5020
    index_name app-log
    include_timestamp true
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name log
    tag_key @log_name
    flush_interval 5s
  </store>
  <store>
    @type stdout
  </store>
</match>
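
The stock fluentd image does not bundle the elasticsearch and concat plugins used above, so a small custom image is needed. A sketch, with an illustrative base tag and host paths (fluent-plugin-elasticsearch and fluent-plugin-concat are the actual gem names):

# Dockerfile
FROM fluent/fluentd:v1.3-1
USER root
RUN gem install fluent-plugin-elasticsearch fluent-plugin-concat
USER fluent

$ docker build -t my-fluentd .
$ docker run -d --name fluentd -p 24224:24224 -p 24224:24224/udp \
    -v /data/fluentd/fluent.conf:/fluentd/etc/fluent.conf my-fluentd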

Log format conventions

log4net.config

<?xml version="1.0" encoding="utf-8" ?>
<!-- Official config examples: http://logging.apache.org/log4net/release/config-examples.html -->
<log4net>
    <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="[%d] [%t] [%p] %m%n" />
      </layout>
      <filter type="log4net.Filter.LevelRangeFilter">
        <param name="LevelMin" value="DEBUG" />
        <param name="LevelMax" value="FATAL" />
      </filter>
    </appender>
  <root>
    <appender-ref ref="ConsoleAppender" />
  </root>
</log4net>
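
With the pattern [%d] [%t] [%p] %m%n, every entry starts with a bracketed timestamp, thread id, and level, which is exactly the shape the multiline regexes in the logstash/filebeat/fluentd configs above anchor on. An illustrative line:

[2018-09-20 10:15:30,123] [12] [ERROR] System.NullReferenceException: ...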

log4j2.properties

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}%d{yyyyMMdd}.log
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}%d{yyyyMMdd}.log.%i
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d] [%t] [%p]  %m%n

kibana

kibana.yml

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://172.16.2.137:5020"
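
Kibana must match the ES version (5.6.10 here). A sketch of running it in Docker with this config mounted (the host path is illustrative):

$ docker run -d --name kibana -p 5601:5601 \
    -v /data/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml \
    docker.elastic.co/kibana/kibana:5.6.10

Then create the filebeat-* and fluentd-* index patterns in the Kibana UI to browse the logs collected above.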

Recap: the pieces of the microservice stack (diagram)

References

Source: https://www.cnblogs.com/sachem/p/13855799.html