Etcd Cluster

Official docs:
container deployment documentation

Environment:
CentOS 7
etcd-3.0.4


3-node cluster example
etcd1: 192.168.8.101
etcd2: 192.168.8.102
etcd3: 192.168.8.103


I. Install etcd (all nodes)
cp -af etcd-v3.0.4-linux-amd64/{etcd,etcdctl} /usr/local/bin
chmod +x /usr/local/bin/{etcd,etcdctl}


II. Configure the etcd cluster
Clustering guide: etcd-v3.0.4-linux-amd64/Documentation/op-guide/clustering.md

This guide will cover the following mechanisms for bootstrapping an etcd cluster:

* [Static](#static)
* [etcd Discovery](#etcd-discovery)
* [DNS Discovery](#dns-discovery)

Three discovery mechanisms are currently supported: Static suits nodes with fixed IPs, etcd Discovery suits DHCP environments, and DNS Discovery relies on DNS SRV records.
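For the second mechanism, a public discovery service hands out one-time bootstrap URLs. A minimal sketch follows (dry run: the command is only echoed; the token in the URL is a made-up placeholder, a real one comes from `curl https://discovery.etcd.io/new?size=3`):

```shell
# Sketch of etcd Discovery bootstrap; the token below is a placeholder.
# Obtain a real one-time URL for a 3-node cluster with:
#   curl https://discovery.etcd.io/new?size=3
DISCOVERY_URL="https://discovery.etcd.io/example-token"
# With --discovery, nodes locate each other through the discovery service,
# so no --initial-cluster peer list is needed.
CMD="etcd --name etcd1 --data-dir /opt/etcd --discovery ${DISCOVERY_URL}"
echo "$CMD"   # dry run: print the command instead of executing it
```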

The Static method
Tip: etcd supports SSL/TLS; see the official docs for details.
Node 1: etcd1 (192.168.8.101)
etcd --name etcd1 --data-dir /opt/etcd \
  --initial-advertise-peer-urls http://192.168.8.101:2380 \
  --listen-peer-urls http://192.168.8.101:2380 \
  --listen-client-urls http://192.168.8.101:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.8.101:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster etcd1=http://192.168.8.101:2380,etcd2=http://192.168.8.102:2380,etcd3=http://192.168.8.103:2380 \
  --initial-cluster-state new

Node 2: etcd2 (192.168.8.102)
etcd --name etcd2 --data-dir /opt/etcd \
  --initial-advertise-peer-urls http://192.168.8.102:2380 \
  --listen-peer-urls http://192.168.8.102:2380 \
  --listen-client-urls http://192.168.8.102:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.8.102:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster etcd1=http://192.168.8.101:2380,etcd2=http://192.168.8.102:2380,etcd3=http://192.168.8.103:2380 \
  --initial-cluster-state new

Node 3: etcd3 (192.168.8.103)
etcd --name etcd3 --data-dir /opt/etcd \
  --initial-advertise-peer-urls http://192.168.8.103:2380 \
  --listen-peer-urls http://192.168.8.103:2380 \
  --listen-client-urls http://192.168.8.103:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.8.103:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster etcd1=http://192.168.8.101:2380,etcd2=http://192.168.8.102:2380,etcd3=http://192.168.8.103:2380 \
  --initial-cluster-state new
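Since the --initial-cluster line is identical on every node and easy to mistype, the three commands above can be generated from variables. This is a sketch (dry run: the command is echoed, not executed); only NODE_NAME and NODE_IP change per host:

```shell
# Sketch: build one node's bootstrap command from variables (dry run).
# NODE_NAME and NODE_IP are the only per-host values; CLUSTER is identical
# on all three nodes.
NODE_NAME=etcd3
NODE_IP=192.168.8.103
CLUSTER="etcd1=http://192.168.8.101:2380,etcd2=http://192.168.8.102:2380,etcd3=http://192.168.8.103:2380"

CMD="etcd --name ${NODE_NAME} --data-dir /opt/etcd \
  --initial-advertise-peer-urls http://${NODE_IP}:2380 \
  --listen-peer-urls http://${NODE_IP}:2380 \
  --listen-client-urls http://${NODE_IP}:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://${NODE_IP}:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster ${CLUSTER} \
  --initial-cluster-state new"

echo "$CMD"   # replace echo with eval to actually start etcd
```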

Port 2379 serves client requests and 2380 is used for peer communication within the cluster. The data directory is set with --data-dir; if unset, etcd defaults to a <name>.etcd directory under the current working directory.

[root@node3 ~]# netstat -tunlp | grep etcd
tcp        0      0 192.168.8.103:2379      0.0.0.0:*               LISTEN      11103/etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      11103/etcd
tcp        0      0 192.168.8.103:2380      0.0.0.0:*               LISTEN      11103/etcd

[root@node3 ~]# ls
etcd3.etcd
[root@node3 ~]# ls etcd3.etcd/
fixtures/ member/
[root@node3 ~]# ls etcd3.etcd/fixtures/
client/ peer/
[root@node3 ~]# ls etcd3.etcd/fixtures/peer/
cert.pem  key.pem

Note: the bootstrap flags above are only used once, when the cluster is first initialized. If the service is restarted later, the initial-* flags must be removed, otherwise etcd reports an error.

Use a command like the following instead:

etcd --name etcd3 --data-dir /opt/etcd \
  --listen-peer-urls http://192.168.8.103:2380 \
  --listen-client-urls http://192.168.8.103:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.8.103:2379



III. Managing the cluster

etcdctl
https://github.com/coreos/etcd/blob/master/Documentation/op-guide/maintenance.md

[root@node3 ~]# etcdctl --version
etcdctl version: 3.0.4
API version: 2

COMMANDS:
     backup          backup an etcd directory
     cluster-health  check the health of the etcd cluster
     mk              make a new key with a given value
     mkdir           make a new directory
     rm              remove a key or a directory
     rmdir           removes the key if it is an empty directory or a key-value pair
     get             retrieve the value of a key
     ls              retrieve a directory
     set             set the value of a key
     setdir          create a new directory or update an existing directory TTL
     update          update an existing key with a given value
     updatedir       update an existing directory
     watch           watch a key for changes
     exec-watch      watch a key for changes and exec an executable
     member          member add, remove and list subcommands
     import          import a snapshot to a cluster
     user            user add, grant and revoke subcommands
     role            role add, grant and revoke subcommands
     auth            overall auth controls



Cluster health
[root@node3 ~]# etcdctl cluster-health
member 2947dd07df9e44da is healthy: got healthy result from http://192.168.8.102:2379
member 571bf93ce7760601 is healthy: got healthy result from http://192.168.8.101:2379
member b200a8bec19bd22e is healthy: got healthy result from http://192.168.8.103:2379
cluster is healthy

Listing cluster members
[root@node3 ~]# etcdctl member list
2947dd07df9e44da: name=etcd2 peerURLs=http://192.168.8.102:2380 clientURLs=http://192.168.8.102:2379 isLeader=false
571bf93ce7760601: name=etcd1 peerURLs=http://192.168.8.101:2380 clientURLs=http://192.168.8.101:2379 isLeader=true
b200a8bec19bd22e: name=etcd3 peerURLs=http://192.168.8.103:2380 clientURLs=http://192.168.8.103:2379 isLeader=false

Removing a cluster member
[root@node2 ~]# etcdctl member remove b200a8bec19bd22e
Removed member 4d11141f72b2744c from cluster
[root@node2 ~]# etcdctl member list
2947dd07df9e44da: name=etcd2 peerURLs=http://192.168.8.102:2380 clientURLs=http://192.168.8.102:2379 isLeader=false
571bf93ce7760601: name=etcd1 peerURLs=http://192.168.8.101:2380 clientURLs=http://192.168.8.101:2379 isLeader=true

Adding a cluster member
https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md

Note: the order of the following steps matters; doing it differently leads to cluster-ID mismatch errors.

[root@node2 ~]# etcdctl member add --help
NAME:
   etcdctl member add - add a new member to the etcd cluster

USAGE:
   etcdctl member add

1. Register the target node with the cluster
[root@node2 ~]# etcdctl member add etcd3 http://192.168.8.103:2380
Added member named etcd3 with ID 28e0d98e7ec15cd4 to cluster

ETCD_NAME="etcd3"
ETCD_INITIAL_CLUSTER="etcd3=http://192.168.8.103:2380,etcd2=http://192.168.8.102:2380,etcd1=http://192.168.8.101:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

[root@node2 ~]# etcdctl member list
2947dd07df9e44da: name=etcd2 peerURLs=http://192.168.8.102:2380 clientURLs=http://192.168.8.102:2379 isLeader=false
571bf93ce7760601: name=etcd1 peerURLs=http://192.168.8.101:2380 clientURLs=http://192.168.8.101:2379 isLeader=true
d4f257d2b5f99b64[unstarted]: peerURLs=http://192.168.8.103:2380

At this point the cluster generates a unique member ID for the target node.

2. Clear the target node's data-dir
[root@node3 ~]# rm -rf /opt/etcd

Note: after a member is removed, the cluster's membership information is updated, and the replacement joins as a brand-new node. If the data-dir still holds data, etcd reads it at startup and comes up with the old member ID, so it cannot join the cluster. Always clear the new node's data-dir, or errors like these appear:

2016-08-12 01:59:41.084928 E | rafthttp: failed to find member 2947dd07df9e44da in cluster ce2f2517679629de
2016-08-12 01:59:41.133698 W | rafthttp: failed to process raft message (raft: stopped)
2016-08-12 01:59:41.135746 W | rafthttp: failed to process raft message (raft: stopped)
2016-08-12 01:59:41.170915 E | rafthttp: failed to find member 2947dd07df9e44da in cluster ce2f2517679629de
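The errors above stem from starting a replacement node on top of leftover data. A tiny guard script (a sketch; /tmp/etcd-demo stands in for the real /opt/etcd) makes the wipe explicit before starting with --initial-cluster-state existing:

```shell
# Sketch: refuse to reuse a stale data dir when re-joining a rebuilt node.
# /tmp/etcd-demo stands in for /opt/etcd in this demo.
DATA_DIR=/tmp/etcd-demo
mkdir -p "$DATA_DIR/member"      # simulate leftovers from the removed member
if [ -d "$DATA_DIR/member" ]; then
    echo "stale member state found, clearing $DATA_DIR"
    rm -rf "$DATA_DIR"
fi
# Now it is safe to start etcd with --initial-cluster-state existing.
[ ! -d "$DATA_DIR" ] && echo "data dir is clean"
```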

3. Start etcd on the target node
etcd --name etcd3 --data-dir /opt/etcd \
  --initial-advertise-peer-urls http://192.168.8.103:2380 \
  --listen-peer-urls http://192.168.8.103:2380 \
  --listen-client-urls http://192.168.8.103:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.8.103:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster etcd1=http://192.168.8.101:2380,etcd2=http://192.168.8.102:2380,etcd3=http://192.168.8.103:2380 \
  --initial-cluster-state existing

Note: --initial-cluster-state must be set to existing here. With new, etcd generates a fresh member ID that does not match the one created by member add, and the log reports a member-ID mismatch.

[root@node2 ~]# etcdctl member list
28e0d98e7ec15cd4: name=etcd3 peerURLs=http://192.168.8.103:2380 clientURLs=http://192.168.8.103:2379 isLeader=false
2947dd07df9e44da: name=etcd2 peerURLs=http://192.168.8.102:2380 clientURLs=http://192.168.8.102:2379 isLeader=false
571bf93ce7760601: name=etcd1 peerURLs=http://192.168.8.101:2380 clientURLs=http://192.168.8.101:2379 isLeader=true

When etcd is not started from the command line, put "initial-cluster-state": "existing" into the configuration file instead.

Basic key operations (set / get / update / delete)

[root@node3 ~]# etcdctl set foo "bar"
bar
[root@node3 ~]# etcdctl get foo
bar
[root@node3 ~]# etcdctl mkdir hello
[root@node3 ~]# etcdctl ls
/foo
/hello
[root@node3 ~]# etcdctl --output extended get foo
Key: /foo
Created-Index: 9
Modified-Index: 9
TTL: 0
Index: 10

bar
[root@node3 ~]# etcdctl --output json get foo
{"action":"get","node":{"key":"/foo","value":"bar","nodes":null,"createdIndex":9,"modifiedIndex":9},"prevNode":null}
[root@node2 ~]# etcdctl update foo "etcd cluster is ok"
etcd cluster is ok
[root@node2 ~]# etcdctl get foo
etcd cluster is ok
[root@node3 ~]# etcdctl import --snap /opt/etcd/member/snap/db
starting to import snapshot /opt/etcd/member/snap/db with 10 clients
2016-08-12 01:18:17.281921 I | entering dir: /
finished importing 0 keys

REST API
https://github.com/coreos/etcd/tree/master/Documentation/learning
https://coreos.com/etcd/docs/latest/v2/api.html

[root@node1 ~]# curl 192.168.8.101:2379/v2/keys
{"action":"get","node":{"dir":true,"nodes":[{"key":"/foo","value":"etcd cluster is ok","modifiedIndex":28,"createdIndex":9},{"key":"/hello","dir":true,"modifiedIndex":10,"createdIndex":10},{"key":"/registry","dir":true,"modifiedIndex":47,"createdIndex":47}]}}
[root@node1 ~]# curl -fs -X PUT 192.168.8.101:2379/v2/keys/_test
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":1439,"createdIndex":1439}}
[root@node1 ~]# curl -X GET 192.168.8.101:2379/v2/keys/_test
{"action":"get","node":{"key":"/_test","value":"","modifiedIndex":1439,"createdIndex":1439}}

curl http://127.0.0.1:2379/v2/members | python -m json.tool
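The /v2/members endpoint returns JSON that is easy to post-process in a pipeline. Below is a sketch that works on a hard-coded sample response shaped like this cluster's; in practice the JSON would come from `curl -s http://127.0.0.1:2379/v2/members`:

```shell
# Sketch: pull member names out of a /v2/members response.
# JSON here is a hard-coded sample; normally it would come from:
#   JSON=$(curl -s http://127.0.0.1:2379/v2/members)
JSON='{"members":[{"id":"571bf93ce7760601","name":"etcd1","peerURLs":["http://192.168.8.101:2380"],"clientURLs":["http://192.168.8.101:2379"]},{"id":"2947dd07df9e44da","name":"etcd2","peerURLs":["http://192.168.8.102:2380"],"clientURLs":["http://192.168.8.102:2379"]}]}'
NAMES=$(echo "$JSON" | python3 -c 'import json,sys; print(" ".join(m["name"] for m in json.load(sys.stdin)["members"]))')
echo "$NAMES"
```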



IV. Managing etcd with systemd

1. Create an etcd user
useradd -r -s /sbin/nologin etcd
mkdir /opt/etcd
chown -R etcd: /opt/etcd

2. Create the systemd unit file etcd.service
cat >/lib/systemd/system/etcd.service <<HERE
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/opt/etcd/
User=etcd
ExecStart=/usr/local/bin/etcd --config-file /etc/etcd.conf
Restart=on-failure
LimitNOFILE=1000000

[Install]
WantedBy=multi-user.target
HERE

3. Create the main configuration file etcd.conf
cat >/etc/etcd.conf <<HERE
name: default
data-dir: "/opt/etcd"
listen-peer-urls: "http://192.168.8.102:2380"
listen-client-urls: "http://192.168.8.102:2379,http://127.0.0.1:2379"
advertise-client-urls: "http://192.168.8.102:2379"
HERE

Tip: etcd 3.x accepts both YAML and JSON configuration files; a template lives at https://github.com/coreos/etcd/blob/master/etcd.conf.yml.sample
The configuration differs per node; the above is the sample for etcd2.
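For a node that must also bootstrap the cluster under systemd, the same file can carry the initial-* settings. This is a sketch for etcd2, assuming the field names from etcd.conf.yml.sample:

```yaml
# Sketch: fuller YAML config for etcd2 including bootstrap settings.
# Field names follow etcd.conf.yml.sample; values are this example's.
name: etcd2
data-dir: "/opt/etcd"
listen-peer-urls: "http://192.168.8.102:2380"
listen-client-urls: "http://192.168.8.102:2379,http://127.0.0.1:2379"
initial-advertise-peer-urls: "http://192.168.8.102:2380"
advertise-client-urls: "http://192.168.8.102:2379"
initial-cluster: "etcd1=http://192.168.8.101:2380,etcd2=http://192.168.8.102:2380,etcd3=http://192.168.8.103:2380"
initial-cluster-token: "etcd-cluster-1"
initial-cluster-state: "new"
```

When re-joining a node to an existing cluster, set initial-cluster-state to "existing", matching the note in section III.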

4. Test starting via systemd
[root@node2 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@node2 ~]# systemctl start etcd
[root@node2 ~]# systemctl status etcd
etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-08-12 03:06:30 CST; 8min ago
 Main PID: 12099 (etcd)
   CGroup: /system.slice/etcd.service
           └─12099 /usr/local/bin/etcd --config-file /etc/etcd.conf

Aug 12 03:10:30 node2.example.com etcd[12099]: the clock difference against peer 571bf93ce776...1s]
Aug 12 03:11:00 node2.example.com etcd[12099]: the clock difference against peer 571bf93ce776...1s]
(the same clock-difference warning repeats every 30 seconds)
Hint: Some lines were ellipsized, use -l to show in full.

Original article: https://www.cnblogs.com/lixuebin/p/10814027.html