Backing Up and Restoring etcd Data

1. Single-Node Backup

Note: the restore described here must be performed on the same machine the original etcd ran on.

1.1 Single-node backup

# Use version 3 of the etcdctl API
[root@minio1 ~]# export ETCDCTL_API=3

# Write a key
[root@minio1 app]# ETCDCTL_API=3 etcdctl --endpoints="https://192.168.1.106:2379" --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem  put /name/1 tzh
OK

# Read the key back
[root@minio1 app]# ETCDCTL_API=3 etcdctl --endpoints="https://192.168.1.106:2379" --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem  get /name/1
/name/1
tzh

# Back up the data
[root@minio1 app]# etcdctl --endpoints="https://192.168.1.106:2379" --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem snapshot save `date +%Y-%m-%d`-etcd_back.db
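After saving, the snapshot can be sanity-checked with etcdctl snapshot status, which reports its hash, revision, key count, and size. The sketch below only reconstructs the date-stamped filename the backup command produces and shows the verification step as a comment, since it needs nothing from a live cluster:

```shell
# Build the same date-stamped filename as the backup command above
BACKUP_FILE="$(date +%Y-%m-%d)-etcd_back.db"
echo "snapshot file: ${BACKUP_FILE}"

# On the backup host the snapshot could then be inspected, e.g.:
#   etcdctl snapshot status "${BACKUP_FILE}" --write-out=table
```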

1.2 Single-node restore

# Stop the etcd service
[root@minio1 ~]# systemctl stop etcd

# Use version 3 of the etcdctl API
[root@minio1 ~]# export ETCDCTL_API=3

# Point the --data-dir startup parameter at the directory the restore writes to;
# the setting normally lives in /etc/etcd/etcd.config.yml
# Note the --data-dir flag: after restoring, update data-dir in the config, then start etcd again
[root@minio1 ~]# grep data-dir /etc/etcd/etcd.config.yml
data-dir: /var/lib/etcd


[root@minio1 ~]# etcdctl snapshot restore 2021-12-07-etcd_back.db --name=minio1 --endpoints="https://192.168.1.106:2379" --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem  --initial-cluster=minio1=https://192.168.1.106:2380 --initial-advertise-peer-urls=https://192.168.1.106:2380 --initial-cluster-token=etcd-cluster-0 --data-dir=/var/lib/etcd1

2021-12-07-etcd_back.db .......... snapshot file to restore from
--name ........................... member name (the hostname)
--endpoints ...................... etcd client endpoint(s)
--cert ........................... client TLS certificate
--key ............................ client TLS private key
--cacert ......................... CA certificate used to verify the etcd server
--initial-cluster ................ used by this member: lists every node in the cluster, so this member knows how to contact the others
--initial-advertise-peer-urls .... used by the other members to reach this member; the address must be reachable from them, and under static configuration its value must also appear in --initial-cluster
--initial-cluster-token .......... distinguishes clusters from one another; if multiple clusters run locally, give each a different token
--data-dir ....................... directory holding the member ID, cluster ID, and the data itself
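The restore above writes into a new directory (/var/lib/etcd1), so data-dir in the config must be updated to match before etcd is started. A sketch of that edit, demonstrated on a temporary copy of the config (on the real host you would edit /etc/etcd/etcd.config.yml in place):

```shell
# Work on a temporary copy; the real file is /etc/etcd/etcd.config.yml
CFG=$(mktemp)
echo "data-dir: /var/lib/etcd" > "$CFG"

# Point data-dir at the restored directory
sed -i 's#^data-dir:.*#data-dir: /var/lib/etcd1#' "$CFG"

grep data-dir "$CFG"   # → data-dir: /var/lib/etcd1
```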

# Start etcd
[root@minio1 etcd]# systemctl start etcd

# Read the key back to confirm the restore
[root@minio1 app]# ETCDCTL_API=3 etcdctl --endpoints="https://192.168.1.106:2379" --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem  get /name/1
/name/1
tzh

2. Backup and Restore in Cluster Mode

Note: etcdctl syntax varies between etcd versions, but the commands are broadly similar. This article backs up with snapshot save; backing up a single node each time is sufficient.

One-off backup command (run on k8s-master1):

$ ETCDCTL_API=3 etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem --key=/opt/kubernetes/ssl/server-key.pem --endpoints=https://192.168.1.36:2379 snapshot save /data/etcd_backup_dir/etcd-snapshot-`date +%Y%m%d`.db

Backup script (run on k8s-master1):

#!/usr/bin/env bash

# Log the run time
date

CACERT="/opt/kubernetes/ssl/ca.pem"
CERT="/opt/kubernetes/ssl/server.pem"
KEY="/opt/kubernetes/ssl/server-key.pem"
ENDPOINTS="192.168.1.36:2379"

ETCDCTL_API=3 etcdctl \
--cacert="${CACERT}" --cert="${CERT}" --key="${KEY}" \
--endpoints="${ENDPOINTS}" \
snapshot save /data/etcd_backup_dir/etcd-snapshot-$(date +%Y%m%d).db

# Keep backups for 30 days (quote the glob so the shell does not expand it before find runs)
find /data/etcd_backup_dir/ -name "*.db" -mtime +30 -exec rm -f {} \;
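The -mtime +30 retention rule in the script above can be exercised safely on a throwaway directory with back-dated files (the filenames here are stand-ins; the real script targets /data/etcd_backup_dir/):

```shell
# Demonstrate the 30-day retention rule on a temporary directory
BACKUP_DIR=$(mktemp -d)
touch -d "40 days ago" "${BACKUP_DIR}/etcd-snapshot-20191101.db"   # older than 30 days
touch "${BACKUP_DIR}/etcd-snapshot-today.db"                       # fresh backup

# Quote the glob so find receives it unexpanded
find "${BACKUP_DIR}" -name "*.db" -mtime +30 -exec rm -f {} \;

ls "${BACKUP_DIR}"   # only the fresh backup remains
```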

Restore

Preparation

  • Stop the kube-apiserver service on every master
$ systemctl stop kube-apiserver

# Confirm that kube-apiserver has stopped
$ ps -ef | grep kube-apiserver
  • Stop every etcd service in the cluster
$ systemctl stop etcd
  • Move aside the data in every etcd data directory
$ mv /var/lib/etcd/default.etcd /var/lib/etcd/default.etcd.bak
  • Copy the etcd backup snapshot to the other nodes

# Copy the backup from k8s-master1
$ scp /data/etcd_backup_dir/etcd-snapshot-20191222.db root@k8s-master2:/data/etcd_backup_dir/
$ scp /data/etcd_backup_dir/etcd-snapshot-20191222.db root@k8s-master3:/data/etcd_backup_dir/
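Before restoring on each node, it is worth confirming that the copied snapshot is byte-identical to the source. A sketch using sha256sum, shown here on local temp files since the real check simply runs the same command on each master and compares the hashes:

```shell
# Create a stand-in for the snapshot file (the real one is etcd-snapshot-20191222.db)
SRC=$(mktemp)
echo "snapshot payload" > "$SRC"

# Simulate the copy step (scp in the real procedure)
DST="${SRC}.copied"
cp "$SRC" "$DST"

# Compare checksums; on real hosts, run sha256sum on each machine and compare the output
SUM_SRC=$(sha256sum "$SRC" | awk '{print $1}')
SUM_DST=$(sha256sum "$DST" | awk '{print $1}')
[ "$SUM_SRC" = "$SUM_DST" ] && echo "checksum OK"
```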

Restore the backup

# Run on k8s-master1
$ ETCDCTL_API=3 etcdctl snapshot restore /data/etcd_backup_dir/etcd-snapshot-20191222.db \
  --name etcd-0 \
  --initial-cluster "etcd-0=https://192.168.1.36:2380,etcd-1=https://192.168.1.37:2380,etcd-2=https://192.168.1.38:2380" \
  --initial-cluster-token etcd-cluster \
  --initial-advertise-peer-urls https://192.168.1.36:2380 \
  --data-dir=/var/lib/etcd/default.etcd
  
# Run on k8s-master2
$ ETCDCTL_API=3 etcdctl snapshot restore /data/etcd_backup_dir/etcd-snapshot-20191222.db \
  --name etcd-1 \
  --initial-cluster "etcd-0=https://192.168.1.36:2380,etcd-1=https://192.168.1.37:2380,etcd-2=https://192.168.1.38:2380"  \
  --initial-cluster-token etcd-cluster \
  --initial-advertise-peer-urls https://192.168.1.37:2380 \
  --data-dir=/var/lib/etcd/default.etcd
  
# Run on k8s-master3
$ ETCDCTL_API=3 etcdctl snapshot restore /data/etcd_backup_dir/etcd-snapshot-20191222.db \
  --name etcd-2 \
  --initial-cluster "etcd-0=https://192.168.1.36:2380,etcd-1=https://192.168.1.37:2380,etcd-2=https://192.168.1.38:2380"  \
  --initial-cluster-token etcd-cluster \
  --initial-advertise-peer-urls https://192.168.1.38:2380 \
  --data-dir=/var/lib/etcd/default.etcd

After all three etcd members have been restored, log in to each of the three machines in turn and start etcd

$ systemctl start etcd

Once all three etcd members are up, check the cluster status

$ ETCDCTL_API=3 etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem --key=/opt/kubernetes/ssl/server-key.pem --endpoints=https://192.168.1.36:2379,https://192.168.1.37:2379,https://192.168.1.38:2379 endpoint health

When all three etcd members report healthy, start kube-apiserver on each master

$ systemctl start kube-apiserver

Check that the Kubernetes cluster has recovered (kubectl get cs is deprecated since Kubernetes 1.19, but still works on the versions this article targets)

$ kubectl get cs

Summary:

Backing up a Kubernetes cluster mainly means backing up the etcd cluster. When restoring, the key concern is the overall order:

Stop kube-apiserver --> stop etcd --> restore the data --> start etcd --> start kube-apiserver
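The recovery order can be captured in a small wrapper script. The sketch below is a dry run that only echoes each step; on the actual hosts you would replace the run function's echo with real execution (paths and unit names follow the examples above):

```shell
# Dry run: print each step instead of executing it
run() { echo "+ $*"; }

run systemctl stop kube-apiserver
run systemctl stop etcd
run etcdctl snapshot restore /data/etcd_backup_dir/etcd-snapshot-20191222.db \
    --data-dir=/var/lib/etcd/default.etcd
run systemctl start etcd
run systemctl start kube-apiserver
```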

Note: when backing up an etcd cluster, backing up a single member is enough; when restoring, restore every member from that same backup file.

Original article: https://zhuanlan.zhihu.com/p/101523337
Original source: https://www.cnblogs.com/hsyw/p/15652417.html