Database Backup and Restore

Databases sit at the core of the business, so backing them up for disaster recovery is an indispensable operation.

MySQL

Backing up with the mysqldump command

Basic mysqldump syntax:

mysqldump -u username -p dbname table1 table2 ... > BackupName.sql

Where:

dbname is the name of the database;
table1 and table2 are the names of the tables to back up; if omitted, the entire database is backed up;
BackupName.sql is the name of the backup file, which may be prefixed with an absolute path. The database is usually dumped to a file with the .sql extension;
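As a concrete illustration of the syntax above (the database and table names are hypothetical), the commands are assembled as strings and printed here, since actually running them requires a live MySQL server:

```shell
# Hypothetical names: database "shop", tables "orders" and "users".
# The commands are built as strings and echoed; run them against a real
# server and valid credentials to produce the actual dumps.
tables_cmd="mysqldump -u root -p shop orders users > /data/backup/shop_tables.sql"
full_cmd="mysqldump -u root -p shop > /data/backup/shop_full.sql"
echo "$tables_cmd"
echo "$full_cmd"
# Restore later with: mysql -u root -p shop < /data/backup/shop_full.sql
```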

Copying the database directory directly

Simple and crude. The database service should preferably be stopped during the copy, so that in-flight writes do not corrupt the data and leave the backup unrestorable.

Also, this method does not work for tables using the InnoDB storage engine, though it is convenient for MyISAM tables. In addition, the MySQL versions at backup and restore time should preferably match.

Similarly, filesystem-level snapshots such as LVM or btrfs can capture the current on-disk state. They are faster, but require filesystem support.
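A minimal sketch of the LVM variant, assuming a volume group vg0 with a logical volume mysql holding the datadir (all names and paths are hypothetical). The commands need root and a real LVM setup, so a dry-run wrapper prints them instead of executing:

```shell
#!/bin/bash
# Dry-run sketch: DRY_RUN=1 (the default) prints each command instead of
# executing it. Volume group "vg0", LV "mysql", and all paths are hypothetical.
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "$*" || "$@"; }

run lvcreate --snapshot --size 1G --name mysql_snap /dev/vg0/mysql
run mount -o ro /dev/vg0/mysql_snap /mnt/mysql_snap
run tar czf /data/backup/mysql_files.tgz -C /mnt/mysql_snap .
run umount /mnt/mysql_snap
run lvremove -f /dev/vg0/mysql_snap
```

With MyISAM tables, issuing FLUSH TABLES WITH READ LOCK just before the lvcreate and UNLOCK TABLES right after keeps the snapshot consistent without a long outage.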

MySQL backup script

#!/bin/bash

. /etc/profile
. ~/.bashrc

. common.sh  # expected to define helpers such as mdb used below

# general variable define
bk_date=`date +'%Y%m%d'`
bk_dir=/data/backup/mysql/
bk_path=/data/backup/mysql/$bk_date
bk_log=/data/backup/log/$bk_date.log
de_date=`date -d"-7 day" +"%Y%m%d"`
dbs=(agreement cqa_test healthy hive hive_dfs hive_gas hive_kcloud hive_new hive_rec kcloud_chat kcloud_data kcloud_operation kcloud_operation_test kcloud_system kcloud_tool lingxi_yunwei mysql oppo public test xiaomi yunwei)

# alias define
shopt -s expand_aliases
alias now="date +'%Y-%m-%d %H:%M:%S'"
alias nowmin="date +'%H%M'"

# logging level
l1=INFO
l2=WARN
l3=CRIT
l4=UNKNOWN

logging(){
        echo "[`now`] $1 $2" >> $bk_log
}

check_bkpath(){
        if [ ! -d $bk_path ]
        then
                logging "$l1" "Backup path $bk_path does not exist, creating it"
                mkdir -p $bk_path
                [ $? -eq 0 ] && logging "$l1" "Backup path $bk_path created" || logging "$l3" "Backup path $bk_path creation failed"
        fi
}

backup_dbs(){
        cd $bk_path
        for d in ${dbs[@]}
        do
                db=$d.`nowmin`
                logging "$l1" "DB $db backup starting"
                res=`mysqldump -uroot -piflytek! $d > $db.sql 2>&1`
                if [ $? -eq 0 ]; then logging "$l1" "DB $db backup finish"; else logging "$l3" "DB $db backup exception: $res"; return 1; fi
                logging "$l1" "DB $db compress starting"
                res=`tar czf $db.sql.tgz $db.sql 2>&1`
                if [ $? -eq 0 ]; then logging "$l1" "DB $db compress finish"; else logging "$l2" "DB $db compress exception: $res"; return 1; fi
                res=`rm $db.sql 2>&1`
                if [ $? -eq 0 ]; then logging "$l1" "DB $db rm finish"; else logging "$l2" "DB $db rm exception: $res"; return 1; fi
        done
}

rm_history(){
        cd $bk_dir
        ls -1 | sort -n | uniq | while read line
        do
                if [[ ${#line} -eq 8 ]] && [[ $line =~ ^[0-9]+$ ]]
                then
                        r=`echo $line|awk '{if($1<="'"$de_date"'")print}'`
                        if [ -n "$r" ]
                        then
                                logging "$l1" "$line is expired and will be removed"
                                res=`rm -rf $line 2>&1`
                                [ $? -eq 0 ] && logging "$l1" "$line removed" || logging "$l2" "$line remove exception: $res"
                        fi
                fi
        done
}

check_bkpath
mdb "BACKING"
[ $? -eq 0 ] && logging "$l1" "Write BACKING to db" || logging "$l2" "Write BACKING to db failed"
if backup_dbs; then mdb "BACKOK"; else mdb "BACKERR"; fi
[ $? -eq 0 ] && logging "$l1" "Write BACKOK/BACKERR to db" || logging "$l2" "Write BACKOK/BACKERR to db failed"
rm_history
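In practice a script like this is driven by cron. A possible crontab entry (the install path /data/scripts/mysql_backup.sh is hypothetical) that fires nightly at 02:30, after which rm_history purges backups older than 7 days:

```shell
# m h dom mon dow  command
30 2 * * * /data/scripts/mysql_backup.sh
```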

MongoDB

Pre-backup check

> show dbs
MyDB 0.0625GB
admin (empty)
bruce 0.0625GB
local (empty)
test 0.0625GB
> use MyDB
switched to db MyDB
> db.users.find()
{ "_id" : ObjectId("4e290aa39a1945747b28f1ee"), "a" : 1, "b" : 1 }
{ "_id" : ObjectId("4e2cd2182a65c81f21566318"), "a" : 3, "b" : 5 }
>

Full database backup

mongodump -h dbhost -d dbname -o dbdirectory

-h: address of the server where MongoDB runs, e.g. 127.0.0.1; a port can also be given: 127.0.0.1:27017
-d: the database instance to back up, e.g. test
-o: where to store the backup data, e.g. c:\data\dump; this directory must be created in advance. When the backup finishes, a test directory is automatically created under the dump directory, holding the backup data for that database instance.
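Putting the flags above together (host, database name, and path are hypothetical), a typical invocation looks as follows; it is assembled as a string here because running it requires a live mongod:

```shell
# Dump database "test" from a local mongod into /data/backup/mongo;
# mongodump itself creates the per-database subdirectory "test" there.
cmd="mongodump -h 127.0.0.1:27017 -d test -o /data/backup/mongo"
echo "$cmd"
```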

Official mongodump options (viewable with mongodump --help):
options:
 --help          produce help message
 -v [ --verbose ]     be more verbose (include multiple times for more
              verbosity e.g. -vvvvv)
 --version        print the program's version and exit
 -h [ --host ] arg    mongo host to connect to ( <set name>/s1,s2 for
              sets)
 --port arg        server port. Can also use --host hostname:port
 --ipv6          enable IPv6 support (disabled by default)
 -u [ --username ] arg  username
 -p [ --password ] arg  password
 --dbpath arg       directly access mongod database files in the given
              path, instead of connecting to a mongod server -
              needs to lock the data directory, so cannot be used
              if a mongod is currently accessing the same path
 --directoryperdb     if dbpath specified, each db is in a separate
              directory
 --journal        enable journaling
 -d [ --db ] arg     database to use
 -c [ --collection ] arg collection to use (some commands)
 -o [ --out ] arg (=dump) output directory or "-" for stdout
 -q [ --query ] arg    json query
 --oplog         Use oplog for point-in-time snapshotting
 --repair         try to recover a crashed database
 --forceTableScan     force a table scan (do not use $snapshot)

Note:
--forceTableScan can markedly speed up backing up cold data when memory is relatively idle, with speedups approaching 100x.

Full database restore

mongorestore -h dbhost -d dbname --directoryperdb dbdirectory

-h: address of the MongoDB server
-d: the database instance to restore into, e.g. test; the name may differ from the one used at backup time, e.g. test2
--directoryperdb: expect one directory per database (used with --dbpath); the backup location itself is given as the trailing dbdirectory argument
--drop: delete the existing data before restoring the backup; in other words, any data added or modified after the backup was taken will be gone after the restore
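Continuing the hypothetical example from the backup section, a restore of that dump into a differently named database test2, dropping existing collections first; assembled as a string because a live mongod is assumed:

```shell
# Restore the dump taken earlier into database "test2"; --drop removes each
# existing collection before importing. Host, names, and path are hypothetical.
cmd="mongorestore -h 127.0.0.1:27017 -d test2 --drop /data/backup/mongo/test"
echo "$cmd"
```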

Official mongorestore options (viewable with mongorestore --help):
options:
 --help         produce help message
 -v [ --verbose ]    be more verbose (include multiple times for more
             verbosity e.g. -vvvvv)
 --version        print the program's version and exit
 -h [ --host ] arg    mongo host to connect to ( <set name>/s1,s2 for sets)
 --port arg       server port. Can also use --host hostname:port
 --ipv6         enable IPv6 support (disabled by default)
 -u [ --username ] arg  username
 -p [ --password ] arg  password
 --dbpath arg      directly access mongod database files in the given
             path, instead of connecting to a mongod server -
             needs to lock the data directory, so cannot be used
             if a mongod is currently accessing the same path
 --directoryperdb    if dbpath specified, each db is in a separate
             directory
 --journal        enable journaling
 -d [ --db ] arg     database to use
 -c [ --collection ] arg collection to use (some commands)
 --objcheck       validate object before inserting
 --filter arg      filter to apply before inserting
 --drop         drop each collection before import
 --oplogReplay      replay oplog for point-in-time restore
 --oplogLimit arg    exclude oplog entries newer than provided timestamp
             (epoch[:ordinal])
 --keepIndexVersion   don't upgrade indexes to newest version
 --noOptionsRestore   don't restore collection options
 --noIndexRestore    don't restore indexes
 --w arg (=1)      minimum number of replicas per write

Backing up a single collection

mongoexport -h dbhost -d dbname -c collectionname -f collectionKey -o dbdirectory

-h: address of the MongoDB server
-d: the database instance to export from
-c: the collection to export
-f: the fields to export (omit for all fields)
-o: the name of the output file

Official mongoexport options (viewable with mongoexport --help):
 --help          produce help message
 -v [ --verbose ]     be more verbose (include multiple times for more
              verbosity e.g. -vvvvv)
 --version         print the program's version and exit
 -h [ --host ] arg     mongo host to connect to ( <set name>/s1,s2 for
              sets)
 --port arg        server port. Can also use --host hostname:port
 --ipv6          enable IPv6 support (disabled by default)
 -u [ --username ] arg   username
 -p [ --password ] arg   password
 --dbpath arg       directly access mongod database files in the given
              path, instead of connecting to a mongod server -
              needs to lock the data directory, so cannot be used
              if a mongod is currently accessing the same path
 --directoryperdb     if dbpath specified, each db is in a separate
              directory
 --journal         enable journaling
 -d [ --db ] arg      database to use
 -c [ --collection ] arg  collection to use (some commands)
 -f [ --fields ] arg    comma separated list of field names e.g. -f
              name,age
 --fieldFile arg      file with fields names - 1 per line
 -q [ --query ] arg    query filter, as a JSON string
 --csv           export to csv instead of json
 -o [ --out ] arg     output file; if not specified, stdout is used
 --jsonArray        output to a json array rather than one object per
              line
 -k [ --slaveOk ] arg (=1) use secondaries for export if available, default
              true
 --forceTableScan     force a table scan (do not use $snapshot)

Restoring a single collection

mongoimport -h dbhost -d dbname -c collectionname --type csv --headerline --file filename

--type: the format of the file to import
--headerline: do not import the first line, since it holds the column names
--file: the path of the file to import
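A hypothetical CSV round trip tying mongoexport and mongoimport together; the first line of the exported CSV holds the field names, which is why --headerline is needed on import. The commands are assembled as strings, as a live mongod is assumed:

```shell
# Export two fields of the "users" collection to CSV, then import the file
# into another database; host, database, and collection names are hypothetical.
exp_cmd="mongoexport -h 127.0.0.1 -d test -c users -f name,age --csv -o users.csv"
imp_cmd="mongoimport -h 127.0.0.1 -d test2 -c users --type csv --headerline --file users.csv"
echo "$exp_cmd"
echo "$imp_cmd"
```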

Official mongoimport options (viewable with mongoimport --help):
 --help         produce help message
 -v [ --verbose ]    be more verbose (include multiple times for more
             verbosity e.g. -vvvvv)
 --version        print the program's version and exit
 -h [ --host ] arg    mongo host to connect to ( <set name>/s1,s2 for sets)
 --port arg       server port. Can also use --host hostname:port
 --ipv6         enable IPv6 support (disabled by default)
 -u [ --username ] arg  username
 -p [ --password ] arg  password
 --dbpath arg      directly access mongod database files in the given
             path, instead of connecting to a mongod server -
             needs to lock the data directory, so cannot be used
             if a mongod is currently accessing the same path
 --directoryperdb    if dbpath specified, each db is in a separate
             directory
 --journal        enable journaling
 -d [ --db ] arg     database to use
 -c [ --collection ] arg collection to use (some commands)
 -f [ --fields ] arg   comma separated list of field names e.g. -f name,age
 --fieldFile arg     file with fields names - 1 per line
 --ignoreBlanks     if given, empty fields in csv and tsv will be ignored
 --type arg       type of file to import. default: json (json,csv,tsv)
 --file arg       file to import from; if not specified stdin is used
 --drop         drop collection first
 --headerline      CSV,TSV only - use first line as headers
 --upsert        insert or update objects that already exist
 --upsertFields arg   comma-separated fields for the query part of the
             upsert. You should make sure this is indexed
 --stopOnError      stop importing at first error rather than continuing
 --jsonArray       load a json array, not one item per line. Currently
             limited to 16MB.

Other import and export operations

1. mongoimport -d my_mongodb -c user user.dat

Parameter notes:

-d specifies the database to use, here "my_mongodb"

-c specifies the collection to import into, here "user"

As you can see, importing data implicitly creates the collection.
2. mongoexport -d my_mongodb -c user -o user.dat

Parameter notes:

-d specifies the database to use, here "my_mongodb"

-c specifies the collection to export, here "user"

-o specifies the output file name, here "user.dat"

As shown above, the exported data is in JSON format.

Redis

Persistence settings

save 900 1    # snapshot if at least 1 key changed within 900 seconds
save 300 10   # snapshot if at least 10 keys changed within 300 seconds
save 60 10000 # snapshot if at least 10000 keys changed within 60 seconds

Redis can be backed up simply by copying its persistence file:

#!/bin/bash

PATH=/usr/local/bin:$PATH
# SAVE blocks the Redis server until the snapshot is written to disk
redis-cli SAVE

date=$(date +"%Y%m%d")
cp /var/lib/redis/6379/dump.rdb /data01/cache_backup/$date.rdb

echo "done!"

If no snapshot has been taken yet, you can force one with the SAVE command:

redis 127.0.0.1:6379> SAVE 

Alternatively, the save can run in the background:

127.0.0.1:6379> BGSAVE

Background saving started
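The backup script above uses the blocking SAVE. A sketch of a non-blocking variant: trigger BGSAVE, then poll LASTSAVE (which returns the unix time of the last successful snapshot) until it advances, after which the RDB file is safe to copy. REDIS_CLI is made overridable purely so the logic can be exercised without a live server; all paths are hypothetical:

```shell
# Wait for a background save to complete: poll LASTSAVE until its timestamp
# changes after BGSAVE has been issued.
REDIS_CLI=${REDIS_CLI:-redis-cli}

wait_bgsave() {
    local last now
    last=$($REDIS_CLI LASTSAVE)
    $REDIS_CLI BGSAVE >/dev/null
    while now=$($REDIS_CLI LASTSAVE); [ "$now" = "$last" ]; do
        sleep 1
    done
}
# After wait_bgsave returns, dump.rdb is safe to copy, e.g.:
# cp /var/lib/redis/6379/dump.rdb /data01/cache_backup/$(date +%Y%m%d).rdb
```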
Original article: https://www.cnblogs.com/mikeguan/p/6540595.html