Adding a new storage node to a Ceph cluster

As the business grows, the existing storage pool is running out of capacity, so we need to add a new storage node to the Ceph cluster. This walkthrough uses the new node ceph-host-05 as an example.
 
Preparation
Add the line 10.30.1.225 ceph-host-05 to the hosts file on every node (a loop for the existing nodes is sketched after the listing below), and edit /etc/hosts on ceph-host-05 so that it looks like this:
[root@ceph-host-05 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.30.1.221 ceph-host-01
10.30.1.222 ceph-host-02
10.30.1.223 ceph-host-03
10.30.1.224 ceph-host-04
10.30.1.225 ceph-host-05
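The same entry also has to be present on the existing nodes. A minimal sketch run from ceph-host-01, assuming root SSH access to ceph-host-02 through ceph-host-04 (the loop and host list are illustrative, not from the original article):
[root@ceph-host-01 ~]# for h in ceph-host-02 ceph-host-03 ceph-host-04; do ssh $h "grep -q ceph-host-05 /etc/hosts || echo '10.30.1.225 ceph-host-05' >> /etc/hosts"; done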
 
Add the Aliyun Ceph repository. The baseurl and gpgkey lines below assume the Nautilus release on el7, following the mirrors.aliyun.com layout; adjust them if your release differs.
[root@ceph-host-05 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
 
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
 
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
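After saving the repo file it is worth refreshing the yum cache and confirming that the Ceph repositories are visible (an optional check, not part of the original walkthrough):
[root@ceph-host-05 ~]# yum clean all && yum makecache
[root@ceph-host-05 ~]# yum repolist | grep -i ceph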
 
 
Copy the SSH key from ceph-host-01 (the ceph-deploy node) to ceph-host-05:
[root@ceph-host-01 ceph-cluster]# ssh-copy-id ceph-host-05
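A quick optional check that passwordless login works before handing the node to ceph-deploy:
[root@ceph-host-01 ceph-cluster]# ssh ceph-host-05 hostname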
 
Install the ceph and ceph-radosgw packages on ceph-host-05:
[root@ceph-host-01 ceph-cluster]# ceph-deploy install --no-adjust-repos ceph-host-05
 
Alternatively, install them manually on the node itself:
[root@ceph-host-05 ~]# yum install ceph ceph-radosgw -y
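Whichever method you use, it is a good idea to confirm afterwards that the version on ceph-host-05 matches the rest of the cluster (here Nautilus):
[root@ceph-host-05 ~]# ceph --version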
 
On the admin node, push the configuration file and the admin keyring to the new Ceph node:
 
[root@ceph-host-01 ceph-cluster]# ceph-deploy admin ceph-host-05
 
Comparing /etc/ceph on ceph-host-05 before (first listing) and after (second listing) running ceph-deploy admin, the configuration file ceph.conf and the admin keyring ceph.client.admin.keyring have appeared. You could just as well copy the two files over from the admin node with scp (see the sketch after the listings).
[root@ceph-host-05 ~]# ls -lh /etc/ceph/
total 4.0K
-rw-r--r-- 1 root root 92 Feb  1 02:09 rbdmap
[root@ceph-host-05 ~]# ls -lh /etc/ceph/
total 12K
-rw------- 1 root root 151 Feb  4 21:28 ceph.client.admin.keyring
-rw-r--r-- 1 root root 644 Feb  4 21:28 ceph.conf
-rw-r--r-- 1 root root  92 Feb  1 02:09 rbdmap
-rw------- 1 root root   0 Feb  4 21:28 tmpeiAD3g
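As mentioned above, copying the two files by hand works just as well. A minimal sketch from the admin node's ceph-cluster directory, which is where ceph-deploy keeps them (remember to fix the keyring permissions afterwards, as in the next step):
[root@ceph-host-01 ceph-cluster]# scp ceph.conf ceph.client.admin.keyring ceph-host-05:/etc/ceph/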
 
Give ceph.client.admin.keyring read permission on each node:
# chmod +r /etc/ceph/ceph.client.admin.keyring
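To apply this to every node in one go, a small loop from the admin node will do (host names assumed to match the hosts file above):
[root@ceph-host-01 ceph-cluster]# for h in ceph-host-0{1..5}; do ssh $h chmod +r /etc/ceph/ceph.client.admin.keyring; done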
 
Now we can query the cluster status from the new node:
[root@ceph-host-05 ~]# ceph -s
  cluster:
    id:     272905d2-fd66-4ef6-a772-9cd73a274683
    health: HEALTH_WARN
            3 daemons have recently crashed
            1/3 mons down, quorum ceph-host-02,ceph-host-03
  services:
    mon: 3 daemons, quorum ceph-host-02,ceph-host-03 (age 31m), out of quorum: ceph-host-01
    mgr: ceph-host-02(active, since 31m), standbys: ceph-host-01, ceph-host-03
    mds: nova:1 {0=ceph-host-02=up:active} 1 up:standby
    osd: 15 osds: 15 up (since 44m), 15 in (since 3h)
  data:
    pools:   2 pools, 128 pgs
    objects: 423 objects, 1.4 GiB
    usage:   21 GiB used, 1.1 TiB / 1.2 TiB avail
    pgs:     128 active+clean
  io:
    client:   5.7 KiB/s rd, 46 KiB/s wr, 1 op/s rd, 4 op/s wr
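The HEALTH_WARN shown here (recently crashed daemons and the ceph-host-01 monitor out of quorum) is a pre-existing condition of this cluster, not a result of the steps above. On Nautilus it can be inspected, and the crash reports acknowledged, with for example:
[root@ceph-host-05 ~]# ceph health detail
[root@ceph-host-05 ~]# ceph crash ls
[root@ceph-host-05 ~]# ceph crash archive-all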
 
Add the vdb disk of the new node ceph-host-05 to the cluster as an OSD:
[root@ceph-host-01 ceph-cluster]# ceph-deploy osd create --data /dev/vdb ceph-host-05
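If you are not sure which devices are free on the new node, ceph-deploy can list them first, and a disk carrying leftover data or partitions can be wiped before creating the OSD (both steps are optional and not part of the original text):
[root@ceph-host-01 ceph-cluster]# ceph-deploy disk list ceph-host-05
[root@ceph-host-01 ceph-cluster]# ceph-deploy disk zap ceph-host-05 /dev/vdb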
 
Check the new OSD:
[root@ceph-host-05 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME             STATUS REWEIGHT PRI-AFF
-1       1.23340 root default                                  
-3       0.30835     host ceph-host-01                         
  0   hdd 0.07709         osd.0             up  1.00000 1.00000
  4   hdd 0.07709         osd.4             up  1.00000 1.00000
  8   hdd 0.07709         osd.8           down  1.00000 1.00000
12   hdd 0.07709         osd.12            up  1.00000 1.00000
-5       0.23126     host ceph-host-02                         
  1   hdd 0.07709         osd.1             up  1.00000 1.00000
  5   hdd 0.07709         osd.5             up  1.00000 1.00000
  9   hdd 0.07709         osd.9             up  1.00000 1.00000
-7       0.30835     host ceph-host-03                         
  2   hdd 0.07709         osd.2             up  1.00000 1.00000
  6   hdd 0.07709         osd.6             up  1.00000 1.00000
10   hdd 0.07709         osd.10            up  1.00000 1.00000
13   hdd 0.07709         osd.13            up  1.00000 1.00000
-9       0.30835     host ceph-host-04                         
  3   hdd 0.07709         osd.3             up  1.00000 1.00000
  7   hdd 0.07709         osd.7             up  1.00000 1.00000
11   hdd 0.07709         osd.11            up  1.00000 1.00000
14   hdd 0.07709         osd.14            up  1.00000 1.00000
-11       0.07709     host ceph-host-05                         
15   hdd 0.07709         osd.15            up  1.00000 1.00000
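osd.15 on ceph-host-05 is up and in, so CRUSH immediately starts rebalancing data onto it. Its fill level and the overall progress can be watched with, for example:
[root@ceph-host-05 ~]# ceph osd df
[root@ceph-host-05 ~]# ceph -s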
 
[root@ceph-host-05 ~]# ceph osd dump
epoch 465
fsid 272905d2-fd66-4ef6-a772-9cd73a274683
created 2020-02-03 03:13:00.528959
modified 2020-02-04 21:33:51.679093
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 35
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release nautilus
pool 6 'nova-metadata' replicated size 4 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 152 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 7 'nova-data' replicated size 4 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 152 flags hashpspool stripe_width 0 application cephfs
max_osd 16
osd.0 up   in  weight 1 up_from 423 up_thru 457 down_at 418 last_clean_interval [328,417) [v2:10.30.1.221:6802/7327,v1:10.30.1.221:6803/7327] [v2:192.168.9.211:6808/7327,v1:192.168.9.211:6809/7327] exists,up 5903a2c7-ca1f-4eb8-baff-2583e0db38c8
osd.1 up   in  weight 1 up_from 457 up_thru 458 down_at 451 last_clean_interval [278,449) [v2:10.30.1.222:6802/5678,v1:10.30.1.222:6803/5678] [v2:192.168.9.212:6800/5678,v1:192.168.9.212:6801/5678] exists,up bd1f8700-c318-4a35-a0ac-16b16e9c1179
osd.2 up   in  weight 1 up_from 431 up_thru 457 down_at 427 last_clean_interval [272,426) [v2:10.30.1.223:6810/3927,v1:10.30.1.223:6812/3927] [v2:192.168.9.213:6810/3927,v1:192.168.9.213:6812/3927] exists,up 1d4e71da-1956-48bb-bf93-af6c4eae0799
osd.3 up   in  weight 1 up_from 355 up_thru 458 down_at 351 last_clean_interval [275,352) [v2:10.30.1.224:6802/3856,v1:10.30.1.224:6803/3856] [v2:192.168.9.214:6802/3856,v1:192.168.9.214:6803/3856] exists,up ecd3b813-c1d7-4612-8448-a9834af18d8f
osd.4 up   in  weight 1 up_from 400 up_thru 457 down_at 392 last_clean_interval [273,389) [v2:10.30.1.221:6800/6694,v1:10.30.1.221:6801/6694] [v2:192.168.9.211:6800/6694,v1:192.168.9.211:6801/6694] exists,up 28488ddd-240a-4a21-a245-351472a7deaa
osd.5 up   in  weight 1 up_from 398 up_thru 454 down_at 390 last_clean_interval [279,389) [v2:10.30.1.222:6805/4521,v1:10.30.1.222:6807/4521] [v2:192.168.9.212:6803/4521,v1:192.168.9.212:6804/4521] exists,up cc8742ff-9d93-46b7-9fdb-60405ac09b6f
osd.6 up   in  weight 1 up_from 431 up_thru 457 down_at 427 last_clean_interval [273,426) [v2:10.30.1.223:6800/3929,v1:10.30.1.223:6801/3929] [v2:192.168.9.213:6800/3929,v1:192.168.9.213:6801/3929] exists,up 27910039-7ee6-4bf9-8d6b-06a0b8c3491a
osd.7 up   in  weight 1 up_from 353 up_thru 464 down_at 351 last_clean_interval [271,352) [v2:10.30.1.224:6800/3858,v1:10.30.1.224:6801/3858] [v2:192.168.9.214:6800/3858,v1:192.168.9.214:6801/3858] exists,up ef7c51dd-b9ee-44ef-872a-2861c3ad2f5a
osd.8 down in  weight 1 up_from 420 up_thru 443 down_at 454 last_clean_interval [346,418) [v2:10.30.1.221:6814/4681,v1:10.30.1.221:6815/4681] [v2:192.168.9.211:6804/2004681,v1:192.168.9.211:6805/2004681] exists 4e8582b0-e06e-497d-8058-43e6d882ba6b
osd.9 up   in  weight 1 up_from 382 up_thru 461 down_at 377 last_clean_interval [280,375) [v2:10.30.1.222:6810/4374,v1:10.30.1.222:6811/4374] [v2:192.168.9.212:6808/4374,v1:192.168.9.212:6809/4374] exists,up baef9f86-2d3d-4f1a-8d1b-777034371968
osd.10 up   in  weight 1 up_from 430 up_thru 456 down_at 427 last_clean_interval [272,426) [v2:10.30.1.223:6808/3921,v1:10.30.1.223:6809/3921] [v2:192.168.9.213:6808/3921,v1:192.168.9.213:6809/3921] exists,up b6cd0b80-9ef1-42ad-b0c8-2f5b8d07da98
osd.11 up   in  weight 1 up_from 354 up_thru 458 down_at 351 last_clean_interval [278,352) [v2:10.30.1.224:6808/3859,v1:10.30.1.224:6809/3859] [v2:192.168.9.214:6808/3859,v1:192.168.9.214:6809/3859] exists,up 788897e9-1b8b-456d-b379-1c1c376e5bf0
osd.12 up   in  weight 1 up_from 420 up_thru 458 down_at 418 last_clean_interval [383,418) [v2:10.30.1.221:6810/6453,v1:10.30.1.221:6811/6453] [v2:192.168.9.211:6814/2006453,v1:192.168.9.211:6815/2006453] exists,up bf5765f0-cb28-4ef8-a92d-f7fe1b5f2a09
osd.13 up   in  weight 1 up_from 431 up_thru 457 down_at 427 last_clean_interval [274,426) [v2:10.30.1.223:6804/3922,v1:10.30.1.223:6805/3922] [v2:192.168.9.213:6804/3922,v1:192.168.9.213:6805/3922] exists,up 54a3b38f-e772-4e6f-bb6a-afadaf766a4e
osd.14 up   in  weight 1 up_from 353 up_thru 457 down_at 351 last_clean_interval [273,352) [v2:10.30.1.224:6812/3860,v1:10.30.1.224:6813/3860] [v2:192.168.9.214:6812/3860,v1:192.168.9.214:6813/3860] exists,up 2652556d-b2a9-4bce-a4a2-3039a80f3c29
osd.15 up   in  weight 1 up_from 443 up_thru 462 down_at 0 last_clean_interval [0,0) [v2:10.30.1.225:6800/26134,v1:10.30.1.225:6801/26134] [v2:192.168.9.215:6800/26134,v1:192.168.9.215:6801/26134] exists,up 229fac50-a084-4853-860e-7fbd90a0b2fe
pg_temp 6.25 [1,2,15,7]
pg_temp 6.36 [7,9,15,10]
pg_temp 6.39 [11,15,13,5]
pg_temp 7.7 [15,6,1,3]
pg_temp 7.9 [0,2,5]
pg_temp 7.c [15,11,2,1]
pg_temp 7.11 [9,7,10]
pg_temp 7.12 [14,9,15,10]
pg_temp 7.15 [3,12,2]
pg_temp 7.1b [0,3,5]
pg_temp 7.23 [15,14,1,6]
pg_temp 7.27 [9,12,6]
pg_temp 7.2a [0,14,10]
pg_temp 7.31 [10,14,15,9]
pg_temp 7.33 [3,2,12]
pg_temp 7.37 [11,2,0]
pg_temp 7.39 [3,9,15,13]
pg_temp 7.3b [5,0,13]
pg_temp 7.3d [1,15,14,2]
blacklist 10.30.1.221:6805/1539363681 expires 2020-02-05 20:59:28.979301
blacklist 10.30.1.221:6804/1539363681 expires 2020-02-05 20:59:28.979301
blacklist 10.30.1.221:6829/1662704623 expires 2020-02-05 05:33:02.570724
blacklist 10.30.1.221:6828/1662704623 expires 2020-02-05 05:33:02.570724
blacklist 10.30.1.222:6800/1416583747 expires 2020-02-05 17:54:46.478629
blacklist 10.30.1.222:6801/1416583747 expires 2020-02-05 17:54:46.478629
blacklist 10.30.1.222:6800/3620735873 expires 2020-02-05 19:03:42.652746
blacklist 10.30.1.222:6801/3620735873 expires 2020-02-05 19:03:42.652746
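The pg_temp entries above are placement groups that are temporarily remapped while data backfills onto the new osd.15; they disappear once backfill finishes. If the recovery traffic noticeably affects client I/O, it can be throttled while the cluster rebalances (an optional tuning sketch; the values are examples only):
[root@ceph-host-01 ceph-cluster]# ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'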
 
 
Original article (in Chinese): https://www.cnblogs.com/dexter-wang/p/12259170.html