Distributed Storage: Ceph Deployment (2)

I. Deployment preparation:

Prepare 5 machines running CentOS 7.6. You can also get by with as few as 3 machines by letting the deployment node and the client share hosts with the ceph nodes:
    1 deployment node (one disk; runs ceph-deploy)
    3 ceph nodes (two disks each: the first is the system disk and also runs a mon; the second serves as the OSD data disk)
    1 client (consumes the file system, block storage, and object storage that ceph provides)
 
(1) Set up static name resolution on all ceph cluster nodes (including the client):
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.253.135 controller
192.168.253.194 compute
192.168.253.15  storage
192.168.253.10 dlp
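To confirm that name resolution works everywhere, a quick check (hostnames taken from the hosts file above) is:

  for h in dlp controller compute storage; do ping -c 1 $h; done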
 
(2) On all cluster nodes (including the client), create a cent user, set its password, and grant it passwordless sudo:
 
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty
cent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
 
(3) On the deployment node, switch to the cent user and set up passwordless SSH logins to every node, including the client:
 
  su - cent 
  ssh-keygen
  ssh-copy-id dlp
  ssh-copy-id controller
  ssh-copy-id compute
  ssh-copy-id storage
 
 
 

(4) On the deployment node, as the cent user, create the following file in cent's home directory (vi ~/.ssh/config):

Host dlp
      Hostname dlp
      User cent
Host controller
      Hostname controller
      User cent
Host compute
      Hostname compute
      User cent
Host storage
      Hostname storage
      User cent

Then set the following permissions:

   chmod 600 ~/.ssh/config
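At this point passwordless SSH from the deployment node should work; a quick sanity check (hostnames as configured above) is:

  for h in dlp controller compute storage; do ssh $h hostname; done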

II. Configure a domestic ceph repo on all nodes:

(1) On all nodes, download the Aliyun mirror repo, and delete rdo-release-yunwei.repo or move it to another directory:

  wget https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/
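Note that the wget above fetches a directory listing; what yum actually needs is a repo file pointing at that mirror. A minimal sketch of /etc/yum.repos.d/ceph.repo (the section name and gpgcheck=0 are assumptions, not from the original post):

  [ceph-jewel]
  name=Ceph Jewel (Aliyun mirror)
  baseurl=https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/
  enabled=1
  gpgcheck=0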

(2) Then run yum clean all and yum makecache to rebuild the yum metadata cache, and download the prepackaged rpm bundle:

  yum clean all && yum makecache
  wget http://download2.yunwei.edu/shell/ceph-j.tar.gz
 
 
(3) Copy the downloaded rpms to every node and install them. Note that ceph-deploy-xxxxx.noarch.rpm is needed only on the deployment node; the other nodes do not need it, but the deployment node still needs all the remaining rpm packages.
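A sketch of unpacking the bundle and pushing it out to the other nodes (the extracted directory name ceph-j is an assumption based on the archive name):

  tar -zxvf ceph-j.tar.gz
  for h in controller compute storage; do scp -r ceph-j $h:/home/cent/; done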
 
(4) Install ceph-deploy on the deployment node: as root, enter the directory with the downloaded rpm packages and run:

  yum -y localinstall ./*

 

Create the ceph working directory:
  mkdir ceph  && cd ceph
 
(5) On the deployment node (as the cent user, inside the ceph directory), create the new cluster:
 
  ceph-deploy new  controller compute storage
  vim ceph.conf
  Add: osd_pool_default_size = 2
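After the edit, ceph.conf should look roughly like the sketch below. The fsid and addresses are taken from this post's own cluster (the ceph -s output and hosts file); yours will differ:

  [global]
  fsid = 8e03f0d7-06cb-49c6-b0fa-b9764e85e61a
  mon_initial_members = controller, compute, storage
  mon_host = 192.168.253.135,192.168.253.194,192.168.253.15
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx
  osd_pool_default_size = 2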
 
(6) On the deployment node (inside the ceph directory), install the ceph software on all nodes:
 
  ceph-deploy install dlp controller compute  storage
 
 
(7) Initialize the cluster (inside the ceph directory):
 
  ceph-deploy mon create-initial
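If this succeeds, ceph-deploy gathers the cluster keyrings into the working directory: ceph.client.admin.keyring and the various bootstrap keyrings should appear next to ceph.conf (the exact file set may vary by version):

  ls    # run inside the ceph working directory; expect ceph.conf, ceph.client.admin.keyring, ceph.bootstrap-*.keyring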
 
Possible error 1: while deploying the monitors, ceph-deploy reports "monitor is not yet in quorum".

This happens because the firewall is still running; disable the firewall on every node.
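On CentOS 7 that means stopping and disabling firewalld on each node:

  systemctl stop firewalld && systemctl disable firewalld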

Then run:

  ceph-deploy --overwrite-conf  mon create-initial

 
Possible error 2:
[ceph_deploy.mon][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
[ceph_deploy][ERROR ] GenericError: Failed to create 3 monitors
Cause: ceph.conf in the working directory was modified, but the updated file was never pushed to the other nodes, so it has to be pushed out.
Fix:  ceph-deploy --overwrite-conf config push node1-4
      ceph-deploy --overwrite-conf mon create node1-4
 
(8) Partition the data disk on each ceph node:
 
  fdisk /dev/sdb                 # double-check which disk you are partitioning, and remember to save with w
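The next step hands ceph-deploy a directory rather than a raw device, so the new partition has to be formatted and mounted there first. A minimal sketch, assuming the partition created above is /dev/sdb1 (these exact commands are not in the original post):

  mkfs.xfs /dev/sdb1
  mkdir -p /var/lib/ceph/osd
  mount /dev/sdb1 /var/lib/ceph/osd
  chown ceph:ceph /var/lib/ceph/osd    # jewel runs OSDs as the ceph user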
 
(9) Prepare the OSDs (OSD = Object Storage Daemon):
 
ceph-deploy osd prepare controller:/var/lib/ceph/osd compute:/var/lib/ceph/osd storage:/var/lib/ceph/osd
(10) Activate the OSDs:
 
ceph-deploy osd activate controller:/var/lib/ceph/osd compute:/var/lib/ceph/osd storage:/var/lib/ceph/osd
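If activation succeeds, the three OSDs should come up and join the cluster; a quick check from any node is:

  ceph osd tree    # all three OSDs should show as up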
(11) From the deployment node, transfer the config files and admin key to all nodes:
 
ceph-deploy admin dlp controller compute storage

  sudo chmod 644 /etc/ceph/ceph.client.admin.keyring    # on each node, so non-root users can read the admin keyring

(12) Verify from any node in the ceph cluster:
 
[root@controller old]# ceph -s
    cluster 8e03f0d7-06cb-49c6-b0fa-b9764e85e61a
     health HEALTH_OK
     monmap e1: 3 mons at {compute=192.168.253.194:6789/0,controller=192.168.253.135:6789/0,storage=192.168.253.15:6789/0}
            election epoch 6, quorum 0,1,2 storage,controller,compute
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v2230: 64 pgs, 1 pools, 0 bytes data, 0 objects
            24995 MB used, 27186 MB / 52182 MB avail
                  64 active+clean
 
Original article: https://www.cnblogs.com/daisyyang/p/11011355.html