Integrating Ceph with OpenStack Glance

http://docs.ceph.com/docs/master/rbd/rbd-openstack/?highlight=nova#kilo

Perform the following steps on the Ceph admin-node:

1. ceph osd pool create images 128
2. ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
(ceph auth del client.glance can be used to remove a previously created keyring entry)
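To double-check both steps, the pool and the new capability entry can be inspected with standard Ceph commands (output varies by cluster):

ceph osd lspools            # the images pool should appear in the list
ceph auth get client.glance # prints the key plus the mon/osd caps granted above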

Perform the following steps on the OpenStack controller:

1. Copy ceph.conf from the admin-node to the controller's /etc/ceph directory
2. apt-get install python-rbd
3. Run ceph auth get-or-create client.glance on the admin-node, save the output on the controller as /etc/ceph/ceph.client.glance.keyring, and also append it to the controller's ceph.conf
(The append is required; otherwise uploading an image fails with: HTTPInternalServerError (HTTP 500))
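For reference, the keyring file saved in step 3 should look roughly like this (the key value is whatever ceph auth get-or-create printed; the one below is a placeholder):

[client.glance]
    key = <key printed by ceph auth get-or-create client.glance>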
4. chown glance:glance /etc/ceph/ceph.client.glance.keyring
5. vi /etc/glance/glance-api.conf
[DEFAULT]
show_image_direct_url = True
[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
# chunk size in MB: images are striped across 8 MB RADOS objects
rbd_store_chunk_size = 8
# the default local filesystem store is no longer used:
# filesystem_store_datadir = /var/lib/glance/images/
[paste_deploy]
flavor = keystone
6. service glance-api restart
7. glance image-create --name "test-image-1" --file /root/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
8. glance image-list
9. glance image-delete <uuid>
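To confirm that the upload in step 7 really landed in Ceph rather than the local filesystem store, list the pool with the rbd tool; each Glance image shows up as an RBD image named after its UUID (the UUID below is the one from the notes that follow; yours will differ):

rbd -p images ls
# b83e6d1e-c3d8-4f08-ac36-904560b32c55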

Notes:
1. rados -p images ls
rbd_header.5e3bf874fd1
rbd_directory
rbd_id.b83e6d1e-c3d8-4f08-ac36-904560b32c55
rbd_data.5e3bf874fd1.0000000000000000
rbd_data.5e3bf874fd1.0000000000000001
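
The rbd_data prefix in this listing can be tied back to the Glance image UUID with rbd info, which reports it as block_name_prefix (output abbreviated):

rbd info images/b83e6d1e-c3d8-4f08-ac36-904560b32c55 | grep block_name_prefix
# block_name_prefix: rbd_data.5e3bf874fd1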

2. rados -p images stat rbd_data.5e3bf874fd1.0000000000000000
images/rbd_data.5e3bf874fd1.0000000000000000 mtime 2015-11-03 14:53:07.000000, size 8388608

rados -p images stat rbd_data.5e3bf874fd1.0000000000000001
images/rbd_data.5e3bf874fd1.0000000000000001 mtime 2015-11-03 14:53:07.000000, size 4899328

The first object is exactly 8 MB, matching rbd_store_chunk_size = 8, and the two object sizes add up to exactly the size of cirros-0.3.4-x86_64-disk.img.
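A quick way to check this (8388608 + 4899328 = 13287936 bytes; the image path is the one used in step 7):

echo $((8388608 + 4899328))                    # prints 13287936
stat -c %s /root/cirros-0.3.4-x86_64-disk.img  # should print the same number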

Original article: https://www.cnblogs.com/IvanChen/p/4939292.html