
Ceph and OpenStack Integration Guide


This guide has been tested against OpenStack Liberty with Ceph 9.2.
 
1. Create the pools
Ceph block devices use the "rbd" pool by default, but it is recommended to create dedicated pools for Cinder and Glance:
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
ceph osd pool create vms 128
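To confirm the pools were created, list them and check the placement-group count. The value 128 above is only a starting point and should be sized to your OSD count; a quick check, assuming the admin keyring is available on this node:
ceph osd lspools
ceph osd pool get volumes pg_num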
 
2. Configure the OpenStack Ceph clients
As preparation, set up passwordless SSH from the Ceph admin node to each OpenStack service node, and make sure the login user has sudo privileges on those nodes.
 
Install the Ceph client packages:
On the glance-api node: sudo yum install python-rbd
On the nova-compute, cinder-backup, and cinder-volume nodes: sudo yum install ceph (this provides both the Python bindings and the client command-line tools)
 
The hosts running the glance-api, cinder-volume, nova-compute, and cinder-backup services all act as Ceph clients and each needs a ceph.conf.
Copy ceph.conf to each Ceph client node with the following command:
ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
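If you have several OpenStack nodes, a small loop saves repetition. This is only a sketch; openstack-node1 and openstack-node2 are placeholders for your own host names:
for host in openstack-node1 openstack-node2; do    # replace with your node names
    ssh $host sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
done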
 
3. Set up Ceph client authentication
If cephx authentication is enabled, create new Ceph users for Cinder, Glance, and Cinder Backup:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
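You can confirm that each user and its capabilities were created as intended, for example:
ceph auth get client.glance
ceph auth get client.cinder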
 
Distribute the keyrings for client.glance, client.cinder, and client.cinder-backup to the corresponding nodes and set their ownership:
ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
 
Nodes running nova-compute also need the client.cinder keyring:
ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
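As a quick check, and assuming ceph.conf and the keyring are now in place on that node, verify the compute node can reach the cluster with the cinder credentials:
ssh {your-nova-compute-server} sudo ceph -s --id cinder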
 
libvirt also needs the client.cinder key:
The libvirt process uses it to access the Ceph cluster when attaching a block device from Cinder.
First, create a temporary copy of the key on the nodes running nova-compute:
ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
 
Then log in to the compute node, register the key with libvirt as a secret, and delete the temporary files:
$ uuidgen
22003ebb-0f32-400e-9584-fa90b6efd874

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>22003ebb-0f32-400e-9584-fa90b6efd874</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
# virsh secret-define --file secret.xml
Secret 22003ebb-0f32-400e-9584-fa90b6efd874 created
# virsh secret-set-value --secret 22003ebb-0f32-400e-9584-fa90b6efd874 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Secret value set
 
For easier management, when there are multiple compute nodes it is recommended to use the same UUID in the steps above on every node, as sketched below.
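A minimal sketch of reusing one UUID everywhere, assuming compute1 and compute2 are placeholder host names and that secret.xml and client.cinder.key have already been copied to each node:
for host in compute1 compute2; do    # placeholder host names
    ssh $host "sudo virsh secret-define --file secret.xml && \
        sudo virsh secret-set-value --secret 22003ebb-0f32-400e-9584-fa90b6efd874 --base64 \$(cat client.cinder.key) && \
        rm client.cinder.key secret.xml"
done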
 
 
4. Configure Glance to use Ceph
For Juno and later releases, edit /etc/glance/glance-api.conf:
[DEFAULT]
...
default_store = rbd
...
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
 
To enable copy-on-write cloning of images, add the following to the [DEFAULT] section:
show_image_direct_url = True
 
To disable the Glance image cache, set the paste_deploy flavor as shown below:
[paste_deploy]
flavor = keystone
 
Other recommended image properties (see the example after this list):
– hw_scsi_model=virtio-scsi: add the virtio-scsi controller for better performance and support for discard operations
– hw_disk_bus=scsi: connect every Cinder block device to that controller
– hw_qemu_guest_agent=yes: enable the QEMU guest agent
– os_require_quiesce=yes: send fs-freeze/thaw calls through the QEMU guest agent
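These are image metadata properties rather than glance-api.conf options. A sketch of setting them on an existing image, assuming the openstack CLI is installed and the image is named centos7:
openstack image set centos7 \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    --property hw_qemu_guest_agent=yes \
    --property os_require_quiesce=yes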
 
OpenStack official configuration reference:
http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-image-service.html
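As a quick end-to-end check, upload an image and confirm that it lands in the images pool. This is a sketch assuming a raw image file named cirros.raw in the current directory; note that copy-on-write cloning only works with raw images:
openstack image create "cirros" --disk-format raw --container-format bare --file cirros.raw
rbd -p images --id glance ls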
 
5. Configure Cinder to use Ceph
Edit /etc/cinder/cinder.conf:
[DEFAULT]
...
enabled_backends = ceph
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
 
If cephx authentication is enabled, also configure the credentials:
[ceph]
...
rbd_user = cinder
rbd_secret_uuid = 22003ebb-0f32-400e-9584-fa90b6efd874
 
Note: if you configure multiple Cinder back ends, be sure to set glance_api_version = 2 in the [DEFAULT] section. A minimal multi-back-end sketch follows.
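A minimal sketch only; the lvm back end and its settings are illustrative assumptions, not part of this deployment:
[DEFAULT]
enabled_backends = ceph,lvm
glance_api_version = 2

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes

# illustrative second back end
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes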
 
The following table is taken from the OpenStack Liberty configuration reference:
http://docs.openstack.org/liberty/config-reference/content/ceph-rados.html
Description of Ceph storage configuration options (all under [DEFAULT]; each line shows "option = default value" followed by its type and description):
rados_connect_timeout = -1 (IntOpt) Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used.
rados_connection_interval = 5 (IntOpt) Interval value (in seconds) between connection retries to ceph cluster.
rados_connection_retries = 3 (IntOpt) Number of retries if connection to ceph cluster failed.
rbd_ceph_conf = (StrOpt) Path to the ceph configuration file
rbd_cluster_name = ceph (StrOpt) The name of ceph cluster
rbd_flatten_volume_from_snapshot = False (BoolOpt) Flatten volumes created from snapshots to remove dependency from volume to snapshot
rbd_max_clone_depth = 5 (IntOpt) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning.
rbd_pool = rbd (StrOpt) The RADOS pool where rbd volumes are stored
rbd_secret_uuid = None (StrOpt) The libvirt uuid of the secret for the rbd_user volumes
rbd_store_chunk_size = 4 (IntOpt) Volumes will be chunked into objects of this size (in megabytes).
rbd_user = None (StrOpt) The RADOS client name for accessing rbd volumes – only set when using cephx authentication
volume_tmp_dir = None (StrOpt) Directory where temporary image files are stored when the volume driver does not write them directly to the volume. Warning: this option is now deprecated, please use image_conversion_dir instead.
 
 
 
6. Configure Cinder Backup to use Ceph
OpenStack Cinder Backup runs as a dedicated service. On your Cinder Backup node, edit /etc/cinder/cinder.conf:
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
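Once cinder-backup has been restarted with this configuration, you can verify it end to end. A sketch, where <volume-id> is the ID of an existing volume:
cinder backup-create --name test-backup <volume-id>
rbd -p backups --id cinder-backup ls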
 
The following is the OpenStack documentation on configuring the Ceph backup driver for Cinder Backup:

To enable the Ceph backup driver, include the following option in the cinder.conf file:

backup_driver = cinder.backup.drivers.ceph

The following configuration options are available for the Ceph backup driver.

Table 2.52. Description of Ceph backup driver configuration options
(All options under [DEFAULT]; each line shows "option = default value" followed by its type and description.)
backup_ceph_chunk_size = 134217728 (IntOpt) The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store.
backup_ceph_conf = /etc/ceph/ceph.conf (StrOpt) Ceph configuration file to use.
backup_ceph_pool = backups (StrOpt) The Ceph pool where volume backups are stored.
backup_ceph_stripe_count = 0 (IntOpt) RBD stripe count to use when creating a backup image.
backup_ceph_stripe_unit = 0 (IntOpt) RBD stripe unit to use when creating a backup image.
backup_ceph_user = cinder (StrOpt) The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None.
restore_discard_excess_bytes = True (BoolOpt) If True, always discard excess bytes when restoring volumes i.e. pad with zeroes.

This example shows the default options for the Ceph backup driver.

backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user = cinder
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
 
7. Configure Nova to use Ceph
To boot virtual machines directly from Ceph, Nova's ephemeral storage back end must also be configured to use RBD. It is also recommended to enable the RBD cache and the Ceph admin socket.
The admin socket can be accessed as follows:
     ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok help
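Beyond help, the socket can report live statistics and the effective client configuration. A sketch using the same illustrative socket path as above:
ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok perf dump
ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok config show | grep rbd_cache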
 
On each of your compute nodes, edit the Ceph configuration file:
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20

 

Adjust the directory permissions:
mkdir -p /var/run/ceph/guests/ /var/log/qemu/
chown qemu:libvirt /var/run/ceph/guests /var/log/qemu/

 

Note: the qemu user and libvirt group above apply to Red Hat based systems.
 
Once this is configured, restart any virtual machines that are already running so that the settings above take effect.
 
For Juno and later releases, edit /etc/nova/nova.conf on each compute node:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 22003ebb-0f32-400e-9584-fa90b6efd874
disk_cachemodes="network=writeback"
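After restarting nova-compute and booting an instance, you can confirm that its ephemeral disk lives in the vms pool. A sketch, assuming the client.cinder keyring is available on the node; each instance disk is typically named after the instance UUID with a _disk suffix:
rbd -p vms --id cinder ls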
 
It is recommended to disable Nova's file injection (passwords, keys, partitions) and rely on the metadata service and cloud-init instead.
On each compute node, edit /etc/nova/nova.conf:
inject_password = false
inject_key = false
inject_partition = -2
 
To enable live-migration support, add the following to the [libvirt] section:
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
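With this flag set on both the source and destination compute nodes, a live migration can then be triggered. A sketch, where <instance-uuid> and <target-host> are placeholders:
nova live-migration <instance-uuid> <target-host>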
 
8. Restart the OpenStack services
sudo service openstack-glance-api restart
sudo service openstack-nova-compute restart
sudo service openstack-cinder-volume restart
sudo service openstack-cinder-backup restart
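On systemd-based distributions such as CentOS 7 (which the yum commands above assume), the equivalent would typically be:
sudo systemctl restart openstack-glance-api
sudo systemctl restart openstack-nova-compute
sudo systemctl restart openstack-cinder-volume
sudo systemctl restart openstack-cinder-backup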
