Troubleshooting a strange problem where a VM could not attach a volume

Symptom

Running nova volume-attach <server> <volume> returned no error, but checking the VM showed that the volume had not actually been attached.

Root cause

Suspected cause: after the VM had been running for a long time (over a year), libvirt could no longer perform a live attach.

Workaround

Shut the VM down, attach the volume while it is stopped, then start the VM again.

Investigation

Since the nova command itself reported no error, the problem was almost certainly on the compute node, so I went straight to the compute node's logs and found the following exception:

2018-06-05 13:40:32.337 160589 DEBUG nova.virt.libvirt.config [req-392fd85e-1853-4c6c-8248-310ca6289895 d31b768e1dbf4a0dbf2571234b4e2f5a 65ea11db9ebf49c69d
3c05bc38925617 - - -] Generated XML ('<disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source protocol="rbd" name="volumes/volume-433ad82b-4d8d-4d5c-b1c0-d0c88e0c397b">
    <host name="10.212.11.15" port="6789"/>
    <host name="10.212.13.30" port="6789"/>
    <host name="10.212.14.30" port="6789"/>
  </source>
  <auth username="cinder">
    <secret type="ceph" uuid="90b0641c-a0d1-4103-ad8c-d580dd7da953"/>
  </auth>
  <target bus="virtio" dev="vdc"/>
  <serial>433ad82b-4d8d-4d5c-b1c0-d0c88e0c397b</serial>
</disk>
',)  to_xml /usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py:82
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [req-392fd85e-1853-4c6c-8248-310ca6289895 d31b768e1dbf4a0dbf2571234b4e2f5a 65ea11db9ebf49c69d
3c05bc38925617 - - -] [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced] Failed to attach volume at mountpoint: /dev/vdc
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced] Traceback (most recent call last):
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1121, in attach_volume
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]     guest.attach_device(conf, persistent=True, live=live)
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 235, in attach_device
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]     self._domain.attachDeviceFlags(conf.to_xml(), flags=flags)
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]     rv = execute(f, *args, **kwargs)
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]     six.reraise(c, e, tb)
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]     rv = meth(*args, **kwargs)
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 554, in attachDeviceFlags
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced] libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized
2018-06-05 13:40:32.386 160589 ERROR nova.virt.libvirt.driver [instance: 211437a1-e4c4-40e0-ade1-b167d6251ced]

The volume is stored in Ceph. The generated XML in the log looks correct; the failure happens when calling the libvirt API to attach the device, with the error libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized.

The qemu log /var/log/libvirt/qemu/instance-0000017b.log contained this message:

2018-06-05T13:40:34.471447Z error reading header from volume-433ad82b-4d8d-4d5c-b1c0-d0c88e0c397b

It looked as if the RBD image in Ceph could not be read properly. To dig further, I enabled Ceph client debug logging by adding the following to /etc/ceph/ceph.conf on the compute node:

[client]
debug rbd = 20
log file = /var/log/ceph-client.log

Note that /var/log/ceph-client.log must be writable by the libvirt user.

After running nova volume-attach again, the Ceph client log showed:

2018-06-05 13:40:32.349046 7f99f5355c80 20 librbd: open_image: ictx = 0x7f99f785bc00 name = 'volume-433ad82b-4d8d-4d5c-b1c0-d0c88e0c397b' id = '' snap_name
 = ''
2018-06-05 13:40:32.353777 7f99f5355c80 20 librbd: detect format of volume-433ad82b-4d8d-4d5c-b1c0-d0c88e0c397b : new
2018-06-05 13:40:32.362089 7f99f5355c80 10 librbd::ImageCtx: init_layout stripe_unit 4194304 stripe_count 1 object_size 4194304 prefix rbd_data.85160113150
e34 format rbd_data.85160113150e34.%016llx
2018-06-05 13:40:32.375950 7f99f5355c80 20 librbd: ictx_refresh 0x7f99f785bc00
2018-06-05 13:40:32.377678 7f99f5355c80 -1 librbd: Image uses unsupported features: 60
2018-06-05 13:40:32.377685 7f99f5355c80 20 librbd: close_image 0x7f99f785bc00
2018-06-05 13:40:32.377688 7f99f5355c80 20 librbd: flush 0x7f99f785bc00
2018-06-05 13:40:32.377690 7f99f5355c80 20 librbd: ictx_check 0x7f99f785bc00

Note the error Image uses unsupported features: 60.

The features supported by Ceph RBD are:

  • layering: layering support, numeric value: 1
  • striping: striping v2 support, numeric value: 2
  • exclusive-lock: exclusive locking support, numeric value: 4
  • object-map: object map support (requires exclusive-lock), numeric value: 8
  • fast-diff: fast diff calculations (requires object-map), numeric value: 16
  • deep-flatten: snapshot flatten support, numeric value: 32
  • journaling: journaled IO support (requires exclusive-lock), numeric value: 64

So the 60 in the error decodes to:

60 = 32+16+8+4 = exclusive-lock, object-map, fast-diff, deep-flatten
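The decoding above can be sketched in a few lines of Python. The feature table mirrors the list of numeric values earlier in the article; the function name is ours, for illustration only:

```python
# Map of RBD feature bit values to feature names, as listed above.
RBD_FEATURES = {
    1: "layering",
    2: "striping",
    4: "exclusive-lock",
    8: "object-map",
    16: "fast-diff",
    32: "deep-flatten",
    64: "journaling",
}

def decode_features(mask):
    """Return the names of the features whose bits are set in the mask."""
    return [name for bit, name in sorted(RBD_FEATURES.items()) if mask & bit]

print(decode_features(60))
# ['exclusive-lock', 'object-map', 'fast-diff', 'deep-flatten']
```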

These features are enabled by default on format 2 RBD images, but the librbd on the compute node reports that it does not support them; according to upstream, kernel 3.11 or later is required.

One option is to set rbd_default_features = 1 in the Ceph configuration on the cinder-volume node, so that only the layering feature is enabled for new volumes. For RBD images that already exist, the unsupported features can be disabled with:

# rbd feature disable volumes/volume-433ad82b-4d8d-4d5c-b1c0-d0c88e0c397b exclusive-lock object-map fast-diff deep-flatten

After disabling these features, the volume could indeed be attached.

But something still didn't add up. Attaching volumes had never failed this way before, and the volumes already attached to running VMs all have these features enabled.

I then tested attaching the same volume to different VMs on the same compute node: some could attach it, while others hit the error above. The failing VMs shared these traits:

1. Their image is a snapshot of another VM, and the image has since been deleted.

2. Their flavor includes a swap partition, and the flavor has since been deleted.

3. They have been running for more than a year.

Points 1 and 2 reveal some very sloppy operational practices; this is an old cluster, and the mess was inherited from colleagues, so there was nothing to do but clean it up. Fortunately, deletions in the OpenStack database are soft deletes, so the deleted flavor record could be restored by editing the database manually. The image was trickier: it is not just a database record but also an image file on the filesystem, and the file cannot be undeleted. The only option was to reconstruct it from the copy cached on the compute node.

Inspect the disk of a VM on the compute node that uses this image:

# qemu-img info /var/lib/nova/instances/211437a1-e4c4-40e0-ade1-b167d6251ced/disk
image: /var/lib/nova/instances/211437a1-e4c4-40e0-ade1-b167d6251ced/disk
file format: qcow2
virtual size: 60G (64424509440 bytes)
disk size: 37G
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/141ce390743531b7da2db335d2159fa550f460c8
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

Its backing file is the image converted to raw format. Convert it back to qcow2:

# cd /var/lib/nova/instances/_base
# qemu-img convert -f raw -O qcow2 141ce390743531b7da2db335d2159fa550f460c8 141ce390743531b7da2db335d2159fa550f460c8.qcow2

Then copy the qcow2 image to the directory where the glance node stores images, rename it to the image's UUID, and finally fix up the database record:

MariaDB [glance]> update images set status='active', deleted_at=NULL,deleted=0,is_public=0,checksum='0a5a3e84558e8470946acb86a839dc02' where id='b3986e99-1988-43bb-b47c-0b34438bc189';

Note that the image's checksum must be updated; obtain it with md5sum IMAGE_FILE:

# md5sum /mfsclient/ucscontroller/glance/images/b3986e99-1988-43bb-b47c-0b34438bc189 
0a5a3e84558e8470946acb86a839dc02  /mfsclient/ucscontroller/glance/images/b3986e99-1988-43bb-b47c-0b34438bc189
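The checksum is simply the MD5 of the image file, the same value md5sum prints. A minimal Python sketch of the computation (the helper name is ours; it reads in chunks so large image files don't need to fit in memory):

```python
import hashlib

def md5_of_file(path, chunk_size=65536):
    """Compute the hex MD5 digest of a file, reading it in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()
```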

It also turned out that glance's image directory had been changed at some point, so the image_locations table needed updating too:

MariaDB [glance]> update image_locations set value='file:///mfsclient/ucscontroller/glance/images/b3986e99-1988-43bb-b47c-0b34438bc189',deleted_at=NULL,deleted=0,status='active' where image_id='b3986e99-1988-43bb-b47c-0b34438bc189';

For a normally uploaded image, this would complete the recovery. But this image was a VM snapshot, so it cannot truly be restored; all this gives us is a new image that happens to share the old image's UUID.

I created a new VM from the restored flavor and image, then attached the same test volume: no error, the attach succeeded. Since the image was not fully restored, an image problem cannot be completely ruled out, but the flavor is definitely fine. That leaves only point 3: could the VM simply have been running too long?

Looking back at the nova-compute error, the failure happens in the call to libvirt's attachDeviceFlags, which takes a flags argument. The calling code is in /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:

class Guest(object):
    ...
    def attach_device(self, conf, persistent=False, live=False):
        """Attaches device to the guest.

        :param conf: A LibvirtConfigObject of the device to attach
        :param persistent: A bool to indicate whether the change is
                           persistent or not
        :param live: A bool to indicate whether it affect the guest
                     in running state
        """
        flags = persistent and libvirt.VIR_DOMAIN_AFFECT_CONFIG or 0
        flags |= live and libvirt.VIR_DOMAIN_AFFECT_LIVE or 0
        self._domain.attachDeviceFlags(conf.to_xml(), flags=flags)

flags is determined by the persistent and live arguments passed in by the caller, in /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:

class LibvirtDriver(driver.ComputeDriver):
    ...
    def attach_volume(self, context, connection_info, instance, mountpoint,
                      disk_bus=None, device_type=None, encryption=None):
        ...
        try:
            state = guest.get_power_state(self._host)
            live = state in (power_state.RUNNING, power_state.PAUSED)

            if encryption:
                encryptor = self._get_volume_encryptor(connection_info,
                                                       encryption)
                encryptor.attach_volume(context, **encryption)

            guest.attach_device(conf, persistent=True, live=live)
        except Exception as ex:
            LOG.exception(_LE('Failed to attach volume at mountpoint: %s'),
                          mountpoint, instance=instance)
            if isinstance(ex, libvirt.libvirtError):
                errcode = ex.get_error_code()
                if errcode == libvirt.VIR_ERR_OPERATION_FAILED:
                    self._disconnect_volume(connection_info, disk_dev)
                    raise exception.DeviceIsBusy(device=disk_dev)

            with excutils.save_and_reraise_exception():
                self._disconnect_volume(connection_info, disk_dev)

persistent is hard-coded to True, and live depends on the VM's power state: True if the VM is RUNNING or PAUSED, False otherwise. Since we were attaching a volume to a running VM, live was True.
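To make the flag combination concrete, here is a small standalone sketch. The constant values mirror libvirt's VIR_DOMAIN_AFFECT_LIVE = 1 and VIR_DOMAIN_AFFECT_CONFIG = 2; the helper name is ours:

```python
# Values as defined by libvirt's virDomainModificationImpact enum.
VIR_DOMAIN_AFFECT_LIVE = 1    # affect the running domain
VIR_DOMAIN_AFFECT_CONFIG = 2  # affect the persistent domain definition

def attach_flags(persistent, live):
    """Derive the flags passed to attachDeviceFlags(), as guest.py does."""
    flags = VIR_DOMAIN_AFFECT_CONFIG if persistent else 0
    flags |= VIR_DOMAIN_AFFECT_LIVE if live else 0
    return flags

# Running VM: persistent=True, live=True  -> flags = 3 (CONFIG | LIVE)
# Stopped VM: persistent=True, live=False -> flags = 2 (CONFIG only)
```

So for a running VM libvirt is asked to modify both the live domain and its persistent definition, while for a stopped VM only the persistent definition is touched.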

So libvirt performs the attach differently for stopped and running VMs. Could it be that after a VM has been running long enough, libvirt can no longer perform the live attach? The simplest thing to try was restarting the VM and attaching again.

I shut the VM down, attached the volume while it was stopped, and started it again; the block device was then visible inside the VM. I did not dig further into libvirt's internal attach logic.

References

Need better documentation to describe RBD image features

rbd cannot map (rbd feature disable)

Original post: https://www.cnblogs.com/ltxdzh/p/9159710.html