OpenStack Common Module oslo.vmware: Creating/Deleting vCenter Virtual Machines


oslo.vmware

oslo.vmware is a common OpenStack module used to connect to vCenter and to fetch and operate on the entity resources it manages (Datacenter/Cluster/Host/VirtualMachine, etc.). The Nova project supports heterogeneous hypervisors: under /opt/stack/nova/nova/virt you can see that the supported virtualization types include hyperv/libvirt/xenapi/vmwareapi. The vmwareapi driver relies on oslo.vmware for its basic interface implementation, which Nova then further wraps and refactors.

Connect to vCenter Server

How to connect to vCenter and retrieve its entity resource objects was covered in the post "Python Module_oslo.vmware_连接 vCenter": it is done through class VMwareAPISession in the oslo_vmware.api module, which returns a session object. EXAMPLE:

from oslo_vmware import api
from oslo_vmware import vim_util

# Positional arguments: host, server_username, server_password,
# api_retry_count, task_poll_interval.
session = api.VMwareAPISession(
            '<vCenter_Server_IPAddress>',
            '<server_username>',
            '<server_password>',
            1,     # api_retry_count
            0.1)   # task_poll_interval, in seconds
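
As a quick sanity check (a minimal sketch; the placeholder credentials above obviously need real values), the session exposes the vSphere ServiceContent, so you can for example print the server's product string:

# Print the vCenter product name/version to confirm the session is usable.
about = session.vim.service_content.about
print(about.fullName)

# When finished, the session can be released explicitly.
# session.logout()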

Create VirtualMachine for vCenter

A virtual machine is created on vCenter by calling Folder.CreateVM_Task(); the properties and methods of the resulting VirtualMachine managed object are then used to configure it.

Folder.CreateVM_Task() takes a VirtualMachineConfigSpec data object as its parameter. The VirtualMachineConfigSpec lets you set the properties of the virtual machine to be created. You must also specify a Host or a ResourcePool (or both), since the virtual machine consumes CPU and memory from the host's resource pool.

Folder.CreateVM_Task(_this, config, pool, host)
    # _this (ManagedObjectReference) – the Folder in which the virtual machine will be placed.
    # config (VirtualMachineConfigSpec) – the VM configuration: CPU, memory, network and so on.
    # pool (ManagedObjectReference to a ResourcePool) – the resource pool from which the VM obtains its resources.
    # host (ManagedObjectReference to a Host) – the target host on which the VM will run.

NOTE: If you call Folder.CreateVM_Task() against a standalone ESXi host, the host parameter can be omitted. If the target host is part of a VMware DRS cluster, the host parameter is optional; when no host is specified, the system picks one automatically.
EXAMPLE:

# Get the Datacenter inventory
datacenter = session.invoke_api(
                            vim_util,
                            'get_objects',
                            session.vim,
                            'Datacenter',
                            100)

# Select the target Datacenter for the new virtual machine
datacenter = datacenter.objects[1]

# Select the target Folder for the new virtual machine
vmFolder = session.invoke_api(vim_util, 'get_object_properties_dict', session.vim,
                              datacenter.obj,
                              'vmFolder')

# Get the ComputeResource (Cluster) inventory
computeResource = session.invoke_api(
                            vim_util,
                            'get_objects',
                            session.vim,
                            'ComputeResource',
                            100)

# Select the target Cluster for the new virtual machine
cluster = computeResource.objects[2]

# Select the target ResourcePool for the new virtual machine
resourcePool = session.invoke_api(vim_util, 'get_object_properties_dict', session.vim,
                                  cluster.obj,
                                  'resourcePool')

# Create the virtual machine configuration spec object
client_factory = session.vim.client.factory
config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')

# Fill in the virtual machine configuration
import uuid
instance_uuid = str(uuid.uuid4())
data_store_name = 'datastore1'

config_spec.name = 'jmilkfan_vm'
config_spec.guestId = 'otherGuest'
config_spec.instanceUuid = instance_uuid
vm_file_info = client_factory.create('ns0:VirtualMachineFileInfo')
vm_file_info.vmPathName = "[" + data_store_name + "]"
config_spec.files = vm_file_info

tools_info = client_factory.create('ns0:ToolsConfigInfo')
tools_info.afterPowerOn = True
tools_info.afterResume = True
tools_info.beforeGuestStandby = True
tools_info.beforeGuestShutdown = True
tools_info.beforeGuestReboot = True
config_spec.tools = tools_info

config_spec.numCPUs = 1            # in Nova this is int(instance.vcpus)
config_spec.numCoresPerSocket = 1  # in Nova: int(extra_specs.cores_per_socket)
config_spec.memoryMB = 512         # in Nova: int(instance.memory_mb)

extra_config = []
opt = client_factory.create('ns0:OptionValue')
opt.key = "nvp.vm-uuid"
opt.value = instance_uuid
extra_config.append(opt)
config_spec.extraConfig = extra_config

# Create the virtual machine
vm_create_task = session.invoke_api(session.vim, 
                                    "CreateVM_Task", 
                                    vmFolder['vmFolder'], 
                                    config=config_spec, 
                                    pool=resourcePool['resourcePool'])
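
CreateVM_Task() is asynchronous and returns a Task reference rather than the VM itself. A sketch of waiting for it and powering the new VM on (wait_for_task() is provided by VMwareAPISession; the power-on step is just the typical next action, not part of the original example):

# Block until the creation task finishes; on success the task result is the
# new VirtualMachine managed object reference.
task_info = session.wait_for_task(vm_create_task)
vm_ref = task_info.result

# Optionally power the new VM on.
power_on_task = session.invoke_api(session.vim, "PowerOnVM_Task", vm_ref)
session.wait_for_task(power_on_task)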

ERROR 1: The object does not support this operation.
TSG: Check that the Datacenter and Cluster are valid inventory objects. In this example, the Datacenter "ECONE" and the Cluster "cluster" both contain hosts and are operating normally.

In [190]: datacenter.objects[1]
Out[190]: 
(ObjectContent){
   obj = 
      (obj){
         value = "datacenter-2"
         _type = "Datacenter"
      }
   propSet[] = 
      (DynamicProperty){
         name = "name"
         val = "ECONE"
      },
 }

In [191]:  computeResource.objects[2]
Out[191]: 
(ObjectContent){
   obj = 
      (obj){
         value = "domain-c7"
         _type = "ClusterComputeResource"
      }
   propSet[] = 
      (DynamicProperty){
         name = "name"
         val = "cluster"
      },
 }

ERROR 2: Invalid configuration parameter config.files
TSG: Check whether the fields of config_spec meet the minimum requirements for creating a virtual machine.
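
A minimal sketch of the fields that most often trigger this fault (the datastore name is an assumption carried over from the example above):

# The spec at least needs a name, a guest OS identifier, and a files section
# whose vmPathName points at a datastore; note the attribute is 'files'.
config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
config_spec.name = 'jmilkfan_vm'
config_spec.guestId = 'otherGuest'

vm_file_info = client_factory.create('ns0:VirtualMachineFileInfo')
vm_file_info.vmPathName = "[datastore1]"
config_spec.files = vm_file_info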

NOTE 1: session.invoke_api() actually calls the given method of the specified module, passing the remaining arguments through to that method as its actual parameters, and finally returns the response of the API call.
HELP:

Help on method invoke_api in module oslo_vmware.api:

invoke_api(self, module, method, *args, **kwargs) method of oslo_vmware.api.VMwareAPISession instance
    Wrapper method for invoking APIs.

    The API call is retried in the event of exceptions due to session
    overload or connection problems.

    :param module: module corresponding to the VIM API call
    :param method: method in the module which corresponds to the
                   VIM API call
    :param args: arguments to the method
    :param kwargs: keyword arguments to the method
    :returns: response from the API call
    :raises: VimException, VimFaultException, VimAttributeException,
             VimSessionOverLoadException, VimConnectionException

So if you are interested, you can look at how the method is implemented inside that module.

NOTE 2: Note that when module == session.vim, module is actually a VIM object, so the method is not implemented as a function in a Python module; it is resolved and dispatched by class Vim.
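
The two calling patterns therefore look like this (a sketch reusing the session from above; the arguments are just illustrative):

# Pattern 1: module is a plain Python helper module (vim_util). invoke_api()
# ends up calling vim_util.get_objects(session.vim, 'Datacenter', 100),
# adding retry handling around it.
datacenters = session.invoke_api(vim_util, 'get_objects',
                                 session.vim, 'Datacenter', 100)

# Pattern 2: module is session.vim itself. The method name is a vSphere API
# operation resolved by class Vim and sent as a SOAP call, e.g. CurrentTime
# on the ServiceInstance managed object.
si_moref = vim_util.get_moref('ServiceInstance', 'ServiceInstance')
now = session.invoke_api(session.vim, 'CurrentTime', si_moref)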

Commonly Used VirtualMachine Configuration Options

# /opt/stack/nova/nova/virt/vmwareapi/vm_util.py +204 ==> get_vm_create_spec()

client_factory = session.vim.client.factory
config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')

config_spec.name = name
config_spec.guestId = os_type

# The name is the unique identifier for the VM
config_spec.instanceUuid = instance.uuid
config_spec.annotation = metadata

# Set the HW version
config_spec.version = extra_specs.hw_version

# Allow nested hypervisor instances to host 64 bit VMs.
if os_type in ("vmkernel5Guest", "vmkernel6Guest", "windowsHyperVGuest"):
    config_spec.nestedHVEnabled = "True"

# Append the profile spec
if profile_spec:
    config_spec.vmProfile = [profile_spec]

# Datastore for VM
vm_file_info = client_factory.create('ns0:VirtualMachineFileInfo')
vm_file_info.vmPathName = "[" + data_store_name + "]"
config_spec.files = vm_file_info

# VMTools for VM
tools_info = client_factory.create('ns0:ToolsConfigInfo')
tools_info.afterPowerOn = True
tools_info.afterResume = True
tools_info.beforeGuestStandby = True
tools_info.beforeGuestShutdown = True
tools_info.beforeGuestReboot = True
config_spec.tools = tools_info

# CPU Info for VM
config_spec.numCPUs = int(instance.vcpus)

# Memory for VM
config_spec.memoryMB = int(instance.memory_mb)
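
The spec can also carry CPU/memory resource limits through standard vSphere data objects; a hedged sketch (the MHz values are illustrative and not taken from Nova):

# CPU limit/reservation expressed through ResourceAllocationInfo (values in MHz).
cpu_allocation = client_factory.create('ns0:ResourceAllocationInfo')
cpu_allocation.limit = 2000
cpu_allocation.reservation = 500

shares = client_factory.create('ns0:SharesInfo')
shares.level = 'normal'   # with 'normal', the numeric shares value is ignored
shares.shares = 0
cpu_allocation.shares = shares

config_spec.cpuAllocation = cpu_allocation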

Delete VirtualMachine from vCenter

The logic for deleting a virtual machine is comparatively simple:
1. Find the virtual machine.
2. Destroy it.

# Get the VirtualMachine inventory
vms = session.invoke_api(vim_util,
                         'get_objects',
                         session.vim,
                         'VirtualMachine',
                         100)

# Get the instanceUuid of the target virtual machine
vms.objects[10]

summary = session.invoke_api(vim_util, 'get_object_properties_dict', session.vim,
                              vms.objects[10].obj,
                              'summary')

instance_uuid = summary['summary'].config.instanceUuid

# Look up the virtual machine managed object by instanceUuid
vm = session.invoke_api(session.vim, 
                         "FindAllByUuid",
                         session.vim.service_content.searchIndex,
                         uuid=instance_uuid,
                         vmSearch=True,
                         instanceUuid=True)

vm_ref = vm[0]

# Delete the virtual machine
destroy_task = session.invoke_api(session.vim, 
                                  "Destroy_Task", 
                                  vm_ref)
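
Destroy_Task is asynchronous as well, so it can be waited on like the create task. Note that a VM that is still powered on cannot be destroyed; a hedged sketch of the usual handling:

# A running VM would first have to be powered off (before the Destroy_Task call):
#   poweroff_task = session.invoke_api(session.vim, "PowerOffVM_Task", vm_ref)
#   session.wait_for_task(poweroff_task)

# Wait for the deletion to finish; the VM is removed from the inventory and
# its files are deleted from the datastore.
session.wait_for_task(destroy_task)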

Of course, in a real production environment the asset information of virtual machines created earlier (or already owned) is stored in a database, so the instanceUuid would be obtained from the database instead.

Original article: https://www.cnblogs.com/hzcya1995/p/13310744.html