OpenStack Notes

************************Batch-print VM information********************

nova list | awk '{print $2}'|grep 360 |xargs -n1 -t nova show               // batch-print details of all VMs whose ID contains 360
nova list | awk '{print $4}'|grep worker |xargs -t -I {} nova show {}      // batch-print details of all VMs whose name contains worker
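
The same idea written as an explicit loop, which is easier to extend (a minimal sketch; "worker" is the name substring to match):

nova list | awk -F'|' '$3 ~ /worker/ {gsub(/ /,"",$2); print $2}' | while read id; do
    nova show "$id"
done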

*******************Check host resource usage********************

openstack host show compute1

+------------------------------+----------------------------------+-----+-----------+---------+
| Host                         | Project                          | CPU | Memory MB | Disk GB |
+------------------------------+----------------------------------+-----+-----------+---------+
| compute1                     | (total)                          |  56 |    385406 |     812 |
| compute1                     | (used_now)                       |  50 |    184832 |     328 |
| compute1                     | (used_max)                       |  50 |    184320 |     328 |
+------------------------------+----------------------------------+-----+-----------+---------+
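
Note: roughly, (total) is the host's physical capacity, (used_now) is what is in use right now (including the host's own overhead), and (used_max) is the sum of the resources defined by the flavors of all instances on the host.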

*********Understanding virsh XML files******************************

https://libvirt.org/format.html

https://libvirt.org/formatdomain.html#elementsDisks

*************Check IP address usage per network******************

# neutron net-ip-availability-list
+--------------------------------------+--------------+-----------+----------+
| network_id                           | network_name | total_ips | used_ips |
+--------------------------------------+--------------+-----------+----------+
| cbfdf14f-b24a-42ee-ae4f-00822cbed32f | tenant_2     |         5 |        1 |
| a2933e8d-ced2-44cb-846e-66194f9b6be8 | test_net1    |       151 |        4 |
| badfa621-ac4b-4b3c-869e-32f0f9ddbb37 | tenant_1     |         5 |        1 |
+--------------------------------------+--------------+-----------+----------+

*************Check IP addresses and VLAN info per network******************

# neutron net-list -c id -c name -c provider:segmentation_id

# neutron subnet-list -c id -c name -c cidr

# neutron net-list -c id -c name -c provider:segmentation_id -c subnets

*************************************Host Aggregate*******************************

https://docs.openstack.org/nova/queens/user/aggregates.html

Related commands in nova:

aggregate-add-host          Add the host to the specified aggregate.
aggregate-create            Create a new aggregate with the specified details.
aggregate-delete            Delete the aggregate.
aggregate-details           Show details of the specified aggregate.
aggregate-list              Print a list of all aggregates.
aggregate-remove-host       Remove the specified host from the specified aggregate.
aggregate-set-metadata      Update the metadata associated with the aggregate.
aggregate-update            Update the aggregate's name and optionally availability zone.
availability-zone-list      List all the availability zones.

Key points:

++ A host can be placed in multiple Host Aggregates, but can belong to only one Availability Zone.

++ An Availability Zone is created via the nova aggregate-create command.

++ Hosts must first be removed from a Host Aggregate with nova aggregate-remove-host before that aggregate can be deleted.
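
A minimal end-to-end sketch (aggregate, zone, metadata, and host names are placeholders):

nova aggregate-create fast-io my-az           # creates the aggregate and, here, the AZ "my-az"
nova aggregate-set-metadata fast-io ssd=true  # metadata the scheduler can match against
nova aggregate-add-host fast-io compute1
nova aggregate-remove-host fast-io compute1   # hosts out first...
nova aggregate-delete fast-io                 # ...then the aggregate can be deleted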

******************cinder*************************************

Why deleting a volume can be very slow:

https://ask.openstack.org/en/question/64894/delete-a-volume-very-slow/

==> In cinder.conf, set volume_clear = none or volume_clear_size = 50.
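
For example (the option lives in the [DEFAULT] section; restart cinder-volume afterwards):

[DEFAULT]
volume_clear = none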

Command: cinder get-pools --detail

cinder service-list

cinder availability-zone-list

***********************ceilometer*************************

Sample commands:

ceilometer sample-list -m hardware.memory.total

// disk read throughput (bytes per second)
ceilometer sample-list -m disk.read.bytes.rate -l 6 -q resource=Resource_ID
// disk write throughput (bytes per second)
ceilometer sample-list -m disk.write.bytes.rate -l 6 -q resource=Resource_ID
// disk read requests per second
ceilometer sample-list -m disk.read.requests.rate -l 6 -q resource=Resource_ID
// disk write requests per second
ceilometer sample-list -m disk.write.requests.rate -l 6 -q resource=Resource_ID
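
The four meters can also be pulled for one resource in a loop (a sketch; RES must hold a real resource ID):

RES=Resource_ID
for m in disk.read.bytes.rate disk.write.bytes.rate disk.read.requests.rate disk.write.requests.rate; do
    ceilometer sample-list -m "$m" -l 6 -q resource=$RES
done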

*************************nova scheduler log**********************************

In nova-scheduler.log you can see each filter's result: start: 7 is the number of candidate hosts entering the filter, and end: 7 is the number that passed it.

An end: 0 for PciPassthroughFilter means no host met the PCI passthrough requirement:

root@cic:/var/log/nova# cat nova-scheduler.log|grep 356e4818-7819-4207-add3-4592d40678b8
2018-07-12T14:03:48.775287+08:00 {{ nova-scheduler[6157]: 2018-07-12 14:03:48.774 6157 INFO nova.filters [req-e13577a7-a2d5-49ed-aed0-65b0ec581526 cd44b708e91a4735bc159a3d1fcce956 38a6a1808b374d11a1a723a57309eeb8 - - -] Filtering removed all hosts for the request with instance ID '356e4818-7819-4207-add3-4592d40678b8'. Filter results: ['AggregateMultiTenancyIsolation: (start: 8, end: 8)', 'RetryFilter: (start: 8, end: 7)', 'AvailabilityZoneFilter: (start: 7, end: 7)', 'RamFilter: (start: 7, end: 7)', 'CoreFilter: (start: 7, end: 7)', 'DiskFilter: (start: 7, end: 7)', 'ComputeFilter: (start: 7, end: 7)', 'ComputeCapabilitiesFilter: (start: 7, end: 7)', 'ImagePropertiesFilter: (start: 7, end: 7)', 'AggregateInstanceExtraSpecsFilter: (start: 7, end: 7)', 'SameHostFilter: (start: 7, end: 7)', 'DifferentHostFilter: (start: 7, end: 7)', 'ServerGroupAntiAffinityFilter: (start: 7, end: 7)', 'ServerGroupAffinityFilter: (start: 7, end: 7)', 'PciPassthroughFilter: (start: 7, end: 0)']

Note: if the output of nova show contains no compute host information, the instance already failed at the scheduling stage.

*************************nova logs**********************************

The nova logs record how long each request took to process:

Example: nova-api.log

2018-10-05T18:00:09.969435+08:00 {{ nova-api[22140]: 2018-10-05 18:00:09.969 22140 INFO nova.osapi_compute.wsgi.server [req-429fcae3-5694-446e-ab86-6089d5a601f8 63a4c2dd9be44692941823a831712627 d6701fb237e946719667c06cf61c7b21 - - -] 3200::6848:73 "GET /v2.1/servers/detail?all_tenants=1&host=compute-1.domain.test HTTP/1.1" status: 200 len: 3964 time: 0.1234751

Note: a heavily loaded, slow-responding API may be related to too many instance records in the database, e.g. many instances left in deleted or error state.
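
A quick way to surface the slowest requests from such log lines (a sketch; it assumes the format shown above, where the duration follows "time:"):

grep 'nova.osapi_compute.wsgi.server' nova-api.log | awk -F'time: ' '{print $2}' | sort -rn | head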

Example: nova-compute.log

2018-10-08T21:46:34.812868+08:00 compute-1.domain nova-compute[110839]: 2018-10-08 21:46:34.812 110839 INFO nova.compute.manager [req-73c6302f-df2a-4001-bc5f-3f5eadd1cb6c 288b67a8179d4d39b980fe7566e1fb57 d6701fb237e946719667c06cf61c7b21 - - -] [instance: 1263f4a8-d49b-442d-9cc4-85c20534e9f0] Took 12.56 seconds to build instance. 

**************************nova************************

Create a new server

openstack server create
    (--image <image> | --image-property <key=value> | --volume <volume>)
    --flavor <flavor>
    [--security-group <security-group>]
    [--key-name <key-name>]
    [--property <key=value>]
    [--file <dest-filename=source-filename>]
    [--user-data <user-data>]
    [--availability-zone <zone-name>]
    [--block-device-mapping <dev-name=mapping>]
    [--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid,auto,none>]
    [--network <network>]
    [--port <port>]
    [--hint <key=value>]
    [--config-drive <config-drive-volume>|True]
    [--min <count>]
    [--max <count>]
    [--wait]
    <server-name>

--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid,auto,none>

Create a NIC on the server. Specify the option multiple times to create multiple NICs. Either net-id or port-id must be provided, but not both. net-id: attach NIC to network with this UUID, port-id: attach NIC to port with this UUID, v4-fixed-ip: IPv4 fixed address for NIC (optional), v6-fixed-ip: IPv6 fixed address for NIC (optional), none: (v2.37+) no network is attached, auto: (v2.37+) the compute service will automatically allocate a network. Specifying a --nic of auto or none cannot be used with any other --nic value.

Note: instance-specific data can be supplied in three ways: 1) metadata (--property); 2) user-data, i.e. the data delivered via the config drive (--user-data); 3) file injection via the --file option.
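
A combined example using all three channels plus a fixed IP (a sketch; the image, flavor, net UUID, and file paths are placeholders):

openstack server create \
    --image ubuntu1604 \
    --flavor m1.small \
    --nic net-id=net-uuid,v4-fixed-ip=10.0.0.10 \
    --property role=webserver \
    --user-data ./init.sh \
    --file /root/motd=./motd.txt \
    --wait myvm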

*********************************************************************

~# nova diagnostics VM1
+--------------------+--------------+
| Property           | Value        |
+--------------------+--------------+
| cpu0_time          | 224240000000 |
| cpu1_time          | 218990000000 |
| hdd_errors         | -1           |
| hdd_read           | 37088        |
| hdd_read_req       | 16           |
| hdd_write          | 0            |
| hdd_write_req      | 0            |
| memory             | 4194304      |
| memory-actual      | 4194304      |
| memory-available   | 3915332      |
| memory-major_fault | 1406         |
| memory-minor_fault | 1578133      |
| memory-rss         | 90036        |
| memory-swap_in     | 0            |
| memory-swap_out    | 0            |
| memory-unused      | 2245212      |
| vda_errors         | -1           |
| vda_read           | 890119168    |
| vda_read_req       | 55861        |
| vda_write          | 132972544    |
| vda_write_req      | 1165         |
| vdb_errors         | -1           |
| vdb_read           | 268605440    |
| vdb_read_req       | 17670        |
| vdb_write          | 1841225728   |
| vdb_write_req      | 16671        |
+--------------------+--------------+
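
Note: the cpu*_time values are in nanoseconds, the memory* values in KiB (4194304 KiB = 4 GiB here), and the hdd_*/vd*_ read and write counters in bytes.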

*********************cinder availability zone**********************

/// Does adding a cinder availability zone require first adding the availability zone in nova???

/// Then manually edit cinder.conf and add storage_availability_zone=my_zone_name. (on the cinder node)

To add a new zone, you can add a new cinder-volume service, change its default zone, and start the service.
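
A sketch of the cinder-node side (the zone name is a placeholder; restart the service afterwards):

# in cinder.conf:
[DEFAULT]
storage_availability_zone = my_zone_name

# then:
service cinder-volume restart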

*****cloud-init*********************

See http://www.cnblogs.com/CloudMan6/p/6431771.html
The cloud-init configuration file is /etc/cloud/cloud.cfg:
    To let root log in to the instance directly (root login is disabled by default), set: disable_root: 0
    To allow ssh password login (by default only private-key login works), set: ssh_pwauth: 1
cloud-init runs its initialization every time an instance boots. To change the initialization behavior of all instances, modify the image's /etc/cloud/cloud.cfg;
to change only a single instance's behavior, edit that instance's own /etc/cloud/cloud.cfg (???).
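
The same two settings can also be supplied per instance at boot via --user-data, instead of editing files in the image (a minimal #cloud-config sketch):

#cloud-config
disable_root: 0
ssh_pwauth: 1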

***************arp spoof****************

This needs to be set on both the controller and compute nodes, in /etc/neutron/plugins/ml2/openvswitch_agent.ini:
prevent_arp_spoofing = True

*********keystone Fernet Token**********************

Keystone has had four token formats; Fernet tokens are what is used now. Related docs:

https://blog.csdn.net/xiongchun11/article/details/53886416

Fernet - Frequently Asked Questions:

https://docs.openstack.org/keystone/latest/admin/identity-fernet-token-faq.html

IBM:

https://developer.ibm.com/opentech/2015/11/11/deep-dive-keystone-fernet-tokens/


****************************************************************************

Each compute host's /var/lib/nova/instances directory contains two kinds of directories:

The first is _base, which holds all the base images cached from glance.

The others are named instance-xxxxxx, one for each instance running on that compute host; the files inside are tied to a file in the _base directory. They are essentially delta files, containing only the changes made on top of the corresponding _base image.
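
qemu-img makes this relationship visible (the instance directory name is a placeholder); its "backing file" field points back into _base:

qemu-img info /var/lib/nova/instances/<instance-uuid>/disk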

Database connection string format:

mysql://<username>:<password>@<hostname>/<database name> 
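
For example (hypothetical credentials and host):

mysql://nova:NOVA_DBPASS@controller/nova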

**********************instance admin password*************************************

https://docs.openstack.org/nova/pike/admin/admin-password-injection.html

Admin password injection is disabled by default. To enable it, set this option in /etc/nova/nova.conf:

[libvirt]
inject_password=true

nova boot --image ubuntu1604 --flavor m1.summit --admin-pass mypassword mycustomrootpasswordinstance

https://access.redhat.com/solutions/2213451#

  • How to use the adminPass in nova to set the root password while spawning an instance?
  • If I didn't add any keypair to the instance and didn't capture the adminPass during the nova boot command, how can I find this password?
  • How to set the root password of an instance while spawning it?

*******************************************dhcp*******************

neutron.conf -> dhcp_lease_duration=86400

********************************************rabbitmq****************************************

~# rabbitmqctl list_queues|grep cinder

~# rabbitmqctl status

**********************************How an OpenStack VM's NIC is created*****************************

https://zhuanlan.zhihu.com/p/31695924  (https://blog.csdn.net/dylloveyou/article/details/78735482) (NIC creation process)

https://blog.csdn.net/bc_vnetwork/article/details/51771366 (detailed VM creation process)

http://bodenr.blogspot.com/  (http://bodenr.blogspot.com/2014/03/openstack-nova-boot-server-call-diagram.html#more)
https://blog.csdn.net/bc_vnetwork/article/details/52231418 (detailed logs)

Note: the base_mac parameter in the control node's neutron.conf configures the range of generated MAC addresses.
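
For example, with the default base_mac = fa:16:3e:00:00:00, every generated MAC address starts with fa:16:3e.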

 

**************************openstack domain, project, user, role**************************

http://dy.163.com/v2/article/detail/D2ISUN3L0511Q0OL.html

++ Domain - a collection of projects and users; in a public or private cloud it often represents one customer.

++ Users must be associated with at least one project, though they may belong to many. Therefore, add at least one project before adding users.

++ The same user can be added to multiple projects;

++ A domain can be specified when creating both a project and a user;

++ Roles are independent of domains (roles that you create must map to roles specified in policy.json):

$ openstack role create --help
usage: openstack role create [-h] [-f {json,shell,table,value,yaml}]
                             [-c COLUMN] [--max-width <integer>] [--noindent]
                             [--prefix PREFIX] [--or-show]
                             <role-name>

++ Can a user be added to a project even when the user and the project are in different domains???

++ To avoid typing the password in plain text when creating a user:

$ openstack user create --domain default  --password-prompt admin

++ If a project is specified when creating a user, the user only shows up in that project after being granted a role:

 root@server1:~# openstack user create --project test-project --password pwd123 user66
 +--------------------+----------------------------------+
 | Field              | Value                            |
 +--------------------+----------------------------------+
 | default_project_id | 2d1eba68aae94dedaa5488296f7ff340 |
 | domain_id          | default                          |
 | enabled            | True                             |
 | id                 | dec1bfe9a14440b596578297754efeb1 |
 | name               | user66                           |
 +--------------------+----------------------------------+
 root@server1:~# openstack user list --project test-project
 
 root@server1:~# openstack role add --user user66 --project 2d1eba68aae94dedaa5488296f7ff340 projectAdmin
 root@server1:~# openstack user list --project test-project
 +----------------------------------+----------+
 | ID                               | Name     |
 +----------------------------------+----------+
 | dec1bfe9a14440b596578297754efeb1 | user66   |
 +----------------------------------+----------+

 

***********************************neutron qos policy***************************

Add both an ingress and an egress rule to the same policy (a rule created without --direction defaults to egress):

# neutron qos-policy-create test_qos1

# neutron qos-bandwidth-limit-rule-create --max-kbps=100000 --max-burst-kbps=20000 test_qos1 --direction ingress

# neutron qos-bandwidth-limit-rule-create --max-kbps=200000 --max-burst-kbps=40000 test_qos1

# neutron qos-policy-show test_qos1
+-------------+--------------------------------------------------------------+
| Field       | Value                                                        |
+-------------+--------------------------------------------------------------+
| description |                                                              |
| id          | 07efb2d2-1851-4d99-a514-7b5e517dd5a8                         |
| name        | test_qos1                                                    |
| rules       | cc16ba01-ce1a-4120-8558-1112d83c1429 (type: bandwidth_limit) |
|             | 84b9649d-82c9-4ab8-a698-1eecce42d5ce (type: bandwidth_limit) |
| shared      | False                                                        |
| tenant_id   | d6701fb237e946719667c06cf61c7b21                             |
+-------------+--------------------------------------------------------------+

 

*********************Host reboot***********************

When the host reboots, it saves the state of the running VMs; if the host is reset before a VM's memory has been fully saved, the saved state file ends up corrupted and the VM cannot be started again.

The VM state files are kept under /var/lib/libvirt/qemu/save/;

simply delete the corresponding file and then start the VM.
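
Alternatively, virsh can discard a domain's saved state cleanly (the domain name is a placeholder):

virsh managedsave-remove <domain>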

 

****************virsh / ovs-appctl commands********************

virsh nodedev-list --tree

virsh nodedev-dumpxml <pci>

ovs-appctl bond/show

ovs-appctl bond/show <bond>

Original source: https://www.cnblogs.com/bjtime/p/9233305.html