B01 - Environment Introduction

Environment description:

Controller nodes:

controller01    10.100.214.201

controller02    10.100.214.202

controller03    10.100.214.203

Controller node hardware:

Memory: 8 GB, CPU: 4 cores

Disk: 20 GB (system disk)

Disk: 50 GB (ceph osd)

NICs: 4 (two host-only, two NAT)

Compute nodes:

compute01    10.100.214.205

compute02    10.100.214.206

compute03    10.100.214.207

Compute node hardware:

Memory: 4 GB, CPU: 2 cores

Disk: 20 GB (system disk)

Disk: 50 GB (ceph osd)

NICs: 4 (two host-only, two NAT)
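Before moving on, it is worth confirming on each VM that the extra data disk and the four NICs described above are actually visible to the operating system. A minimal check, assuming the standard CentOS 7 tooling (util-linux and iproute) is present:

lsblk                 # the 50G ceph osd disk should appear next to the 20G system disk
ip -br link show      # four NICs should be listed (names such as ens192/ens224/ens161/ens256 depend on the VM hardware)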

Per-node component layout (Host / IP / Service / Remark):

controller01
IP:
ens192 (Management + API + Message + Storage Public Network): 10.100.201.201
ens224 (External Network): 10.100.202.201
ens161 (Tunnel Tenant Network): 115.115.115.201
ens256 (Vlan Tenant Network)
Service:
1. keystone
2. glance-api, glance-registry
3. nova-api, nova-conductor, nova-consoleauth, nova-scheduler, nova-novncproxy
4. neutron-api, neutron-linuxbridge-agent, neutron-dhcp-agent, neutron-metadata-agent, neutron-l3-agent
5. cinder-api, cinder-scheduler
6. dashboard
7. ceph-mon, ceph-mgr
8. mariadb, rabbitmq, memcached, etc.
Remark:
1. Control node: keystone, glance, horizon, and the nova & neutron management components
2. Network node: VM networking, L2/L3, dhcp, routing, NAT, etc.
3. Storage node: scheduling and monitoring (ceph) components
4. OpenStack base services

controller02
IP:
ens192 (Management + API + Message + Storage Public Network): 10.100.201.202
ens224 (External Network): 10.100.202.202
ens161 (Tunnel Tenant Network): 115.115.115.202
ens256 (Vlan Tenant Network)
Service: same as controller01
Remark: same as controller01

controller03
IP:
ens192 (Management + API + Message + Storage Public Network): 10.100.201.203
ens224 (External Network): 10.100.202.203
ens161 (Tunnel Tenant Network): 115.115.115.203
ens256 (Vlan Tenant Network)
Service: same as controller01
Remark: same as controller01

compute01
IP:
eth0 (Management + Message + Storage Public Network): 10.100.201.205
ens224 (External Network): 10.100.202.205
ens161 (Tunnel Tenant Network): 115.115.115.205
ens256 (Vlan Tenant Network)
Service:
1. nova-compute
2. neutron-linuxbridge-agent
3. cinder-volume (if the backend uses shared storage, deploying it on the controller nodes is recommended)
4. ceph-osd
Remark:
1. Compute node: hypervisor (kvm)
2. Network node: VM networking, etc.
3. Storage node: volume service and related components

compute02
IP:
ens192 (Management + API + Message + Storage Public Network): 10.100.201.206
ens224 (External Network): 10.100.202.206
ens161 (Tunnel Tenant Network): 115.115.115.206
ens256 (Vlan Tenant Network)
Service: same as compute01
Remark: same as compute01

compute03
IP:
ens192 (Management + API + Message + Storage Public Network): 10.100.201.207
ens224 (External Network): 10.100.202.207
ens161 (Tunnel Tenant Network): 115.115.115.207
ens256 (Vlan Tenant Network)
Service: same as compute01
Remark: same as compute01
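For reference, on CentOS 7 each interface is configured through an ifcfg file under /etc/sysconfig/network-scripts. The following is only a sketch of what the management interface could look like on controller01, using the ens192 address from the layout above; the exact contents (prefix, gateway, DNS) are assumptions and must match the real network:

[root@controller01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens192
TYPE=Ethernet
BOOTPROTO=static
NAME=ens192
DEVICE=ens192
ONBOOT=yes
IPADDR=10.100.201.201
PREFIX=24

systemctl restart network     # apply the change (CentOS 7 network service)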

1. Configure name resolution between the hosts:

[root@controller01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.100.214.201 controller01
10.100.214.202 controller02
10.100.214.203 controller03

10.100.214.205 compute01
10.100.214.206 compute02
10.100.214.207 compute03
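The same /etc/hosts file must exist on every node. One way to push it from controller01 is key-based SSH; the sketch below is only an example (ssh-copy-id asks for each node's root password once):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa        # generate a key pair once on controller01
for host in controller02 controller03 compute01 compute02 compute03; do
    ssh-copy-id root@$host                      # install the public key on the target node
    scp /etc/hosts root@$host:/etc/hosts        # distribute the hosts file
done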

2. Disable the firewall and SELinux:

systemctl stop firewalld

systemctl disable firewalld
sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
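These commands must be run on every node. Note that setenforce 0 only switches SELinux to permissive for the running session; the sed line makes the change permanent after the next reboot. A quick way to confirm the result:

systemctl is-active firewalld      # expected: inactive
systemctl is-enabled firewalld     # expected: disabled
getenforce                         # expected: Permissive now, Disabled after a reboot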

3. Configure time synchronization:

controller01 acts as the primary NTP server; the other nodes synchronize their time from controller01.

yum install chrony -y

[root@controller01 ~]# egrep -v "^#|^$" /etc/chrony.conf
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 10.100.214.0/24
local stratum 10
logdir /var/log/chrony

systemctl start chronyd 
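Once chronyd is running on controller01, it should also be enabled at boot, and a quick check confirms that the upstream CentOS pool servers are reachable:

systemctl enable chronyd        # start chronyd automatically at boot
chronyc sources -v              # one of the pool servers should eventually be marked with '*' once synced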

Configure the other nodes as follows:

sed -i "s/^server.*/#&/g" /etc/chrony.conf
echo "server 10.100.214.201 iburst" >> /etc/chrony.conf

[root@controller02 ~]# egrep -v "^#|^$" /etc/chrony.conf
server 10.100.214.201 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony

systemctl start chronyd 
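On the other nodes, the same check should show controller01 as the only time source:

systemctl enable chronyd
chronyc sources                 # 10.100.214.201 should be listed and eventually marked with '*'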

4. Install the OpenStack base packages:

[root@controller01 ~]# yum install python-openstackclient openstack-selinux -y
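python-openstackclient and openstack-selinux are needed on every node, not just controller01, and the matching OpenStack release repository has to be enabled first. The example below uses centos-release-openstack-train purely as an illustration; the actual release package depends on the OpenStack version being deployed:

yum install centos-release-openstack-train -y             # example release repo, adjust to the target release
yum install python-openstackclient openstack-selinux -y
openstack --version                                        # sanity check that the client is installed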

Original article: https://www.cnblogs.com/zhaopei123/p/13074176.html