How to Install TripleO Undercloud (OpenStack) on RHEL 7
A guide to installing TripleO Undercloud (OpenStack) with Red Hat OpenStack Platform 10 on virtual machines using virt-manager (RHEL).
Red Hat OpenStack Platform director is a toolset for installing and managing a complete OpenStack environment.
It is based primarily on the OpenStack project TripleO, which is short for OpenStack-On-OpenStack.
Red Hat OpenStack Platform director therefore uses two main concepts:
Undercloud
Overcloud
Before starting with the steps to install the TripleO undercloud, let us understand some basic terminology.
Undercloud
The undercloud is the main director node. It is a single-system OpenStack installation that includes the components for provisioning and managing the OpenStack nodes that form your OpenStack environment (the overcloud).
The main goals of the undercloud are:
Discovery of the bare-metal servers on which the OpenStack Platform will be deployed
Acting as the deployment manager for the software to be deployed on those nodes
Defining complex network topologies and configurations for the deployment
Rolling out software updates and configurations to the deployed nodes
Reconfiguring an existing undercloud deployment environment
Enabling high-availability support for the OpenStack nodes
Overcloud
The overcloud is the resulting Red Hat OpenStack Platform environment created using the undercloud.
It includes the different node roles that we define based on the OpenStack Platform environment we want to create.
So that was a brief overview of OpenStack; now let us start with the steps to install the TripleO undercloud and deploy the overcloud in OpenStack.
My environment:
I plan to have one controller and one compute node as part of my overcloud deployment.
One physical host to run the undercloud and overcloud virtual machines
One Red Hat OpenStack Platform director (VM)
One Red Hat OpenStack Platform Compute node (VM)
One Red Hat OpenStack Platform Controller node (VM)
Physical host requirements (minimum)
Below are the minimum requirements recommended by Red Hat for the host acting as the hypervisor:
A dual-core 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions
A minimum of 16 GB of RAM
At least 40 GB of available disk space on the root disk
A minimum of 2 x 1 Gbps network interface cards
Red Hat Enterprise Linux 7.x / CentOS 7.x installed as the host operating system
SELinux enabled on the host
My setup details
Below is my physical host configuration:
Operating System | CentOS 7.4 |
Hostname | openstack.example |
Bridge IP (nm-bridge1) | 10.43.138.12 |
External network (virbr0) | 192.168.122.0/24 GW: 192.168.122.1 |
Provisioning network (virbr1) | 192.168.126.0/24 GW: 192.168.126.254 |
RAM | 128 GB |
Disk | 900 GB |
CPU | dual-core |
IMPORTANT NOTE:
While installing the physical host, make sure you install the GNOME Desktop along with all the virtualization-related rpms; otherwise we can install them manually later using:
# yum install libvirt-client libvirt-daemon qemu-kvm libvirt-daemon-driver-qemu libvirt-daemon-kvm virt-install bridge-utils rsync
NOTE:
I recommend using CentOS for the physical host, because we will need VirtualBMC to perform the power-related activities. On RHEL, even though we are using OpenStack 10, we would additionally need a valid subscription to rhel-7-server-openstack-11-rpms, whereas on CentOS we can download VirtualBMC from the RDO project.
Network requirements
The undercloud host needs a minimum of two networks:
Provisioning network - provides DHCP and PXE boot functions to help discover the bare-metal systems for use in the overcloud. Typically, this network must use a native VLAN on a trunked interface so that the director can serve the PXE boot and DHCP requests.
External network - a separate network for remote connectivity to all the nodes. The interface connecting to this network requires a routable IP address, either defined statically or dynamically through an external DHCP service.
Deployment flow (steps in this article)
In brief, below is the flow of "Install TripleO Undercloud and deploy Overcloud in OpenStack":
First bring up the physical host
Install a new virtual machine for the undercloud director
Set the hostname of the director
Configure the repositories, or subscribe to RHN
Install python-tripleoclient
Configure undercloud.conf
Install the undercloud
Obtain and upload the images for overcloud introspection and deployment
Create virtual machines for the overcloud nodes (compute and controller)
Configure the Virtual Bare Metal Controller
Import and register the overcloud nodes
Introspect the overcloud nodes
Tag the overcloud nodes to profiles
Finally begin the deployment of the overcloud nodes
Install TripleO Undercloud (OpenStack)
On my physical host (openstack) we already have a "default" network:
[root@openstack ~]# virsh net-list
 Name        State    Autostart   Persistent
---------------------------------------------------------
 default     active   yes         yes
We will destroy this network and create the "external" and "provisioning" networks instead.
[root@openstack ~]# virsh net-destroy default
[root@openstack ~]# virsh net-undefine default
[root@openstack ~]# virsh net-list
 Name        State    Autostart   Persistent
---------------------------------------------------------
Next, create the "external" network using the template below, where I use "192.168.122.1" as the gateway assigned to the physical host.
[root@openstack ~]# cat /tmp/external.xml
<network>
  <name>external</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
  </ip>
</network>
Now define this network and make it start automatically at boot:
[root@openstack ~]# virsh net-define /tmp/external.xml
[root@openstack ~]# virsh net-autostart external
[root@openstack ~]# virsh net-start external
Next, verify the new network:
[root@openstack ~]# virsh net-list
 Name        State    Autostart   Persistent
---------------------------------------------------------
 external    active   yes         yes
Similarly, create the provisioning network with 192.168.126.254 as the gateway:
[root@openstack ~]# cat /tmp/provisioning.xml
<network>
  <name>provisioning</name>
  <ip address='192.168.126.254' netmask='255.255.255.0'>
  </ip>
</network>
Now define this network and make it start automatically at boot:
[root@openstack ~]# virsh net-define /tmp/provisioning.xml
[root@openstack ~]# virsh net-autostart provisioning
[root@openstack ~]# virsh net-start provisioning
Finally, verify the new list of virtual networks:
[root@openstack ~]# virsh net-list
 Name           State    Autostart   Persistent
---------------------------------------------------------
 external       active   yes         yes
 provisioning   active   yes         yes
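If you want this check in script form, the state of each network can be queried with `virsh net-info`. Below is a minimal sketch; the `net_state` helper name is my own, and the function falls back gracefully on machines where the `virsh` CLI is not installed:

```shell
# Report whether a given libvirt network is active.
# Prints a fallback message when virsh is not available on this machine.
net_state() {
    if command -v virsh >/dev/null 2>&1; then
        # Extract the "Active:" field from the net-info output.
        virsh net-info "$1" | awk '/^Active:/ {print $2}'
    else
        echo "virsh not available"
    fi
}

net_state external
net_state provisioning
```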
Check the network configuration. As we can see, we now have the two bridges "virbr0" and "virbr1" from the networks we created above.
[root@openstack ~]# ifconfig
eno51: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 9c:dc:71:77:ef:51  txqueuelen 1000  (Ethernet)
        RX packets 100888  bytes 5670187 (5.4 MiB)
        RX errors 0  dropped 208  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
eno52: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 9c:dc:71:77:ef:59  txqueuelen 1000  (Ethernet)
        RX packets 54461086  bytes 81543828070 (75.9 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2985822  bytes 438043585 (417.7 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1  (Local Loopback)
        RX packets 152875  bytes 9356602 (8.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 152875  bytes 9356602 (8.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
nm-bridge1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.43.138.12  netmask 255.255.255.224  broadcast 10.43.138.31
        inet6 fe80::9edc:71ff:fe77:ef59  prefixlen 64  scopeid 0x20
        ether 9c:dc:71:77:ef:59  txqueuelen 1000  (Ethernet)
        RX packets 8015838  bytes 77945540204 (72.5 GiB)
        RX errors 0  dropped 240  overruns 0  frame 0
        TX packets 2725594  bytes 416996466 (397.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
virbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:4e:e8:2c  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 160 (160.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
virbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.126.254  netmask 255.255.255.0  broadcast 192.168.126.255
        ether 52:54:00:c9:37:63  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc54:ff:fea1:8128  prefixlen 64  scopeid 0x20
        ether fe:54:00:a1:81:28  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 74  bytes 4788 (4.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
vnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc54:ff:fe33:e8b4  prefixlen 64  scopeid 0x20
        ether fe:54:00:33:e8:b4  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 33  bytes 1948 (1.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Also check the network connectivity to the gateways:
[root@openstack ~]# ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.040 ms
^C
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
[root@openstack ~]# ping 192.168.126.254
PING 192.168.126.254 (192.168.126.254) 56(84) bytes of data.
64 bytes from 192.168.126.254: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 192.168.126.254: icmp_seq=2 ttl=64 time=0.069 ms
^C
--- 192.168.126.254 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.058/0.063/0.069/0.009 ms
Configure OpenStack with KVM-based Nested Virtualization
When using virtualization technologies such as KVM, we can take advantage of "Nested VMX" (i.e. the ability to run KVM on top of KVM), which makes the VMs in the cloud (the Nova guests) run faster than with plain QEMU emulation.
Check whether the "nested KVM" kernel parameter is enabled:
[root@openstack ~]# cat /sys/module/kvm_intel/parameters/nested
N
Add the following content to /etc/modprobe.d/kvm.conf:
[root@openstack ~]# vim /etc/modprobe.d/kvm.conf
options kvm_intel nested=Y
Reboot the node and check the "nested KVM" kernel parameter again:
[root@openstack ~]# cat /sys/module/kvm_intel/parameters/nested
Y
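The same check can be wrapped into a small helper, sketched below; the `nested_state` name is my own, and the optional path argument exists only so that the logic can be exercised against a test file instead of the live sysfs entry:

```shell
# Report whether nested KVM is enabled, based on the kvm_intel module
# parameter. An alternative file path can be passed for testing.
nested_state() {
    p="${1:-/sys/module/kvm_intel/parameters/nested}"
    if [ -r "$p" ]; then
        case "$(cat "$p")" in
            Y|1) echo "enabled" ;;
            *)   echo "disabled" ;;
        esac
    else
        echo "kvm_intel module not loaded"
    fi
}

nested_state
```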
Update the "/etc/hosts" content on the physical host (openstack). I plan to use "192.168.122.90" for my director node, so I have added the same here:
[root@openstack ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.90  director.example  director
Disable firewalld on the host (openstack) machine:
[root@openstack ~]# systemctl stop firewalld
[root@openstack ~]# systemctl disable firewalld
Create the Director Virtual Machine
We can manually create the virtual machine for the director node here. Below are my specs and node details:
Operating System | CentOS 7.4 |
Hostname | director.example |
vCPUs | 4 |
Memory | 20480 MB |
Disk (format: qcow2) | 60 GB |
Public network (ens3) MAC: 52:54:00:a1:81:28 | 10.43.138.27 |
Provisioning network (ens4) MAC: 52:54:00:33:e8:b4 | 192.168.126.1 |
External network (ens9) MAC: 52:54:00:86:83:c0 | 192.168.122.90 |
Set the hostname of the undercloud
The director requires a fully qualified domain name for its installation and configuration process, which means we may need to set the hostname of the director host:
# hostnamectl set-hostname director.example
# hostnamectl set-hostname --transient director.example
The director also requires an entry for the system's hostname and base name in /etc/hosts:
[stack@director ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.90  director.example  director
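A quick way to confirm that name resolution is in place is `getent`, which consults /etc/hosts as well as DNS. A minimal sketch (the `resolves` helper name is my own):

```shell
# Check that a name resolves through the system resolver (/etc/hosts, DNS).
resolves() {
    if getent hosts "$1" >/dev/null 2>&1; then
        echo "$1 resolves"
    else
        echo "$1 does not resolve"
    fi
}

resolves director.example
resolves director
```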
Below is the network configuration of my director node:
[root@director network-scripts]# ifconfig
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.43.138.27  netmask 255.255.255.0  broadcast 10.43.138.255
        inet6 fe80::5054:ff:fea1:8128  prefixlen 64  scopeid 0x20
        ether 52:54:00:a1:81:28  txqueuelen 1000  (Ethernet)
        RX packets 1393  bytes 75417 (73.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 78  bytes 7833 (7.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.126.1  netmask 255.255.255.0  broadcast 192.168.126.255
        inet6 fe80::5054:ff:fe33:e8b4  prefixlen 64  scopeid 0x20
        ether 52:54:00:33:e8:b4  txqueuelen 1000  (Ethernet)
        RX packets 2  bytes 130 (130.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 77  bytes 4226 (4.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.122.90  netmask 255.255.255.0  broadcast 192.168.122.255
        inet6 fe80::5054:ff:fe86:83c0  prefixlen 64  scopeid 0x20
        ether 52:54:00:86:83:c0  txqueuelen 1000  (Ethernet)
        RX packets 1238  bytes 87817 (85.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 805  bytes 220059 (214.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1  (Local Loopback)
        RX packets 251  bytes 20716 (20.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 251  bytes 20716 (20.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Below is the network configuration file of my "public network", which is used for direct connectivity from my laptop:
[root@director network-scripts]# cat ifcfg-ens3
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.43.138.27
PREFIX=24
GATEWAY=10.43.138.30
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=ens3
UUID=e7dab5ae-06c6-4855-bf1e-487919fe13a2
DEVICE=ens3
ONBOOT=yes
Similarly, below is the network configuration file of my "provisioning network":
[root@director network-scripts]# cat ifcfg-ens4
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.126.1
PREFIX=24
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=ens4
UUID=8f6b534e-2ee1-4bc8-9159-27be0214d507
DEVICE=ens4
ONBOOT=yes
And below is my network configuration file for the "external network":
[root@director network-scripts]# cat ifcfg-ens9
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=ens9
UUID=7ab31c05-3da6-4609-a55f-c63c078e8f19
DEVICE=ens9
ONBOOT=yes
IPADDR=192.168.122.90
PREFIX=24
Below are my route files:
[root@director network-scripts]# cat route-ens4
ADDRESS0=192.168.126.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.126.254
METRIC0=0
[root@director network-scripts]# cat route-ens9
ADDRESS0=192.168.122.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.122.1
METRIC0=0
And below is the resulting routing table:
[root@director network-scripts]# ip route show
default via 10.43.138.30 dev ens3 proto static metric 100
10.43.138.0/24 dev ens3 proto kernel scope link src 10.43.138.27 metric 100
192.168.122.0/24 via 192.168.122.1 dev ens9 proto static
192.168.122.0/24 dev ens9 proto kernel scope link src 192.168.122.90 metric 102
192.168.126.0/24 via 192.168.126.254 dev ens4 proto static
192.168.126.0/24 dev ens4 proto kernel scope link src 192.168.126.1 metric 101
Lastly, make sure we are able to ping all the gateways:
[root@director network-scripts]# ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.269 ms
64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.315 ms
64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.335 ms
^C
--- 192.168.122.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.269/0.306/0.335/0.031 ms
[root@director network-scripts]# ping 192.168.126.254
PING 192.168.126.254 (192.168.126.254) 56(84) bytes of data.
64 bytes from 192.168.126.254: icmp_seq=1 ttl=64 time=0.410 ms
64 bytes from 192.168.126.254: icmp_seq=2 ttl=64 time=0.337 ms
64 bytes from 192.168.126.254: icmp_seq=3 ttl=64 time=0.365 ms
^C
--- 192.168.126.254 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.337/0.370/0.410/0.037 ms
Configure the Repositories
Since I do not have direct internet access from this director node, I have synced the required online repositories onto my openstack host and serve them over http as offline repositories:
[root@director network-scripts]# cat /etc/yum.repos.d/rhel.repo
[rhel-7-server-extras-rpms]
name=rhel-7-server-extras-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-extras-rpms/
gpgcheck=0
enabled=1
[rhel-7-server-rh-common-rpms]
name=rhel-7-server-rh-common-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-rh-common-rpms/
gpgcheck=0
enabled=1
[rhel-7-server-rpms]
name=rhel-7-server-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-rpms/
gpgcheck=0
enabled=1
[rhel-7-server-openstack-10-devtools-rpms]
name=rhel-7-server-openstack-10-devtools-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-openstack-10-devtools-rpms/
gpgcheck=0
enabled=1
[rhel-7-server-openstack-10-rpms]
name=rhel-7-server-openstack-10-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-openstack-10-rpms/
gpgcheck=0
enabled=1
[rhel-7-server-satellite-tools-6.2-rpms]
name=rhel-7-server-satellite-tools-6.2-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-satellite-tools-6.2-rpms/
gpgcheck=0
enabled=1
[rhel-ha-for-rhel-7-server-rpms]
name=rhel-ha-for-rhel-7-server-rpms
baseurl=http://192.168.122.1/repo/rhel-ha-for-rhel-7-server-rpms/
gpgcheck=0
enabled=1
Disable firewalld on the director node:
[root@director ~]# systemctl stop firewalld
[root@director ~]# systemctl disable firewalld
Install the Director Packages
Use the following command to install the command-line tools required for the director's installation and configuration:
[root@director ~]# yum install -y python-tripleoclient
Create a User for the Undercloud Deployment
The undercloud and overcloud deployment must be done as a normal user rather than as root, so we will create a "stack" user for this purpose.
[root@director ~]# useradd stack
[root@director network-scripts]# echo redhat | passwd --stdin stack
Changing password for user stack.
passwd: all authentication tokens updated successfully.
[root@director ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
stack ALL=(root) NOPASSWD:ALL
[root@director ~]# chmod 0440 /etc/sudoers.d/stack
[root@director ~]# su - stack
Last login: Mon Oct  8 08:54:44 IST 2016 on pts/0
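Since a sudoers drop-in with unusual permissions can cause sudo to complain or refuse the file, it does not hurt to verify the conventional 0440 mode was applied. A small sketch (the `check_sudoers` helper name is my own; the path argument exists only so the logic can be tested against an arbitrary file):

```shell
# Verify that the sudoers drop-in exists and has the conventional 0440 mode.
check_sudoers() {
    f="${1:-/etc/sudoers.d/stack}"
    if [ ! -f "$f" ]; then
        echo "$f not found"
    elif [ "$(stat -c '%a' "$f")" = "440" ]; then
        echo "mode OK"
    else
        echo "unexpected mode: $(stat -c '%a' "$f")"
    fi
}

check_sudoers
```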
Configure the Undercloud Deployment Parameters
Copy the sample undercloud.conf file to the stack user's home directory as shown below:
[stack@director ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
Now update or add the following variables in undercloud.conf. These variables will be used to set up the undercloud node.
[stack@director ~]$ vim undercloud.conf
[DEFAULT]
local_ip = 192.168.126.1/24
undercloud_public_vip = 192.168.126.2
undercloud_admin_vip = 192.168.126.3
local_interface = ens4
masquerade_network = 192.168.126.0/24
dhcp_start = 192.168.126.100
dhcp_end = 192.168.126.150
network_cidr = 192.168.126.0/24
network_gateway = 192.168.126.1
inspection_iprange = 192.168.126.160,192.168.126.199
generate_service_certificate = true
certificate_generation_ca = local
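One easy mistake in undercloud.conf is letting the DHCP pool and the introspection range overlap; they must be two disjoint ranges inside network_cidr. A quick sanity check on the values above, as a sketch that assumes a /24 network (so only the last octet needs comparing):

```shell
# Compare the last octet of the DHCP pool end against the start of the
# introspection range. Only valid for a /24 network_cidr (an assumption
# of this sketch).
last_octet() { echo "${1##*.}"; }

dhcp_end=192.168.126.150       # from dhcp_end above
insp_start=192.168.126.160     # from inspection_iprange above

if [ "$(last_octet "$dhcp_end")" -lt "$(last_octet "$insp_start")" ]; then
    echo "OK: DHCP pool ends before the introspection range starts"
else
    echo "WARNING: the two ranges overlap"
fi
```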
We can refer to the official Red Hat documentation to understand each of the parameters used here.
Install the TripleO Undercloud
The undercloud deployment is fully automated and uses the puppet manifests provided by TripleO. This launches the director's configuration script; the director installs additional packages and configures its services to suit the settings in undercloud.conf.
NOTE:
It will take quite some time for the complete configuration to finish.
[stack@director ~]$ openstack undercloud install
** output trimmed **
#############################################################################
Undercloud install complete.

The file containing this installation's passwords is at
/home/stack/undercloud-passwords.conf.

There is also a stackrc file at /home/stack/stackrc.

These files are needed to interact with the OpenStack services, and should be
secured.
#############################################################################
NOTE:
If you face any issue while configuring the undercloud node, check the /home/stack/.instack/install-undercloud.log file for the installation-related logs.
The configuration is performed using the python script /usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py.
Once complete, the configuration script generates two files:
undercloud-passwords.conf - a list of all the passwords for the director's services.
stackrc - a set of initialization variables to help you access the director's command-line tools.
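To actually use the stackrc file, source it into the shell before running any `openstack` CLI command. A minimal sketch, guarded so it only proceeds where the file really exists (i.e. on the director node):

```shell
# Source the undercloud credentials and query the service catalog.
# Only meaningful on the director node, where ~/stackrc exists.
use_stackrc() {
    if [ -f "$HOME/stackrc" ]; then
        . "$HOME/stackrc"
        openstack catalog list
    else
        echo "stackrc not found; run this on the director node"
    fi
}

use_stackrc
```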
Review the configured network interfaces of the undercloud. The "br-ctlplane" bridge is the "192.168.126.1" provisioning network, the "ens9" interface is the "192.168.122.90" external network, and "ens3" with "10.43.138.27" is the public network.
[root@director ~]# ip a | grep -E 'br-ctlplane|ens9|ens3'
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.43.138.27/24 brd 10.43.138.255 scope global noprefixroute ens3
4: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.122.90/24 brd 192.168.122.255 scope global noprefixroute ens9
6: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 192.168.126.1/24 brd 192.168.126.255 scope global br-ctlplane
    inet 192.168.126.3/32 scope global br-ctlplane
    inet 192.168.126.2/32 scope global br-ctlplane
In the next article I will continue with "Install TripleO Undercloud and deploy Overcloud in OpenStack".
There I will cover the steps to deploy the overcloud with a single controller and a single compute node.