How to install multi-node OpenStack on VirtualBox using Packstack on CentOS 7


In this article, I will cover the steps to install multi-node OpenStack on VirtualBox using CentOS 7 Linux. Packstack is used to perform the multi-node OpenStack deployment, mostly for proof-of-concept (POC) purposes, since it does not expose all of OpenStack's features. Even though we are installing multi-node OpenStack on VirtualBox, the setup consists of only two nodes. Both nodes are virtual machines running CentOS 7 on Oracle VirtualBox on a Windows 10 host.

Ideally, the OpenStack services could also be spread across separate individual nodes rather than concentrated on a single node.

Lab environment

Component   Configuration        Details
HDD1        20GB                 OS installation. LVM recommended (on both nodes)
HDD2        10GB                 Cinder storage. The LVM volume group must be created (controller node server1.example.com only)
RAM         8GB                  On both nodes
NIC1        Bridged Adapter      Used to connect to the Internet to download packages (on both nodes)
NIC2        Internal Network     Used for the internal network between the OpenStack nodes (on both nodes)
vCPU        4                    On both nodes
OS          CentOS 7.6           On both nodes
Hostname    server1.example.com (controller)

            server2.example.com (compute)

Tip:

Here "server1" will act as the controller while "server2" will be the compute node in our OpenStack setup.

To configure Cinder storage on our controller node, I have already created a separate volume group named cinder-volumes.

Packstack will use this volume group to create the Cinder storage.

[root@server1 ~]# vgs
  VG             #PV #LV #SN Attr   VSize   VFree
  centos           1   2   0 wz--n- <19.00g      0
  cinder-volumes   1   0   0 wz--n- <10.00g <10.00g

Here my operating system is installed under the centos volume group, while cinder-volumes will later be used by Packstack.
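If the cinder-volumes volume group does not exist yet, it can be created on the second disk before running Packstack. A minimal sketch, assuming the 10GB Cinder disk shows up as /dev/sdb (confirm the device name with lsblk first):

[root@server1 ~]# lsblk
[root@server1 ~]# pvcreate /dev/sdb
[root@server1 ~]# vgcreate cinder-volumes /dev/sdb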

Configure the network to install multi-node OpenStack on VirtualBox

The network configuration is critical for correctly setting up and installing multi-node OpenStack on VirtualBox. As mentioned in the lab environment section, I am using two NIC adapters, and both of my hosts are running CentOS 7.

For the internal network adapter we rely on the DHCP server built into Oracle VirtualBox. We can create an internal network with a DHCP server on the Windows 10 laptop using the following command:

C:\Program Files\Oracle\VirtualBox>VBoxManage dhcpserver add --netname mylab --ip 10.10.10.1 --netmask 255.255.255.0 --lowerip 10.10.10.2 --upperip 10.10.10.64 --enable
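To confirm that the DHCP server for the internal network was registered, it can be listed from the same directory (a quick check):

C:\Program Files\Oracle\VirtualBox>VBoxManage list dhcpservers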

So, below is my IP address configuration.

Note:

It is recommended to assign a static IP to the bridged interface, because DHCP may keep changing that IP, which would break the OpenStack configuration.
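A minimal static configuration sketch for the bridged interface, assuming it is eth0 and using the 192.168.0.0/24 addressing from this article (adjust the values to your own LAN):

[root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.0.120
PREFIX=24
GATEWAY=192.168.0.1
DNS1=8.8.8.8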

[root@server1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:f6:20:06 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.120/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef6:2006/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:4b:7a:80 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.2/24 brd 10.10.10.255 scope global dynamic eth1
       valid_lft 1086sec preferred_lft 1086sec
    inet6 fe80::a00:27ff:fe4b:7a80/64 scope link
       valid_lft forever preferred_lft forever

Preparations for OpenStack deployment with Packstack

Before starting with the steps to install multi-node OpenStack on VirtualBox using Packstack, a few mandatory prerequisites must be met.

Deploying OpenStack behind a proxy (optional)

Perform this step only if your systems need a proxy server to connect to the Internet. I have written another article covering the steps to configure a proxy in a Linux environment.

To be able to install multi-node OpenStack on VirtualBox with Packstack from behind a proxy, perform the following steps:

Create the file /etc/environment (if it does not exist yet) to hold the default proxy settings, and add the following content to it:

http_proxy=http://user:password@proxy_server:proxy_port
https_proxy=https://user:password@proxy_server:proxy_port
no_proxy=localhost, 127.0.0.1, yourdomain.com, your.ip.add.ress

We also need to include no_proxy because, by default, all traffic would be routed through the proxy defined above. We do not want localhost traffic to go via the proxy, so if one OpenStack node tries to communicate with another OpenStack node, we must make sure that traffic does not pass through the proxy even though a proxy is configured.

Optionally, we can also add these values to yum.conf so that yum on the CentOS 7 servers connects through the proxy server.
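A minimal sketch of the matching /etc/yum.conf entries, using the same placeholder values as above:

proxy=http://proxy_server:proxy_port
proxy_username=user
proxy_password=password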

Update the Linux hosts

Before starting to install and/or configure OpenStack, it is a good idea to update the Linux hosts so that all the latest available patches and updates are installed. We can update both CentOS 7 Linux systems with:

# yum update -y

We may also reboot the Linux hosts after this step, since a new kernel may have been installed.

Update /etc/hosts

It is important that the systems can resolve the hostnames of the local host and the compute node. For this we can either configure a DNS server or use the local /etc/hosts file:

# echo "192.168.0.120   server1 server1.example.com" >> /etc/hosts
# echo "192.168.0.121   server2 server2.example.com" >> /etc/hosts

Adjust the values on both nodes according to your environment.

Disable consistent network device naming

Red Hat Enterprise Linux provides consistent and predictable network device naming for network interfaces. These features change the names of the network interfaces on a system to make them easier to locate and differentiate. For OpenStack I have to disable this feature, because we need the traditional naming convention, i.e. ethXX.

To disable consistent network device naming, add net.ifnames=0 biosdevname=0 to /etc/sysconfig/grub on both nodes.

[root@server1 ~]# grep GRUB_CMDLINE_LINUX /etc/sysconfig/grub
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet net.ifnames=0 biosdevname=0"

Next, rebuild GRUB2:

[root@server1 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

Next, reboot the Linux systems to activate the change.

Disable NetworkManager

The current OpenStack setup is not compatible with NetworkManager, so we must disable NetworkManager on both nodes. The commands below stop and disable the NetworkManager service:

[root@server1 ~]# systemctl disable NetworkManager --now
[root@server2 ~]# systemctl disable NetworkManager --now

Set up the RDO repository to install the Packstack utility

RDO (the Red Hat community distribution of OpenStack) is a community for using and deploying OpenStack on CentOS, Fedora and Red Hat Enterprise Linux. The Packstack tool is not available in the default CentOS repositories, so we must install the RDO repository on both CentOS 7 Linux hosts:

[root@server1 ~]# yum install -y https://rdoproject.org/repos/rdo-release.rpm

Next, we can verify the list of available repositories:

[root@server1 ~]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.piconets.webwerks.in
 * extras: mirrors.piconets.webwerks.in
 * openstack-stein: mirrors.piconets.webwerks.in
 * rdo-qemu-ev: mirrors.piconets.webwerks.in
 * updates: mirrors.piconets.webwerks.in
repo id                                        repo name                                         status
!base/7/x86_64                                 CentOS-7 - Base                                   10,097
!extras/7/x86_64                               CentOS-7 - Extras                                    305
!openstack-stein/x86_64                        OpenStack Stein Repository                         2,198
!rdo-qemu-ev/x86_64                            RDO CentOS-7 - QEMU EV                                83
!updates/7/x86_64                              CentOS-7 - Updates                                   711
repolist: 13,394

Important note:

If you are using Oracle VirtualBox, it is recommended to take a snapshot of the virtual machines in this state, so that in case something fails later you can easily restore the snapshot and start again from this stage.
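A snapshot can also be taken from the Windows host with VBoxManage; a sketch, assuming the virtual machines are named server1 and server2 in VirtualBox:

C:\Program Files\Oracle\VirtualBox>VBoxManage snapshot "server1" take "pre-packstack"
C:\Program Files\Oracle\VirtualBox>VBoxManage snapshot "server2" take "pre-packstack"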

Synchronise with an NTP server

It is important that all nodes in the multi-node OpenStack deployment are synchronised with an NTP server. When we install multi-node OpenStack on VirtualBox with Packstack it will also configure an NTP server, but it is still a good idea to make sure the controller and compute nodes are in sync with an NTP server beforehand.

[root@server2 ~]# ntpdate -u pool.ntp.org
10 Nov 21:14:40 ntpdate[9857]: step time server 157.119.108.165 offset 278398.807924 sec
[root@server1 ~]# ntpdate -u pool.ntp.org
10 Nov 21:14:24 ntpdate[1978]: step time server 5.103.139.163 offset 2.537080 sec

Install Packstack

Packstack is a utility that uses Puppet modules to automatically deploy various parts of OpenStack on multiple pre-installed servers over SSH. We can install Packstack using yum:

[root@server1 ~]# yum -y install openstack-packstack

This installs the Packstack utility. Next, you can run packstack --help to see the list of options Packstack supports for installing multi-node OpenStack on VirtualBox.

Generate the answer file

To install multi-node OpenStack on VirtualBox using Packstack, a number of values have to be configured. To make life easier, Packstack supports the concept of an answer file, which contains all the variables required to configure and install multi-node OpenStack on VirtualBox.

Generate the answer file template:

[root@server1 ~]# packstack --gen-answer-file ~/answers.txt

Here I have created the answer file answers.txt under /root/. This answer file contains the default values for installing multi-node OpenStack on VirtualBox; Packstack also captures certain details from the local CentOS 7 controller node to fill in some of the values.

Modify the answer file to configure OpenStack

The generated answer file contains default values for all variables, which we must modify as required.

To install multi-node OpenStack on VirtualBox with Packstack in my environment, I have modified the following values (see the sed sketch after the list below):

CONFIG_DEFAULT_PASSWORD=password
CONFIG_SWIFT_INSTALL=y
CONFIG_HEAT_INSTALL=y
CONFIG_NTP_SERVERS=pool.ntp.org
CONFIG_KEYSTONE_ADMIN_PW=password
CONFIG_CINDER_VOLUMES_SIZE=9G
CONFIG_CINDER_VOLUME_NAME=cinder-volumes
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_HORIZON_SSL=y
CONFIG_HEAT_CFN_INSTALL=y
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_PROVISION_DEMO=n
  • These value changes are sufficient for a basic proof of concept of OpenStack. I have used simple passwords for OpenStack and Keystone; you can choose different ones.

  • Since I created a 10GB cinder-volumes volume group during the OS installation, I have set the cinder-volumes size to 9GB.

  • My CentOS 7 systems are connected to the Internet, so I am using pool.ntp.org to configure the NTP server.

  • As you can see, I have one controller (192.168.0.120) and one compute (192.168.0.121) node:

CONFIG_CONTROLLER_HOST=192.168.0.120
CONFIG_COMPUTE_HOSTS=192.168.0.121
CONFIG_NETWORK_HOSTS=192.168.0.120
CONFIG_STORAGE_HOST=192.168.0.120
CONFIG_SAHARA_HOST=192.168.0.120
CONFIG_AMQP_HOST=192.168.0.120
CONFIG_MARIADB_HOST=192.168.0.120
CONFIG_KEYSTONE_LDAP_URL=ldap://192.168.0.120
CONFIG_REDIS_HOST=192.168.0.120
  • Here I am using openvswitch as the Neutron backend, which is why I defined the network interfaces with the OVS variables:
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS
CONFIG_NEUTRON_OVS_BRIDGE_IFACES
CONFIG_NEUTRON_OVS_TUNNEL_IF
  • However, if you plan to use OVN as the Neutron backend, you must use the following variables instead of the OVS ones:
CONFIG_NEUTRON_OVN_BRIDGE_MAPPINGS
CONFIG_NEUTRON_OVN_BRIDGE_IFACES
CONFIG_NEUTRON_OVN_TUNNEL_IF
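Rather than editing every CONFIG_* line by hand, the same changes can be applied with sed; a minimal sketch for two of the keys, assuming the answer file is /root/answers.txt (repeat the pattern for the remaining keys):

[root@server1 ~]# sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.0.121/' /root/answers.txt
[root@server1 ~]# sed -i 's/^CONFIG_CINDER_VOLUMES_SIZE=.*/CONFIG_CINDER_VOLUMES_SIZE=9G/' /root/answers.txt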

Install multi-node OpenStack on VirtualBox using Packstack

Our answer file is now ready for the multi-node OpenStack deployment. Next, run Packstack to deploy OpenStack on the CentOS 7 servers:

Tip:

Depending on the resources available to the virtual machines, the complete run of this command takes around 15-20 minutes. Provide the root password of both CentOS 7 hosts to Packstack so it can set up passwordless (key-based) SSH authentication.

[root@server1 ~]# packstack --answer-file /root/answers.txt
Welcome to the Packstack setup utility
The installation log file is available at: /var/tmp/packstack/20191110-214111-UMIIGA/openstack-setup.log
Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
root@192.168.0.120's password:
root@192.168.0.121's password:
Setting up ssh keys                                  [ DONE ]
Preparing servers                                    [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Preparing pre-install entries                        [ DONE ]
Installing time synchronization via NTP              [ DONE ]
Setting up CACERT                                    [ DONE ]
Preparing AMQP entries                               [ DONE ]
Preparing MariaDB entries                            [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Preparing Keystone entries                           [ DONE ]
Preparing Glance entries                             [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Preparing Cinder entries                             [ DONE ]
Preparing Nova API entries                           [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Preparing Nova Compute entries                       [ DONE ]
Preparing Nova Scheduler entries                     [ DONE ]
Preparing Nova VNC Proxy entries                     [ DONE ]
Preparing OpenStack Network-related Nova entries     [ DONE ]
Preparing Nova Common entries                        [ DONE ]
Preparing Neutron LBaaS Agent entries                [ DONE ]
Preparing Neutron API entries                        [ DONE ]
Preparing Neutron L3 entries                         [ DONE ]
Preparing Neutron L2 Agent entries                   [ DONE ]
Preparing Neutron DHCP Agent entries                 [ DONE ]
Preparing Neutron Metering Agent entries             [ DONE ]
Checking if NetworkManager is enabled and running    [ DONE ]
Preparing OpenStack Client entries                   [ DONE ]
Preparing Horizon entries                            [ DONE ]
Preparing Heat entries                               [ DONE ]
Preparing Heat CloudFormation API entries            [ DONE ]
Preparing Gnocchi entries                            [ DONE ]
Preparing Redis entries                              [ DONE ]
Preparing Ceilometer entries                         [ DONE ]
Preparing Aodh entries                               [ DONE ]
Preparing Puppet manifests                           [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.168.0.120_controller.pp
192.168.0.120_controller.pp:                         [ DONE ]
Applying 192.168.0.120_network.pp
192.168.0.120_network.pp:                            [ DONE ]
Applying 192.168.0.121_compute.pp
192.168.0.121_compute.pp:                            [ DONE ]
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]
 **** Installation completed successfully ****
Additional information:
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.0.120. To use the command line tools you need to source the file.
 * NOTE : A certificate was generated to be used for ssl, You should change the ssl certificate configured in /etc/httpd/conf.d/ssl.conf on 192.168.0.120 to use a CA signed cert.
 * To access the OpenStack Dashboard browse to https://192.168.0.120/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * The installation log file is available at: /var/tmp/packstack/20191110-214111-UMIIGA/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20191110-214111-UMIIGA/manifests

Assuming we did not hit any issues during the multi-node OpenStack deployment on Oracle VirtualBox, at the end of the Packstack run we get the OpenStack dashboard (Horizon) URL and the location of the keystonerc file.

Errors observed during the multi-node OpenStack deployment

While trying to install multi-node OpenStack on VirtualBox with the Packstack utility, I also ran into a few issues.

Error 1: Could not evaluate: Cannot allocate memory - fork(2)

During my initial run, I got the following error while trying to install multi-node OpenStack on VirtualBox with Packstack:

ERROR : Error appeared during Puppet run: 192.168.0.120_controller.pp
Error: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Package[python-networking-ovn]: Could not evaluate: Cannot allocate memory - fork(2)

This is observed when the system can no longer create new processes, or has no free IDs left to assign to new processes.

Increase the value of kernel.pid_max to 65534:

[root@server1 ~]# echo kernel.pid_max = 65534 >> /etc/sysctl.conf
[root@server1 ~]# sysctl -p

Verify the new value:

[root@server1 ~]# sysctl -a | grep kernel.pid_max
kernel.pid_max = 65534

Set the maximum number of open files and the maximum number of processes to unlimited in /etc/security/limits.conf:

[root@server1 ~]# cat /etc/security/limits.conf
root  soft  nproc  unlimited
root  soft  nofile  unlimited

Error 2: Failed to apply catalog: Cannot allocate memory - fork(2)

Another error, similar to the one above; here the problem was that my CentOS 7 VM on the Windows 10 laptop was running with too little memory.

ERROR : Error appeared during Puppet run: 192.168.0.120_controller.pp
Error: Failed to apply catalog: Cannot allocate memory - fork(2)

Initially I had given this CentOS 7 VM only 4GB of memory, which is why the run failed. I later increased the memory of the CentOS 7 VM to 8GB. Note that, per the Red Hat recommendation, at least 16GB of memory is advised for setting up OpenStack in a virtual environment.

Error 3: Couldn't detect ipaddress of interface eth1 on node 192.168.0.120

I hit this error at the very beginning of the multi-node OpenStack deployment with Packstack on the CentOS 7 controller node.

Couldn't detect ipaddress of interface eth1 on node 192.168.0.120

The reason was that my eth1 did not have any IP address. The internal network on my Windows 10 laptop was not set up properly, so the eth1 interface did not receive a DHCP IP. It is important that eth1 has a valid IP address before starting Packstack to install multi-node OpenStack on VirtualBox.
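Before re-running Packstack it is worth confirming that eth1 actually has an address, and requesting a new DHCP lease if it does not (this assumes the VirtualBox DHCP server created earlier is reachable):

[root@server1 ~]# ip addr show eth1
[root@server1 ~]# dhclient -v eth1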

Verify the OpenStack installation

Next, a few checks must be performed to make sure we were able to successfully install multi-node OpenStack on VirtualBox using Packstack.

Verify the status of the OpenStack services on the controller (server1)

To make sure the multi-node OpenStack installation on VirtualBox succeeded, we should check the status of the OpenStack services. As we know, OpenStack is not a single service but a combination of many services, so we can either check each individual service status or simply use openstack-status.

By default this tool is not available on the nodes; we can get it by installing the openstack-utils rpm:

[root@server1 ~]# yum -y install openstack-utils

Next, check the OpenStack service status on the controller node:

[root@server1 ~]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
openstack-nova-console:                 inactive  (disabled on boot)
openstack-nova-consoleauth:             active
openstack-nova-xvpvncproxy:             inactive  (disabled on boot)
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     inactive  (disabled on boot)
== Horizon service ==
openstack-dashboard:                    301
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-openvswitch-agent:              active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         inactive  (disabled on boot)
openstack-ceilometer-notification:      active
== Heat services ==
openstack-heat-api:                     active
openstack-heat-api-cfn:                 active
openstack-heat-api-cloudwatch:          inactive  (disabled on boot)
openstack-heat-engine:                  active
== Support services ==
openvswitch:                            active
dbus:                                   active
target:                                 active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
Warning keystonerc not sourced
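The "Warning keystonerc not sourced" line at the end simply means the admin credentials have not been loaded into the shell. Sourcing the keystonerc_admin file created by Packstack (its path is shown in the installation summary above) makes the OpenStack command line usable; a quick check:

[root@server1 ~]# source /root/keystonerc_admin
[root@server1 ~]# openstack service list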

Verify the status of the OpenStack services on the compute node (server2)

Similarly, verify the status of nova compute on the compute node. Currently most of the services appear inactive, but that is fine; we will deal with them in later articles.

[root@server2 ~]# openstack-status
== Nova services ==
openstack-nova-api:                     inactive  (disabled on boot)
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               inactive  (disabled on boot)
== neutron services ==
neutron-server:                         inactive  (disabled on boot)
neutron-dhcp-agent:                     inactive  (disabled on boot)
neutron-l3-agent:                       inactive  (disabled on boot)
neutron-metadata-agent:                 inactive  (disabled on boot)
neutron-openvswitch-agent:              active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           inactive  (disabled on boot)
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         inactive  (disabled on boot)
== Support services ==
openvswitch:                            active
dbus:                                   active
Warning novarc not sourced

Verify the cinder-volumes volume group

Let's also look at the volume group details of the cinder-volumes VG; we can see that it is now used by OpenStack for Cinder storage:

[root@server1 ~]# vgs
  VG             #PV #LV #SN Attr   VSize   VFree
  centos           1   2   0 wz--n- <19.00g      0
  cinder-volumes   1   1   0 wz--n- <10.00g 484.00m

We now have a new logical volume (cinder-volumes-pool) created by the Packstack utility:

[root@server1 ~]# lvs
  LV                  VG             Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root                centos         -wi-ao---- <18.00g
  swap                centos         -wi-ao----   1.00g
  cinder-volumes-pool cinder-volumes twi-a-tz--   9.50g             0.00   10.58

Verify the network on server1 (controller)

We can now see that some additional configuration has been added on the Linux box, matching what we put into the Packstack answer file:

[root@server1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:f6:20:06 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.120/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef6:2006/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
    link/ether 08:00:27:4b:7a:80 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:fe4b:7a80/64 scope link
       valid_lft forever preferred_lft forever
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c2:aa:61:37:ec:1e brd ff:ff:ff:ff:ff:ff
7: br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 08:00:27:4b:7a:80 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.2/24 brd 10.10.10.255 scope global br-eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a8de:6fff:fead:b145/64 scope link
       valid_lft forever preferred_lft forever
8: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 36:b4:50:57:84:4e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::34b4:50ff:fe57:844e/64 scope link
       valid_lft forever preferred_lft forever
9: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ae:8c:6f:e8:19:4a brd ff:ff:ff:ff:ff:ff
10: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 4a:78:bc:22:b7:45 brd ff:ff:ff:ff:ff:ff

Here we do not see any IP address assigned to the br-ex interface; we will fix this in the next steps.

Verify the network on server2 (compute)

Similarly, check the network configuration on server2. Here the br-eth1 interface, which is needed for the bridged network between the controller and the compute node, has not been created either.

[root@server2 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:49:67:6f brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.121/24 brd 192.168.0.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe49:676f/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:51:81:1c brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.3/24 brd 10.10.10.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe51:811c/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 02:55:bb:65:92:fd brd ff:ff:ff:ff:ff:ff
5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2a:9d:8e:e6:37:41 brd ff:ff:ff:ff:ff:ff
6: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 8a:8a:57:05:f6:40 brd ff:ff:ff:ff:ff:ff
7: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether ca:3a:76:33:4b:a0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c83a:76ff:fe33:4ba0/64 scope link
       valid_lft forever preferred_lft forever

So we still need some more bridge configuration here.

Configure OpenStack networking on server1 (controller)

我的" eth0"接口映射到"/etc/sysconfig/network-scripts"(这是我连接到外部网络的主要接口)下的" ifcfg-eth0"配置文件。因此,我需要对此接口文件进行一些修改,并创建一个外部网桥网络接口配置文件。

[root@server1 ~]# cd /etc/sysconfig/network-scripts/

复制" ifcfg-eth0"的内容以在"/etc/sysconfig/network-scripts /"下创建一个新文件" ifcfg-br-ex"。

[root@server1 network-scripts]# cp ifcfg-eth0 ifcfg-br-ex

Next, take a backup of the ifcfg-eth0 configuration file, in case we make a mistake and need to roll something back:

[root@server1 network-scripts]# cp ifcfg-eth0 /tmp/ifcfg-eth0.bkp

Below is the final content of both configuration files; modify them according to your environment:

[root@server1 network-scripts]# cat ifcfg-br-ex
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.0.120
PREFIX=24
GATEWAY=192.168.0.1
DNS1=8.8.8.8
DEVICE=br-ex
ONBOOT=yes
PEERDNS=yes
USERCTL=yes
DEVICETYPE=ovs
[root@server1 network-scripts]# cat ifcfg-eth0
BOOTPROTO=none
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
DEVICETYPE=ovs
DEVICE=eth0

Next, reboot the server. If everything is configured correctly, after the reboot we should have the proper network configuration.

Verify the network addresses after the reboot:

[root@server1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
    link/ether 08:00:27:f6:20:06 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:fef6:2006/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
    link/ether 08:00:27:4b:7a:80 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:fe4b:7a80/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 02:06:46:ba:95:bb brd ff:ff:ff:ff:ff:ff
5: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether ce:56:11:ad:10:87 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::cc56:11ff:fead:1087/64 scope link
       valid_lft forever preferred_lft forever
6: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 4a:78:bc:22:b7:45 brd ff:ff:ff:ff:ff:ff
7: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ae:8c:6f:e8:19:4a brd ff:ff:ff:ff:ff:ff
8: br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 08:00:27:4b:7a:80 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.2/24 brd 10.10.10.255 scope global br-eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f1:57ff:fe5c:3346/64 scope link
       valid_lft forever preferred_lft forever
9: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 08:00:27:f6:20:06 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.120/24 brd 192.168.0.255 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::ac01:68ff:fea0:e46/64 scope link
       valid_lft forever preferred_lft forever

因此,现在我们的" br-ex"和" br-eth1"接口分别具有来自eth0和eth1接口的IP。

Check the default gateway:

[root@server1 network-scripts]# ip route show
default via 192.168.0.1 dev br-ex
10.10.10.0/24 dev br-eth1 proto kernel scope link src 10.10.10.2
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003
169.254.0.0/16 dev br-eth1 scope link metric 1016
169.254.0.0/16 dev br-ex scope link metric 1017
192.168.0.0/24 dev br-ex proto kernel scope link src 192.168.0.120
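We can also verify the same mapping from the Open vSwitch side; eth0 and eth1 should appear among the ports of br-ex and br-eth1 respectively (a quick check, output omitted here):

[root@server1 ~]# ovs-vsctl list-ports br-ex
[root@server1 ~]# ovs-vsctl list-ports br-eth1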

Configure OpenStack networking on server2 (compute)

To install multi-node OpenStack on VirtualBox, configure the internal bridge br-eth1 on server2:

[root@server2 network-scripts]# cat ifcfg-eth1
DEVICE=eth1
NAME=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-eth1
ONBOOT=yes
BOOTPROTO=none
[root@server2 network-scripts]# cat ifcfg-br-eth1
DEFROUTE=yes
NAME=eth1
ONBOOT=yes
DEVICE=br-eth1
DEVICETYPE=ovs
OVSBOOTPROTO=none
TYPE=OVSBridge
IPADDR=10.10.10.3
PREFIX=24

Next, reboot the server and verify the network on server2 after the reboot:

[root@server2 network-scripts]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:49:67:6f brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.121/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe49:676f/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
    link/ether 08:00:27:51:81:1c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:fe51:811c/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 6e:35:20:83:b4:75 brd ff:ff:ff:ff:ff:ff
5: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 8a:8a:57:05:f6:40 brd ff:ff:ff:ff:ff:ff
6: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether d6:af:1e:ca:b4:d2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d4af:1eff:feca:b4d2/64 scope link
       valid_lft forever preferred_lft forever
7: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2a:9d:8e:e6:37:41 brd ff:ff:ff:ff:ff:ff
9: br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 08:00:27:51:81:1c brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.3/24 brd 10.10.10.255 scope global br-eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::8070:56ff:fea5:348/64 scope link
       valid_lft forever preferred_lft forever
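As a final sanity check, the two nodes should be able to reach each other over both the internal and the bridged networks; a quick test from the compute node:

[root@server2 ~]# ping -c 3 10.10.10.2
[root@server2 ~]# ping -c 3 192.168.0.120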