How to Configure HAProxy in OpenStack (High Availability)


This article is the second part of my series; in the previous article I shared the steps to configure an OpenStack HA cluster using Pacemaker and Corosync.

In this article, I will share the steps to configure HAProxy in OpenStack and to move the critical endpoints behind the load balancer using a virtual IP.

Configuring HAProxy in OpenStack

To configure HAProxy in OpenStack, we will use HAProxy to load balance our control plane services in this lab deployment.
Some deployments may also implement Keepalived and run HAProxy in an active/active configuration.
For this deployment, we will run HAProxy active/passive and manage it as a resource in Pacemaker, together with our VIP.

First, install HAProxy on both nodes using the following commands:

Note:

On RHEL systems, you must have an active subscription to RHN, or you can configure a local offline repository from which the yum package manager can install the provided rpm packages and their dependencies.
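
If no subscription is available, a local repository can be defined. Below is a minimal sketch of such a repo file, assuming the rpm packages have been copied to a local directory such as /opt/localrepo (the path and repository name are examples only, and the directory must contain repository metadata generated with createrepo):

[root@controller1 ~]# cat /etc/yum.repos.d/local.repo
[localrepo]
name=Local offline repository
baseurl=file:///opt/localrepo
enabled=1
gpgcheck=0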

[root@controller1 ~]# yum install -y haproxy
[root@controller2 ~]# yum install -y haproxy

Verify the installation using the following commands:

[root@controller1 ~]# rpm -q haproxy
haproxy-1.5.18-7.el7.x86_64
[root@controller2 ~]# rpm -q haproxy
haproxy-1.5.18-7.el7.x86_64
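
Since HAProxy will later be started and stopped by Pacemaker rather than directly by systemd, it is common to leave the service disabled at boot. This is an optional extra step, not part of the original instructions:

[root@controller1 ~]# systemctl disable haproxy
[root@controller2 ~]# systemctl disable haproxy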

Next, we will create a configuration file for HAProxy to load balance the API services installed on both controllers.
Use the following example as a template, replacing the IP addresses in the example with the IP addresses of the two controllers and the IP address of the VIP that will be used to load balance the API services.

Note:

The IP address we plan to use for the VIP must be unused, i.e. not assigned to any host on the network.
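
A quick way to confirm that the planned VIP is not already in use is to ping it and make sure nothing answers (192.168.122.30 below is the VIP chosen later in this article; this is only a suggestion and will not catch hosts that block ICMP):

[root@controller1 ~]# ping -c 2 192.168.122.30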

Take a backup of the existing configuration file on both controller nodes:

[root@controller1 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bkp
[root@controller2 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bkp

The following example /etc/haproxy/haproxy.cfg will load balance Horizon in our environment:

[root@controller1 haproxy]# cat haproxy.cfg
global
  daemon
  group  haproxy
  maxconn  40000
  pidfile  /var/run/haproxy.pid
  user  haproxy
defaults
  log  127.0.0.1 local2 warning
  mode  tcp
  option  tcplog
  option  redispatch
  retries  3
  timeout  connect 10s
  timeout  client 60s
  timeout  server 60s
  timeout  check 10s
listen horizon
  bind 192.168.122.30:80
  mode http
  cookie SERVERID insert indirect nocache
  option tcplog
  timeout client 180s
  server controller1 192.168.122.20:80 cookie controller1 check inter 1s
  server controller2 192.168.122.22:80 cookie controller2 check inter 1s

In this example, the IP address of controller1 is 192.168.122.20 and the IP address of controller2 is 192.168.122.22.
The VIP we have chosen to use is 192.168.122.30.
Copy this file, substituting the IP addresses with the ones in your lab, to /etc/haproxy/haproxy.cfg on each of the controllers.

To configure HAProxy in OpenStack, we must copy this haproxy.cfg file to the second controller:

[root@controller1 ~]# scp /etc/haproxy/haproxy.cfg controller2:/etc/haproxy/haproxy.cfg
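
Before going any further, the syntax of the new configuration can be validated on both controllers; haproxy prints an error message and exits non-zero if the file is invalid (this check is an optional extra, not part of the original steps):

[root@controller1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
[root@controller2 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg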

For Horizon to respond to requests on the VIP, we need to add the VIP as a ServerAlias in the Apache virtual host configuration.
In our lab installation this is found in /etc/httpd/conf.d/15-horizon_vhost.conf.
Look for the following line on controller1:

ServerAlias 192.168.122.20

并在" controller2"上的行下方

ServerAlias 192.168.122.22

On both controllers, add an additional ServerAlias line with the VIP:

ServerAlias 192.168.122.30

We also need to tell Apache not to listen on the VIP so that HAProxy can bind to that address.
To do this, modify /etc/httpd/conf/ports.conf and specify the controller's IP address in addition to the port number.
The following is an example:

[root@controller1 ~]# cat /etc/httpd/conf/ports.conf
# ************************************
# Listen & NameVirtualHost resources in module puppetlabs-apache
# Managed by Puppet
# ************************************
Listen 0.0.0.0:8778
#Listen 35357
#Listen 5000
#Listen 80
Listen 8041
Listen 8042
Listen 8777
Listen 192.168.122.20:35357
Listen 192.168.122.20:5000
Listen 192.168.122.20:80

Here 192.168.122.20 is the IP address of controller1.

在" controller2"上,使用相应控制器节点的IP重复相同的操作

[root@controller2 ~(keystone_admin)]# cat /etc/httpd/conf/ports.conf
# ************************************
# Listen & NameVirtualHost resources in module puppetlabs-apache
# Managed by Puppet
# ************************************
Listen 0.0.0.0:8778
#Listen 35357
#Listen 5000
#Listen 80
Listen 8041
Listen 8042
Listen 8777
Listen 192.168.122.22:35357
Listen 192.168.122.22:5000
Listen 192.168.122.22:80
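
Optionally, the Apache configuration can be syntax-checked before the restart (an extra verification step, not part of the original instructions):

[root@controller1 ~]# apachectl configtest
[root@controller2 ~]# apachectl configtest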

Restart Apache to pick up the new aliases:

[root@controller1 ~]# systemctl restart httpd
[root@controller2 ~]# systemctl restart httpd

Next, add the VIP and the HAProxy service to the Pacemaker cluster as resources.
These commands should be run only on the first controller node.
They tell Pacemaker three things about the resource we want to add:

  • The first field (ocf in this case) is the standard the resource script conforms to and where to find it.

  • The second field (heartbeat in this case) is standard-specific; for OCF resources, it tells the cluster which OCF namespace the resource script is in.

  • The third field (IPaddr2 in this case) is the name of the resource script.

[root@controller1 ~]# pcs resource create VirtualIP IPaddr2 ip=192.168.122.30 cidr_netmask=24
Assumed agent name 'ocf:heartbeat:IPaddr2' (deduced from 'IPaddr2')
[root@controller1 ~]# pcs resource create HAProxy systemd:haproxy

Colocate the HAProxy service with the VirtualIP to ensure that the two run together:

[root@controller1 ~]# pcs constraint colocation add VirtualIP with HAProxy score=INFINITY
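
Optionally, an ordering constraint can also be added so that the VIP is brought up before HAProxy starts. This is a common addition in similar setups but is not part of the original steps:

[root@controller1 ~]# pcs constraint order VirtualIP then HAProxy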

Verify that the resources have been started in the cluster:

[root@controller1 ~]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller2 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 12:44:27 2016
Last change: Tue Oct 16 12:44:23 2016 by root via cibadmin on controller1
2 nodes configured
2 resources configured
Online: [ controller1 controller2 ]
Full list of resources:
 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

At this point, we should be able to access Horizon using the VIP we specified.
Traffic will flow from the client to HAProxy on the VIP, and then on to Apache on one of the two nodes.
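
A quick check from any machine that can reach the VIP is to request the dashboard on the VIP address. The /dashboard/ path below is an assumption based on a typical Packstack installation; adjust it to wherever Horizon is served in your deployment:

[root@controller1 ~]# curl -I http://192.168.122.30/dashboard/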

Additional API service configuration

Now that configuring HAProxy in OpenStack is done, the final configuration step is to move each of the OpenStack API endpoints behind the load balancer.
This is a three-step process, as follows:

  • Update the HAProxy configuration to include the service.

  • Move the endpoint in the Keystone service catalog to the VIP.

  • Reconfigure the services to point to the VIP instead of the IP of the first controller.

In the following example, we will move the Keystone service behind the load balancer.
This process can be followed for each of the API services.

First, add a section to the HAProxy configuration file for the admin and public endpoints of Keystone.
So we add the following template to the existing haproxy.cfg file on both controllers:

[root@controller1 ~]# vim /etc/haproxy/haproxy.cfg
listen keystone-admin
  bind 192.168.122.30:35357
  mode tcp
  option tcplog
  server controller1 192.168.122.20:35357 check inter 1s
  server controller2 192.168.122.22:35357 check inter 1s
listen keystone-public
  bind 192.168.122.30:5000
  mode tcp
  option tcplog
  server controller1 192.168.122.20:5000 check inter 1s
  server controller2 192.168.122.22:5000 check inter 1s

Restart the haproxy service on the active node:

[root@controller1 ~]# systemctl restart haproxy.service

We can determine the active node from the output of pcs status.
Check that HAProxy is now listening on ports 5000 and 35357 using the following commands on both controllers:

[root@controller1 ~]# curl http://192.168.122.30:5000
{"versions": {"values": [{"status": "stable", "updated": "2016-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:5000/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2015-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.122.30:5000/v2.0/", "rel": "self"}, {"href": "htt
[root@controller1 ~]# curl http://192.168.122.30:5000/v3
{"version": {"status": "stable", "updated": "2016-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:5000/v3/", "rel": "self"}]}}
[root@controller1 ~]# curl http://192.168.122.30:35357/v3
{"version": {"status": "stable", "updated": "2016-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:35357/v3/", "rel": "self"}]}}
[root@controller1 ~]# curl http://192.168.122.30:35357
{"versions": {"values": [{"status": "stable", "updated": "2016-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.122.30:35357/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2015-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.122.30:35357/v2.0/", "rel": "self"}, {"href": "https://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}

All of the above commands should output some JSON describing the state of the Keystone service.
So all of the ports are in the listening state.
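
On the active node, it can also be confirmed directly that HAProxy owns the listening sockets (an alternative check, not from the original article):

[root@controller1 ~]# ss -tlnp | grep haproxy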

接下来,通过"创建新端点并删除旧端点"来更新Keystone服务目录中身份服务的端点。
因此,我们可以获取现有的keystonerc_admin文件

[root@controller1 ~(keystone_admin)]# source keystonerc_admin

Below is the content of my keystonerc_admin file:

[root@controller1 ~(keystone_admin)]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
    export OS_USERNAME=admin
    export OS_PASSWORD='redhat'
    export OS_AUTH_URL=http://192.168.122.20:5000/v3
    export PS1='[\u@\h \W(keystone_admin)]$'
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

As we can see, OS_AUTH_URL currently reflects the existing endpoint on the controller.
We will update it in a moment.

Get the list of the current Keystone endpoints from the active controller:

[root@controller1 ~(keystone_admin)]# openstack endpoint list | grep keystone
| 3ded2a2faffe4fd485f6c3c58b1990d6 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.20:5000/v3                 |
| b0f5b7887cd346b3aec747e5b9fafcd3 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.20:35357/v3                |
| c1380d643f734cc1b585048b2e7a7d47 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.20:5000/v3                 |

Now, since we want to move the endpoints of the Keystone service to the VIP, we will create new endpoints using the VIP address as shown below, for the admin, public, and internal interfaces.

[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity public http://192.168.122.30:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 08a26ace08884b85a0ff869ddb20bea3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:5000/v3    |
+--------------+----------------------------------+
[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity admin http://192.168.122.30:35357/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ef210afef1da4558abdc00cc13b75185 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:35357/v3   |
+--------------+----------------------------------+
[root@controller1 ~(keystone_admin)]# openstack endpoint create --region RegionOne identity internal http://192.168.122.30:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5205be865e2a4cb9b4ab2119b93c7461 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 555154c5facf4e96a8677362c62b2ac9 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.122.30:5000/v3    |
+--------------+----------------------------------+

Finally, update the auth_uri, auth_url, and identity_uri parameters in each of the OpenStack services to point to the new IP address.
The following configuration files will need to be edited:

/etc/ceilometer/ceilometer.conf
/etc/cinder/api-paste.ini
/etc/glance/glance-api.conf
/etc/glance/glance-registry.conf
/etc/neutron/neutron.conf
/etc/neutron/api-paste.ini
/etc/nova/nova.conf
/etc/swift/proxy-server.conf

接下来安装" openstack-utils"以获取可以帮助我们立即重新启动所有服务的openstack工具,而不是手动重新启动所有与openstack相关的服务

[root@controller1 ~(keystone_admin)]# yum -y install openstack-utils
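
The same package also provides the openstack-config helper, which can be used to script the edits instead of opening each file by hand. The sketch below updates a single file; the keystone_authtoken section and the option names are assumptions that should be checked against the actual contents of each configuration file before use:

[root@controller1 ~(keystone_admin)]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://192.168.122.30:5000/v3
[root@controller1 ~(keystone_admin)]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://192.168.122.30:35357/v3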

After editing each of the files, restart the OpenStack services on all the nodes in the lab deployment using the following command:

[root@controller1 ~(keystone_admin)]# openstack-service restart

Next, update the keystonerc_admin file so that OS_AUTH_URL points to the VIP, i.e. 192.168.122.30:5000/v3, as shown below:

[root@controller1 ~(keystone_admin)]# cat keystonerc_admin
unset OS_SERVICE_TOKEN
    export OS_USERNAME=admin
    export OS_PASSWORD='redhat'
    export OS_AUTH_URL=http://192.168.122.30:5000/v3
    export PS1='[\u@\h \W(keystone_admin)]$'
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

Now source the updated keystonerc_admin file again:

[root@controller1 ~(keystone_admin)]# source keystonerc_admin

Verify the change by checking that OS_AUTH_URL now points to the new VIP:

[root@controller1 ~(keystone_admin)]# echo $OS_AUTH_URL
http://192.168.122.30:5000/v3

Once the OpenStack services have been restarted, delete the old endpoints of the Keystone service:

[root@controller1 ~(keystone_admin)]# openstack endpoint delete b0f5b7887cd346b3aec747e5b9fafcd3
[root@controller1 ~(keystone_admin)]# openstack endpoint delete c1380d643f734cc1b585048b2e7a7d47

Note:

While trying to delete the old endpoints we may hit the error below. This is most likely because the Keystone database has not been refreshed properly yet, so perform another round of openstack-service restart and then retry deleting the endpoint:

[root@controller1 ~(keystone_admin)]# openstack endpoint delete 3ded2a2faffe4fd485f6c3c58b1990d6
Failed to delete endpoint with ID '3ded2a2faffe4fd485f6c3c58b1990d6': More than one endpoint exists with the name '3ded2a2faffe4fd485f6c3c58b1990d6'.
1 of 1 endpoints failed to delete.
[root@controller1 ~(keystone_admin)]# openstack endpoint list | grep 3ded2a2faffe4fd485f6c3c58b1990d6
| 3ded2a2faffe4fd485f6c3c58b1990d6 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.20:5000/v3                 |
[root@controller1 ~(keystone_admin)]# openstack-service restart
[root@controller1 ~(keystone_admin)]# openstack endpoint delete 3ded2a2faffe4fd485f6c3c58b1990d6

Repeat the same set of steps on controller2.

After deleting the old endpoints and creating the new ones, below is the updated list of Keystone endpoints as seen from controller2:

[root@controller2 ~(keystone_admin)]# openstack endpoint list | grep keystone
| 07fca3f48dba47cdbf6528909bd2a8e3 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.30:5000/v3                 |
| 37db43efa2934ce3ab93ea19df8adcc7 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.30:5000/v3                 |
| e9da6923b7ff418ab7e30ef65af5c152 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.30:35357/v3                |

Now the OpenStack services will use the Keystone API endpoint provided via the VIP, and the service will be highly available.
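
As a final sanity check, requesting a token through the new OS_AUTH_URL confirms that authentication works through the VIP (an extra verification, not part of the original steps):

[root@controller1 ~(keystone_admin)]# openstack token issue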

Performing a cluster failover

Since the end goal of all of this is high availability, we should test a failover of our new resources.

Before performing the failover, let us make sure the cluster is healthy and running:

[root@controller2 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 14:54:45 2016
Last change: Tue Oct 16 12:44:23 2016 by root via cibadmin on controller1
2 nodes configured
2 resources configured
Online: [ controller1 controller2 ]
Full list of resources:
 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

We can see that both of our controllers are online, so let us stop the second controller:

[root@controller2 ~(keystone_admin)]# pcs cluster stop controller2
Stopping Cluster (pacemaker)...
Stopping Cluster (corosync)...

现在让我们尝试从" controller2"检查起搏器状态

[root@controller2 ~(keystone_admin)]# pcs status
Error: cluster is not currently running on this node

Since the cluster services are not running on controller2, we cannot check the status from there.
So let us get the status from controller1:

[root@controller1 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 13:21:32 2016
Last change: Tue Oct 16 12:44:23 2016 by root via cibadmin on controller1
2 nodes configured
2 resources configured
Online: [ controller1 ]
OFFLINE: [ controller2 ]
Full list of resources:
 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

As expected, it shows that controller2 is offline.
Now let us check whether the Keystone endpoints are still readable:

[root@controller2 ~(keystone_admin)]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                           |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+
| 06473a06f4a04edc94314a97b29d5395 | RegionOne | cinderv3     | volumev3     | True    | internal  | http://192.168.122.20:8776/v3/%(tenant_id)s   |
| 07ad2939b59b4f4892d6a470a25daaf9 | RegionOne | aodh         | alarming     | True    | public    | http://192.168.122.20:8042                    |
| 07fca3f48dba47cdbf6528909bd2a8e3 | RegionOne | keystone     | identity     | True    | public    | http://192.168.122.30:5000/v3                 |
| 0856cd4b276f490ca48c772af2be49a3 | RegionOne | gnocchi      | metric       | True    | internal  | http://192.168.122.20:8041                    |
| 08ff114d526e4917b5849c0080cfa8f2 | RegionOne | aodh         | alarming     | True    | admin     | http://192.168.122.20:8042                    |
| 1e6cf514c885436fb14ffec0d55286c6 | RegionOne | aodh         | alarming     | True    | internal  | http://192.168.122.20:8042                    |
| 20168fdd0a064b5fa91b869ab492d2d1 | RegionOne | cinderv2     | volumev2     | True    | internal  | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 3524908122a44d7f855fd09dd2859d4e | RegionOne | nova         | compute      | True    | public    | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 37db43efa2934ce3ab93ea19df8adcc7 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.122.30:5000/v3                 |
| 3a896bde051f4ae4bfa3694a1eb05321 | RegionOne | cinderv2     | volumev2     | True    | admin     | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 3ef1f30aab8646bc96c274a116120e66 | RegionOne | nova         | compute      | True    | admin     | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 42a690ef05aa42adbf9ac21056a9d4f3 | RegionOne | nova         | compute      | True    | internal  | http://192.168.122.20:8774/v2.1/%(tenant_id)s |
| 45fea850b0b34f7ca2443da17e82ca13 | RegionOne | glance       | image        | True    | admin     | http://192.168.122.20:9292                    |
| 46cbd1e0a79545dfac83eeb429e24a6c | RegionOne | cinderv2     | volumev2     | True    | public    | http://192.168.122.20:8776/v2/%(tenant_id)s   |
| 49f82b77105e4614b7cf57fe1785bdc3 | RegionOne | cinder       | volume       | True    | internal  | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| 4aced9a3c17741608b2491a8a8fb7503 | RegionOne | cinder       | volume       | True    | public    | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| 63eeaa5246f54c289881ade0686dc9bb | RegionOne | ceilometer   | metering     | True    | admin     | http://192.168.122.20:8777                    |
| 6e2fd583487846e6aab7cac4c001064c | RegionOne | gnocchi      | metric       | True    | public    | http://192.168.122.20:8041                    |
| 79f2fcdff7d740549846a9328f8aa993 | RegionOne | cinderv3     | volumev3     | True    | public    | http://192.168.122.20:8776/v3/%(tenant_id)s   |
| 9730a44676b042e1a9f087137ea52d04 | RegionOne | glance       | image        | True    | public    | http://192.168.122.20:9292                    |
| a028329f053841dfb115e93c7740d65c | RegionOne | neutron      | network      | True    | internal  | http://192.168.122.20:9696                    |
| acc7ff6d8f1941318ab4f456cac5e316 | RegionOne | placement    | placement    | True    | public    | http://192.168.122.20:8778/placement          |
| afecd931e6dc42e8aa1abdba44fec622 | RegionOne | glance       | image        | True    | internal  | http://192.168.122.20:9292                    |
| c08c1cfb0f524944abba81c42e606678 | RegionOne | placement    | placement    | True    | admin     | http://192.168.122.20:8778/placement          |
| c0c0c4e8265e4592942bcfa409068721 | RegionOne | placement    | placement    | True    | internal  | http://192.168.122.20:8778/placement          |
| d9f34d36bd2541b98caa0d6ab74ba336 | RegionOne | cinder       | volume       | True    | admin     | http://192.168.122.20:8776/v1/%(tenant_id)s   |
| e051cee0d06e45d48498b0af24eb08b5 | RegionOne | ceilometer   | metering     | True    | public    | http://192.168.122.20:8777                    |
| e9da6923b7ff418ab7e30ef65af5c152 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.122.30:35357/v3                |
| ea6f1493aa134b6f9822eca447dfd1df | RegionOne | neutron      | network      | True    | admin     | http://192.168.122.20:9696                    |
| ed97856952bb4a3f953ff467d61e9c6a | RegionOne | gnocchi      | metric       | True    | admin     | http://192.168.122.20:8041                    |
| f989d76263364f07becb638fdb5fea6c | RegionOne | neutron      | network      | True    | public    | http://192.168.122.20:9696                    |
| fe32d323287c4a0cb221faafb35141f8 | RegionOne | ceilometer   | metering     | True    | internal  | http://192.168.122.20:8777                    |
| fef852af4f0d4f0cacd4620e5d5245c2 | RegionOne | cinderv3     | volumev3     | True    | admin     | http://192.168.122.20:8776/v3/%(tenant_id)s   |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------------+

Yes, we are still able to read the Keystone endpoint list, so everything looks good.

让我们再次在" controller2"上启动集群配置。

[root@controller2 ~(keystone_admin)]# pcs cluster start
Starting Cluster...

And check the status:

[root@controller2 ~(keystone_admin)]# pcs status
Cluster name: openstack
Stack: corosync
Current DC: controller1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Oct 16 13:23:17 2016
Last change: Tue Oct 16 12:44:23 2016 by root via cibadmin on controller1
2 nodes configured
2 resources configured
Online: [ controller1 controller2 ]
Full list of resources:
 VirtualIP      (ocf::heartbeat:IPaddr2):       Started controller1
 HAProxy        (systemd:haproxy):      Started controller1
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
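
Note that in this test the resources were already running on controller1, so stopping controller2 did not actually move anything. Now that both nodes are online again, an actual failover of the VIP and HAProxy can be exercised the same way, by stopping the cluster on the node that currently hosts them (a suggested extra test, not part of the original procedure):

[root@controller1 ~(keystone_admin)]# pcs cluster stop controller1

Then check pcs status from controller2 to confirm that VirtualIP and HAProxy have started there, and finally bring controller1 back:

[root@controller1 ~(keystone_admin)]# pcs cluster start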

With that, everything is back to green, and we have successfully configured HAProxy in OpenStack.

Lastly, I hope the steps in this article to configure HAProxy in OpenStack (high availability between controllers) were helpful.

因此,请在"评论"部分告诉我建议和反馈。