
OpenStack HA Environment Setup (Part 2): Building the HA Environment

2016-06-08 09:01:22    Source: 行者无疆

1. High-availability plan

1.1 Network

Subnet layout:

Management network: 10.192.44.148~10.192.44.151 (cabled; used for back-end logins and the endpoints of the OpenStack components)

External network: 10.192.45.211-10.192.45.220; a whole external range is needed here so that VMs can bind floating IPs

VM network: 192.168.0.0/24 (needs a physical NIC, but no cable)

Storage cluster network (cluster IP): currently shared with the management network; a dedicated segment would be better in a real deployment

Valid IPs obtained by Engineer Zhang:

IPs in the lab's 10.192.44.0/23 range are managed and cannot be set arbitrarily

Currently assigned: 10.192.44.148-10.192.44.151, used to access the 4 nodes

Newly requested: 10.192.45.211-10.192.45.220

Engineer Li, please update the lab configuration to use the requested IPs

To be updated later!

IP list:

| Hostname | Mgmt NIC / IP | External NIC (br-ex) | VM network (ovs br-int) | Storage public network (shares mgmt NIC) | Storage cluster network (shares mgmt NIC) |
| Controller1 (+network1+compute_m1) | Eth0: 10.192.44.148 | Eth1: 10.192.45.211 | Eth2: 192.168.0.148 | Eth0: 10.192.44.148 | Eth0: 10.192.44.148 |
| Controller2 (+network2+compute_m2) | Eth0: 10.192.44.150 | Eth1: 10.192.45.212 | Eth2: 192.168.0.150 | Eth0: 10.192.44.150 | Eth0: 10.192.44.150 |
| Compute1 | Eth0: 10.192.44.149 | - | Eth2: 192.168.0.149 | Eth0: 10.192.44.149 | Eth0: 10.192.44.149 |
| Compute2 | Eth1: 10.192.44.151 | - | Eth2: 192.168.0.151 | Eth0: 10.192.44.151 | Eth0: 10.192.44.151 |

VIPs: 10.192.45.220 (service VIP), 10.192.45.219 (Horizon VIP)

Because the 150 machine has to be reinstalled on the day HA is set up, 151 is used as controller2 for now to speed things up.

NIC requirements:

Controller nodes:

At least 3 NICs:

Management NIC: used by the OpenStack endpoints and for back-end operations; must be cabled

External NIC: bound to br-ex; must be cabled

Tunnel NIC: br-int/br-tun; may be left uncabled, but needs an IP configured

Compute nodes:

At least 2 NICs:

Management NIC: used by the OpenStack endpoints and for back-end operations; must be cabled

Tunnel NIC: br-int/br-tun; may be left uncabled, but needs an IP configured
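On CentOS 7 (which the el7 packages used later imply), giving the tunnel NIC a static address with no gateway is normally done through a network-script file. A minimal sketch for controller1; the device name and address follow the plan above, but treat the exact file as an assumption:

```
# /etc/sysconfig/network-scripts/ifcfg-eth2 -- hypothetical example (controller1)
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.148
NETMASK=255.255.255.0
```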

1.2 OpenStack services

Every node also acts as a compute node.

| Node | IP | Base services | OpenStack services | Ceph mon | Ceph OSD | HA |
| controller1 (+net1+com_m1) | 10.192.44.148 | rabbitmq, mysql, httpd, libvirtd, qemu | Keystone, Glance, Horizon, Nova, Neutron, Cinder | Y | Y | haproxy + keepalived |
| controller2 (+net2+com_m2) | 10.192.44.151 | (same as controller1) | (same as controller1) | Y | Y | haproxy + keepalived |
| compute1 | 10.192.44.149 | libvirtd, qemu | nova-compute, neutron-openvswitch-agent, cinder-volume | Y | Y | - |
| compute2 | 10.192.44.150 | (same as compute1) | (same as compute1) | Y | Y | - |

1.3 Ceph

The system disk still has spare space (about 90 GB); use it as Ceph journal space, carved into partitions hda5~8.

| Hostname | Ceph mon | Ceph journal | Ceph OSD |
| Node1 | mon0 | /dev/hda5~8 | osd.0~osd.3: sdb1/sdc1/sdd1/sde1 |
| Node2 | mon1 | /dev/hda5~8 | osd.4~osd.7: sdb1/sdc1/sdd1/sde1 |
| Node3 | mon2 | /dev/hda5~8 | osd.8~osd.11: sdb1/sdc1/sdd1/sde1 |
| Node4 | mon3 | /dev/hda5~8 | osd.12~osd.15: sdb1/sdc1/sdd1/sde1 |

RBD pools:

| Service | RBD pool | PG num |
| Glance | images | 128 |
| Cinder | volumes | 128 |
| Nova | vms | 128 |

Note: partition the disks beforehand. Otherwise ceph-deploy partitions each sdX itself during installation, and the sdX1 it creates is only 5 GB.
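For context on the PG counts in the table above: a common rule of thumb sizes pg_num as (OSDs × 100) / replica count, rounded up to a power of two. A small sketch of that arithmetic; the 100-PGs-per-OSD target is a general guideline and not from these notes, and the result is a total budget to split across pools:

```shell
# Rule-of-thumb pg_num: (osds * 100 / replicas), rounded up to a power of 2.
pg_num() {
  local osds=$1 replicas=$2
  local target=$(( osds * 100 / replicas ))
  local pg=1
  while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}

pg_num 16 3   # 16 OSDs, 3 replicas -> prints 1024 (a total to divide among pools)
```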

1.4 Other notes

1.4.1 Do not deploy ceilometer or swift

The ceilometer and swift services are not deployed for now; they can be added later as needed.

1.4.2 Replace libdevmapper.so.1.02 with the official version

1.4.3 MySQL is currently reinstalled: verify how to start it without reinstalling, and how to cluster it

Worth trying whether HA works without wsrep; according to posts online, MySQL must show wsrep at login for HA to work:

[root@lxp-node1 ~(keystone_admin)]# mysql -uroot -pf478ed694b4d4c45

Welcome to the MariaDB monitor. Commands end with ; or \g.

Your MariaDB connection id is 23

Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

Confirm as early as possible whether the Beijing bundled version supports clustering.

1.4.4 The external network needs a pre-allocated resource pool

These IPs are used to bind floating IPs after VMs are created: one external IP must be allocated per planned VM. The range currently carved out during deployment has not been formally requested.

1.5 Follow-up automated deployment

Nodes:

Single node: use LVM as the backend

>= 2 nodes: use Ceph as the backend; all nodes act as both Ceph and compute nodes, journals use hdaX, and every disk except the system disk becomes an OSD

Configuration + functionality
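The node-count rule above (1 node → LVM, 2 or more → Ceph) can be sketched as a tiny helper; the function name is hypothetical:

```shell
# Pick the storage backend from the node count: 1 node -> lvm, 2+ -> ceph.
select_backend() {
  if [ "$1" -ge 2 ]; then echo ceph; else echo lvm; fi
}

select_backend 1   # prints lvm
select_backend 4   # prints ceph
```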

2. Replace MySQL with the mariadb-galera version and rebuild the databases

2.1 Verify on a VM: dropping databases, exporting and importing data

2.1.1 Remove the StorOS bundled database

Databases bundled with StorOS:

[root@localhost etc]# rpm -aq |grep maria

mariadb-libs-5.5.35-3.el7.x86_64

mariadb-5.5.35-3.el7.x86_64

mariadb-server-5.5.35-3.el7.x86_64

mariadb-devel-5.5.35-3.el7.x86_64

Remove them:

# rpm -e --nodeps mariadb-libs mariadb mariadb-server mariadb-devel

[root@localhost etc]# rpm -aq |grep maria

[root@localhost etc]#

Clean up leftover files:

# rm /usr/share/mysql/ -rf

2.1.2 Install the non-galera database

yum install mariadb mariadb-server MySQL-python

[root@localhost yum.repos.d]# rpm -aq |grep maria

mariadb-libs-5.5.44-2.el7.centos.x86_64

mariadb-server-5.5.44-2.el7.centos.x86_64

mariadb-5.5.44-2.el7.centos.x86_64

Start it:

# systemctl enable mariadb.service

# systemctl start mariadb.service

2.1.3 Create the database tables

Set a password:

# mysql_secure_installation

Set the root password to 1 and answer Y to everything else.

# mysql -uroot -p1

Welcome to the MariaDB monitor. Commands end with ; or \g.

Your MariaDB connection id is 10

Server version: 5.5.44-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Create the nova database:

mysql -u root -p

CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '1';

FLUSH PRIVILEGES;

MariaDB [(none)]> show databases;

+--------------------+

| Database |

+--------------------+

| information_schema |

| mysql |

| nova |

| performance_schema |

+--------------------+

4 rows in set (0.01 sec)

Only nova is new here; the others are built-in.

Create some tables in nova:

MariaDB [(none)]> use nova;

MariaDB [nova]> create table compute(id int(4), name char(20));

MariaDB [nova]> insert into compute values(1, 'compute1');

MariaDB [nova]> select * from compute;

+------+----------+

| id| name |

+------+----------+

|1 | compute1 |

+------+----------+

1 row in set (0.00 sec)

2.1.4 Export the database

mysqldump -uroot -p1 nova > nova.sql

# ls -l nova.sql

-rw-r--r-- 1 root root 1857 May 30 14:58 nova.sql

2.1.6 Remove the non-galera database

[root@localhost ~]# rpm -aq |grep maria

mariadb-libs-5.5.44-2.el7.centos.x86_64

mariadb-server-5.5.44-2.el7.centos.x86_64

mariadb-5.5.44-2.el7.centos.x86_64

# rpm -e --nodeps mariadb-libs mariadb-server mariadb

[root@localhost ~]# cd /usr

[root@localhost usr]# find ./ -name mysql

[root@localhost usr]# cd /var/

[root@localhost var]# find ./ -name mysql

./lib/mysql

./lib/mysql/mysql

[root@localhost var]# rm lib/mysql/ -rf

2.1.7 Install the galera version of the database

#yum install mariadb-galera-server galera

[root@localhost var]# rpm -aq |grep maria

mariadb-galera-common-5.5.40-3.el7.x86_64

mariadb-libs-5.5.44-2.el7.centos.x86_64

mariadb-5.5.44-2.el7.centos.x86_64

mariadb-galera-server-5.5.40-3.el7.x86_64

[root@localhost var]# rpm -aq |grep galera

mariadb-galera-common-5.5.40-3.el7.x86_64

galera-25.3.5-7.el7.x86_64

mariadb-galera-server-5.5.40-3.el7.x86_64

# systemctl enable mariadb.service

# systemctl start mariadb.service

Set the password:

# mysql_secure_installation

Set the root password to 1 and answer Y to everything else.

2.1.8 Import the data

Import nova.sql.

First create the nova database:

CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '1';

FLUSH PRIVILEGES;

Then import the nova database:

mysql -uroot -p1 nova < /root/nova.sql

Check:

MariaDB [(none)]> show databases;

+--------------------+

| Database |

+--------------------+

| information_schema |

| mysql |

| nova |

| performance_schema |

+--------------------+

4 rows in set (0.01 sec)

MariaDB [(none)]> use nova;

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

Database changed

MariaDB [nova]> show tables;

+----------------+

| Tables_in_nova |

+----------------+

| compute |

+----------------+

1 row in set (0.00 sec)

MariaDB [nova]> select * from compute;

+------+----------+

| id| name |

+------+----------+

|1 | compute1 |

+------+----------+

1 row in set (0.00 sec)

OK, all the data is intact.

2.2 Export all databases on 148 (controller1)

MariaDB [(none)]> show databases;

+--------------------+

| Database |

+--------------------+

| information_schema |

| cinder |

| glance |

| keystone |

| mysql |

| neutron |

| nova |

| performance_schema |

+--------------------+

8 rows in set (0.01 sec)

mysqldump -uroot -p1 nova > nova.sql
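Only the nova dump is shown, but section 2.5 imports glance.sql, keystone.sql, neutron.sql, and cinder.sql as well, so every database needs its own dump. A sketch of a loop over all of them (assuming the same root password 1 used throughout these notes); it only prints the commands so they can be reviewed before running:

```shell
# Print (dry run) one mysqldump command per OpenStack database.
for db in keystone glance nova neutron cinder; do
  echo "mysqldump -uroot -p1 $db > $db.sql"
done
```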

2.3 Remove the old non-galera version on 148 (controller1)

[root@controller1 ~]# rpm -aq |grep maria

mariadb-server-5.5.44-2.el7.centos.x86_64

mariadb-5.5.44-2.el7.centos.x86_64

mariadb-libs-5.5.44-2.el7.centos.x86_64

# rpm -e --nodeps mariadb-server mariadb mariadb-libs

[root@controller1 var]# find ./ -name mysql

./lib/mysql

./lib/mysql/mysql

[root@controller1 var]# rm lib/mysql -rf

2.4 Install the mariadb-galera version

yum install mariadb-galera-server galera

[root@controller1 usr]# rpm -aq |grep maria

mariadb-libs-5.5.44-2.el7.centos.x86_64

mariadb-galera-common-5.5.40-3.el7.x86_64

mariadb-galera-server-5.5.40-3.el7.x86_64

mariadb-5.5.44-2.el7.centos.x86_64

[root@controller1 usr]# rpm -aq |grep galera

galera-25.3.5-7.el7.x86_64

mariadb-galera-common-5.5.40-3.el7.x86_64

mariadb-galera-server-5.5.40-3.el7.x86_64

2.5 Recreate all databases and import the data

# systemctl enable mariadb.service

# systemctl restart mariadb.service

# mysql_secure_installation

Set the root password to 1 and answer Y to everything else.

CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '1';

CREATE DATABASE glance;

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '1';

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '1';

CREATE DATABASE cinder;

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '1';

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '1';

Import the data:

[root@controller1 sqls]# mysql -uroot -p1 nova < nova.sql

[root@controller1 sqls]# mysql -uroot -p1 cinder < cinder.sql

[root@controller1 sqls]# mysql -uroot -p1 glance < glance.sql

[root@controller1 sqls]# mysql -uroot -p1 keystone < keystone.sql

[root@controller1 sqls]# mysql -uroot -p1 neutron < neutron.sql

Restart:

# systemctl restart mariadb.service

Then reboot the server.

2.6 Re-verify functionality

Verification passed.

3. Database engine: InnoDB or MyISAM?

3.1 What engines does MariaDB offer?

(1) MyISAM

MyISAM is MySQL's default storage engine. It supports neither transactions nor foreign keys, but it is fast to access and suits workloads with no transactional-integrity requirements.

(2) InnoDB

InnoDB provides transaction safety with commit, rollback, and crash recovery. Compared with MyISAM it writes more slowly and uses more disk space to retain data and indexes.

(3) MEMORY (HEAP)

MEMORY is effectively the replacement for HEAP. Tables are built from content held in memory; each MEMORY table corresponds to only one file on disk (containing just the table definition).

(4) MERGE

A MERGE table is a collection of MyISAM tables that must all have identical structure. The MERGE table itself holds no data and behaves like a view: queries, updates, and deletes against it operate on the underlying MyISAM tables. It may look like table partitioning, but it is an entirely different mechanism.

3.2 How to choose? What engine does Galera require?

Some advice from the official docs:

XtraDB and InnoDB: a good general transaction storage engine, and usually the best choice if unsure.

MyISAM and Aria have a small footprint and allow for easy copying between systems. MyISAM is MySQL's oldest storage engine, while Aria is MariaDB's more modern improvement.

MariaDB Galera Cluster is a synchronous multi-master cluster for MariaDB. It only supports the XtraDB/InnoDB storage engines (although there is experimental support for MyISAM; see the wsrep_replicate_myisam system variable).

3.3 Confirmation with Shi Lei

If StorOS's default engine were not InnoDB, the engine used by the cluster would need to be verified. Confirmed with Shi Lei: the engine is InnoDB, so no further verification is needed here.

lixiangping (李祥平) 06-01 09:55:43

May I ask which engine your mariadb currently uses?

lixiangping (李祥平) 06-01 09:56:25

We need to settle this on our side too

shileibjbn1 (石磊bjbn1) 06-01 09:56:58

InnoDB

lixiangping (李祥平) 06-01 09:57:16

OK, then there's no problem
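The default engine can also be confirmed directly at the MariaDB prompt; a quick check (not from the original notes):

```sql
-- Show the configured default engine and which engines this build supports.
SHOW VARIABLES LIKE 'default_storage_engine';
SHOW ENGINES;
```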

4. Rebuild Ceph: ensure HEALTH_OK

Plan: use hda5~8 on the SSD as the journal disks

Use the SATA disks as data disks

Ceph uses journals to improve performance and guarantee data consistency. Using fast SSDs as OSD journal disks to boost cluster performance is the most common way SSDs are applied in a Ceph environment.

4.1 Uninstall and clean up the current Ceph

First delete the existing volumes, VMs, and images.

Uninstall the installed Ceph:

# ceph-deploy purge controller1 compute1 controller2 compute2

Clean up directories:

# find /usr/ -name ceph |xargs rm -rf

# find /var/ -name ceph |xargs rm -rf

4.2 Install ceph and ceph-deploy

yum install ceph -y

yum install ceph-deploy -y

yum install yum-plugin-priorities -y

yum install snappy leveldb gdisk python-argparse gperftools-libs -y

systemctl stop firewalld.service; systemctl disable firewalld.service

4.3 Disk partitioning

Add sdb1 first; more OSDs are added afterwards.

hda5~8 serve as the journals.

| OSD | Hostname | Data partition | Journal |
| 0 | Controller1 | /dev/sdb1 | /dev/hda5 |
| 1 | Compute1 | /dev/sdb1 | /dev/hda5 |
| 2 | Controller2 | /dev/sdb1 | /dev/hda5 |
| 3 | Compute2 | /dev/sdb1 | /dev/hda5 |
| 4 | Controller1 | /dev/sdc1 | /dev/hda6 |
| 5 | Compute1 | /dev/sdc1 | /dev/hda6 |
| 6 | Controller2 | /dev/sdc1 | /dev/hda6 |
| 7 | Compute2 | /dev/sdc1 | /dev/hda6 |
| 8 | Controller1 | /dev/sdd1 | /dev/hda7 |
| 9 | Compute1 | /dev/sdd1 | /dev/hda7 |
| 10 | Controller2 | /dev/sdd1 | /dev/hda7 |
| 11 | Compute2 | /dev/sdd1 | /dev/hda7 |
| 12 | Controller1 | /dev/sde1 | /dev/hda8 |
| 13 | Compute1 | /dev/sde1 | /dev/hda8 |
| 14 | Controller2 | /dev/sde1 | /dev/hda8 |
| 15 | Compute2 | /dev/sde1 | /dev/hda8 |

├─hda5   8:5    0  23.2G  0 part

├─hda6   8:6    0  19.1G  0 part

├─hda7   8:7    0  23.9G  0 part

└─hda8   8:8    0  23.9G  0 part

sdb      8:16   0   1.8T  0 disk

└─sdb1   8:17   0   1.8T  0 part

sdc      8:32   0   1.8T  0 disk

sdd      8:48   0   1.8T  0 disk

sde      8:64   0   1.8T  0 disk

4.4 Install the monitors

ceph-deploy new controller1 compute1 controller2 compute2

[root@controller1 ceph]# cat ceph.conf

[global]

fsid = fd6da09a-b1e7-4a83-9c7b-42a733402825

mon_initial_members = controller1, compute1, controller2, compute2

mon_host = 10.192.44.148,10.192.44.149,10.192.44.150,10.192.44.151

auth_cluster_required = none

auth_service_required = none

auth_client_required = none

filestore_xattr_use_omap = true

public network = 10.192.44.0/23

cluster network = 10.192.44.0/23

mon clock drift allowed = 20

mon clock drift warn backoff = 30

osd pool default min size = 3

ceph-deploy install controller1 compute1 controller2 compute2

ceph-deploy --overwrite-conf mon create-initial

ceph-deploy mon create controller1 compute1 controller2 compute2

ceph-deploy gatherkeys controller1 compute1 controller2 compute2

scp * 10.192.44.149:/etc/ceph

scp * 10.192.44.150:/etc/ceph

scp * 10.192.44.151:/etc/ceph

[root@controller1 ceph]# ceph -s

cluster fd6da09a-b1e7-4a83-9c7b-42a733402825

health HEALTH_ERR

64 pgs stuck inactive

64 pgs stuck unclean

no osds

monmap e1: 4 mons at{compute1=10.192.44.149:6789/0,compute2=10.192.44.151:6789/0,controller1=10.192.44.148:6789/0,controller2=10.192.44.150:6789/0}

election epoch 4, quorum 0,1,2,3controller1,compute1,controller2,compute2

osdmap e1: 0 osds: 0 up, 0 in

pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects

0 kB used, 0 kB / 0 kB avail

64 creating

4.5 Install the OSDs (/dev/sdb1:/dev/hda5)

ceph-deploy osd prepare controller1:/dev/sdb1:/dev/hda5 compute1:/dev/sdb1:/dev/hda5 controller2:/dev/sdb1:/dev/hda5 compute2:/dev/sdb1:/dev/hda5

ceph-deploy osd activate controller1:/dev/sdb1:/dev/hda5 compute1:/dev/sdb1:/dev/hda5 controller2:/dev/sdb1:/dev/hda5 compute2:/dev/sdb1:/dev/hda5

[root@controller1 ceph]# ceph -s

cluster fd6da09a-b1e7-4a83-9c7b-42a733402825

health HEALTH_OK

monmap e1: 4 mons at{compute1=10.192.44.149:6789/0,compute2=10.192.44.151:6789/0,controller1=10.192.44.148:6789/0,controller2=10.192.44.150:6789/0}

election epoch 4, quorum 0,1,2,3controller1,compute1,controller2,compute2

osdmap e17: 4 osds: 4 up, 4 in

pgmap v26: 64 pgs, 1 pools, 0 bytes data, 0 objects

134 MB used, 7448 GB / 7448 GB avail

64 active+clean

4.6 Add more OSDs

ceph-deploy osd prepare controller1:/dev/sdc1:/dev/hda6 compute1:/dev/sdc1:/dev/hda6 controller2:/dev/sdc1:/dev/hda6 compute2:/dev/sdc1:/dev/hda6

ceph-deploy osd activate controller1:/dev/sdc1:/dev/hda6 compute1:/dev/sdc1:/dev/hda6 controller2:/dev/sdc1:/dev/hda6 compute2:/dev/sdc1:/dev/hda6

[root@controller1 ceph]# ceph -s

cluster fd6da09a-b1e7-4a83-9c7b-42a733402825

health HEALTH_WARN

too few PGs per OSD (24 < min 30)

monmap e1: 4 mons at{compute1=10.192.44.149:6789/0,compute2=10.192.44.151:6789/0,controller1=10.192.44.148:6789/0,controller2=10.192.44.150:6789/0}

election epoch 4, quorum 0,1,2,3controller1,compute1,controller2,compute2

osdmap e33: 8 osds: 8 up, 8 in

pgmap v53: 64 pgs, 1 pools, 0 bytes data,0 objects

271 MB used, 14896 GB / 14896 GBavail

64 active+clean

[root@controller1 ceph]# ceph osd pool set rbd pg_num 128

set pool 0 pg_num to 128

[root@controller1 ceph]# ceph osd pool set rbd pgp_num 128

set pool 0 pgp_num to 128

[root@controller1 ceph]# ceph -s

cluster fd6da09a-b1e7-4a83-9c7b-42a733402825

health HEALTH_OK

monmap e1: 4 mons at{compute1=10.192.44.149:6789/0,compute2=10.192.44.151:6789/0,controller1=10.192.44.148:6789/0,controller2=10.192.44.150:6789/0}

election epoch 4, quorum 0,1,2,3controller1,compute1,controller2,compute2

osdmap e37: 8 osds: 8 up, 8 in

pgmap v103: 128 pgs, 1 pools, 0 bytesdata, 0 objects

272 MB used, 14896 GB / 14896 GBavail

128 active+clean

Continue adding OSDs:

ceph-deploy osd prepare controller1:/dev/sdd1:/dev/hda7 compute1:/dev/sdd1:/dev/hda7 controller2:/dev/sdd1:/dev/hda7 compute2:/dev/sdd1:/dev/hda7

# ceph-deploy osd activate controller1:/dev/sdd1:/dev/hda7 compute1:/dev/sdd1:/dev/hda7 controller2:/dev/sdd1:/dev/hda7 compute2:/dev/sdd1:/dev/hda7

[root@controller1 ceph]# ceph -s

cluster fd6da09a-b1e7-4a83-9c7b-42a733402825

health HEALTH_OK

monmap e1: 4 mons at{compute1=10.192.44.149:6789/0,compute2=10.192.44.151:6789/0,controller1=10.192.44.148:6789/0,controller2=10.192.44.150:6789/0}

election epoch 4, quorum 0,1,2,3controller1,compute1,controller2,compute2

osdmap e53: 12 osds: 12 up, 12 in

pgmap v140: 128 pgs, 1 pools, 0 bytesdata, 0 objects

416 MB used, 22344 GB / 22345 GBavail

128 active+clean

Continue adding OSDs:

ceph-deploy osd prepare controller1:/dev/sde1:/dev/hda8 compute1:/dev/sde1:/dev/hda8 controller2:/dev/sde1:/dev/hda8 compute2:/dev/sde1:/dev/hda8

ceph-deploy osd activate controller1:/dev/sde1:/dev/hda8 compute1:/dev/sde1:/dev/hda8 controller2:/dev/sde1:/dev/hda8 compute2:/dev/sde1:/dev/hda8

[root@controller1 ceph]# ceph -s

cluster fd6da09a-b1e7-4a83-9c7b-42a733402825

health HEALTH_WARN

too few PGs per OSD (24 < min 30)

monmap e1: 4 mons at{compute1=10.192.44.149:6789/0,compute2=10.192.44.151:6789/0,controller1=10.192.44.148:6789/0,controller2=10.192.44.150:6789/0}

election epoch 4, quorum 0,1,2,3controller1,compute1,controller2,compute2

osdmap e69: 16 osds: 16 up, 16 in

pgmap v176: 128 pgs, 1 pools, 0 bytesdata, 0 objects

558 MB used, 29793 GB / 29793 GBavail

128 active+clean
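The HEALTH_WARN above follows from the PGs-per-OSD ratio: 128 PGs × 3 replicas spread over 16 OSDs gives 24, below the minimum of 30 (3 replicas is an assumption consistent with the pool defaults in this setup). Doubling pg_num to 256, as done next, raises it to 48:

```shell
# PGs per OSD = pgs * replicas / osds
echo $(( 128 * 3 / 16 ))   # prints 24 -> triggers the warning
echo $(( 256 * 3 / 16 ))   # prints 48 -> healthy
```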

[root@controller1 ceph]# ceph osd pool set rbd pg_num 256

set pool 0 pg_num to 256

[root@controller1 ceph]# ceph osd pool set rbd pgp_num 256

set pool 0 pgp_num to 256

[root@controller1 ceph]# ceph -s

cluster fd6da09a-b1e7-4a83-9c7b-42a733402825

health HEALTH_OK

monmap e1: 4 mons at{compute1=10.192.44.149:6789/0,compute2=10.192.44.151:6789/0,controller1=10.192.44.148:6789/0,controller2=10.192.44.150:6789/0}

election epoch 4, quorum 0,1,2,3controller1,compute1,controller2,compute2

osdmap e73: 16 osds: 16 up, 16 in

pgmap v184: 256 pgs, 1 pools, 0 bytes data,0 objects

559 MB used, 29793 GB / 29793 GBavail

256 active+clean

4.7 Create the OSD pools

[root@controller1 ceph]# ceph pg stat

v209: 256 pgs: 256 active+clean; 0 bytesdata, 562 MB used, 29793 GB / 29793 GB avail

ceph osd pool create image 128

ceph osd pool create volumes 128

ceph osd pool create vms 128

[root@controller1 ceph]# ceph osd pool create image 128

pool 'image' created

[root@controller1 ceph]# ceph osd pool create volumes 128

pool 'volumes' created

[root@controller1 ceph]# ceph osd pool create vms 128

pool 'vms' created

[root@controller1 ceph]#

4.8 Re-run basic functional verification: images, storage, VMs

4.8.1 Confirm the configuration and restart services (already configured earlier)

The backend configuration is already in place; see PART 5:

Cinder:

[DEFAULT]

enabled_backends = ceph

[ceph]

volume_driver = cinder.volume.drivers.rbd.RBDDriver

rbd_pool = volumes

rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_flatten_volume_from_snapshot = false

rbd_max_clone_depth = 5

rbd_store_chunk_size = 4

rados_connect_timeout = -1

glance_api_version = 2

Glance:

[glance_store]

default_store = rbd

stores = rbd

rbd_store_pool = image

rbd_store_user = glance

rbd_store_ceph_conf=/etc/ceph/ceph.conf

rbd_store_chunk_size = 8

Nova:

[libvirt]

images_type = rbd

images_rbd_pool = vms

images_rbd_ceph_conf = /etc/ceph/ceph.conf

disk_cachemodes = "network=writeback"

inject_password = false

inject_key = false

inject_partition = -2

live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"

Restart the services:

Cinder-volume (must be enabled on both compute and controller nodes):

# systemctl enable openstack-cinder-volume.service target.service

# systemctl restart openstack-cinder-volume.service target.service

Glance services:

systemctl restart openstack-glance-api.service openstack-glance-registry.service

Nova services:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl restart openstack-nova-compute.service

4.8.2 Quick verification of basic cinder, nova, and glance functionality

Upload image: passed

Create volume: passed

Boot VM: passed

VM live migration: migration passed and state is normal, but VNC is unavailable; this needs the HA setup, with access configured through the VIP port.

4.9 Issue: after a server reboots, its OSDs go down

Root cause: the OSD's data partition (e.g. /dev/sdb1) was not mounted under /var/lib/ceph/osd/ceph-x/, so the OSD failed to start.

Fix: add entries like the following to /etc/fstab so the partitions are mounted at boot.

Controller1:

/dev/sdb1 /var/sdb1 xfs defaults 0 0
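One fstab line is needed per OSD data partition on each node. A small sketch that prints all four entries in the same style as the line above (mount points mirror this doc's /var/sdX1 convention and must match wherever each OSD's data is actually expected):

```shell
# Generate one fstab entry per OSD data partition.
for part in sdb1 sdc1 sdd1 sde1; do
  printf '/dev/%s /var/%s xfs defaults 0 0\n' "$part" "$part"
done
```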

5. Install ntp, keepalived, and haproxy (controller nodes controller1 and controller2)

See PART 3 of the reference docs.

# yum install keepalived

# yum install haproxy

5.1 NTP synchronization

Primary node controller1:

# systemctl restart ntpd.service

Other nodes:

systemctl stop ntpd.service

ntpdate controller1

systemctl start ntpd.service

5.2 Configure keepalived (controller1 and controller2)

Controller1 (master):

vrrp_script checkhaproxy

{

script "/etc/keepalived/check.sh"

interval 3

weight -20

}

vrrp_instance VI_1 {

state MASTER

interface eth0

virtual_router_id 51

priority 101

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

10.192.45.220

}

track_script{

checkhaproxy

}

}

Controller2 (backup):

vrrp_script checkhaproxy

{

script "/etc/keepalived/check.sh"

interval 3

weight -20

}

vrrp_instance VI_1 {

state BACKUP

interface eth0

virtual_router_id 51

priority 100

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

10.192.45.220    # service VIP

}

track_script{

checkhaproxy

}

}

[root@controller1 keepalived]# cat check.sh

#!/bin/bash

count=$(ps aux | grep -v grep | grep haproxy | wc -l)

if [ "$count" -gt 0 ]; then

exit 0

else

exit 1

fi

5.3 Configure haproxy (controller1 and controller2)

Configuration file: /etc/haproxy/haproxy.cfg

Set on both haproxy nodes:

Add to /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind=1

[root@controller1 haproxy]# sysctl -p

net.ipv4.ip_nonlocal_bind = 1

# vi /etc/default/haproxy    # important

ENABLED=1

[root@lxp-node2 haproxy]# cat haproxy.cfg

global

log 127.0.0.1 local2

chroot /var/lib/haproxy

pidfile /var/run/haproxy.pid

maxconn 4000

user haproxy

group haproxy

daemon

#turn on stats unix socket

stats socket /var/lib/haproxy/stats

defaults

mode http

log global

option httplog

option dontlognull

option http-server-close

option forwardfor except 127.0.0.0/8

option redispatch

retries 3

timeout http-request 10s

timeout queue 1m

timeout connect 10s

timeout client 1m

timeout server 1m

timeout http-keep-alive 10s

timeout check 10s

maxconn 3000

frontend main *:5050

acl url_static path_beg -i /static /images /javascript /stylesheets

acl url_static path_end -i .jpg .gif .png .css .js
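The config listing above is truncated. For OpenStack endpoints the usual pattern is one listen block per service on the VIP; a hypothetical sketch for the Keystone public API, using the service VIP and controller addresses from section 1.1 (ports and options are assumptions, not from the original notes):

```
# Hypothetical example: balance the Keystone public API across both controllers.
listen keystone_public
    bind 10.192.45.220:5000
    mode tcp
    balance roundrobin
    server controller1 10.192.44.148:5000 check inter 2000 rise 2 fall 5
    server controller2 10.192.44.150:5000 check inter 2000 rise 2 fall 5
```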

5.4 Start haproxy and keepalived

systemctl enable haproxy

systemctl restart haproxy

[root@controller2 haproxy]# ps -A |grep haproxy

13721 ? 00:00:00 haproxy-systemd

13722 ? 00:00:00 haproxy

13723 ? 00:00:00 haproxy

#systemctl enable keepalived

#systemctl restart keepalived

[root@controller1 system]# ps -A |grep keep

32071 ? 00:00:00 keepalived

32072 ? 00:00:00 keepalived

32073 ? 00:00:00 keepalived

6. RabbitMQ high availability

6.1 Install rabbitmq on controller2

# yum install rabbitmq-server

# systemctl enable rabbitmq-server.service

# systemctl start rabbitmq-server.service

# rabbitmqctl add_user openstack 1

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

6.2 Form the rabbitmq cluster

Sync the Erlang cookie:

[root@controller2 haproxy]# cd /var/lib/rabbitmq/

[root@controller2 rabbitmq]# rm .erlang.cookie

[root@controller2 rabbitmq]# scp controller1:/var/lib/rabbitmq/.erlang.cookie ./

Warning: Permanently added 'controller1,10.192.44.148' (ECDSA) to the list of known hosts.

.erlang.cookie                100%   20     0.0KB/s   00:00

[root@controller2 rabbitmq]#

[root@controller2 rabbitmq]# chown rabbitmq:rabbitmq .erlang.cookie

[root@controller2 rabbitmq]#

[root@controller2 rabbitmq]# systemctl restart rabbitmq-server.service

Join the cluster:

[root@controller2 rabbitmq]# rabbitmqctl stop_app

Stopping node rabbit@controller2 ...

...done.

[root@controller2 rabbitmq]# rabbitmqctl join_cluster rabbit@controller1

Clustering node rabbit@controller2 with rabbit@controller1 ...

...done.

[root@controller2 rabbitmq]# rabbitmqctl start_app

Starting node rabbit@controller2 ...

...done.

[root@controller2 rabbitmq]# rabbitmqctl cluster_status

Cluster status of node rabbit@controller2...

[{nodes,[{disc,[rabbit@controller1,rabbit@controller2]}]},

{running_nodes,[rabbit@controller1,rabbit@controller2]},

{cluster_name,<<"rabbit@controller1">>},

{partitions,[]}]

...done.

[root@controller2 rabbitmq]# rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'

Setting policy "ha-all" for pattern "^(?!amq\\.).*" to "{\"ha-mode\":\"all\"}" with priority "0" ...

...done.

[root@controller2 rabbitmq]#

6.3 Point every service backend at both rabbit hosts and restart the services

Keystone:

[root@controller1 keystone]# grep rabbit_hosts ./ -r

./keystone.conf:rabbit_hosts = "10.192.44.148:5672"

Change to:

rabbit_hosts = "10.192.44.148:5672,10.192.44.150:5672"

# systemctl restart httpd.service

Glance: edit glance-registry.conf and glance-api.conf:

[oslo_messaging_rabbit]

rabbit_host = 10.192.44.148

rabbit_port = 5672

rabbit_hosts = "10.192.44.148:5672,10.192.44.150:5672"

rabbit_use_ssl = False

rabbit_userid = openstack

rabbit_password = 1

#systemctl restartopenstack-glance-api.service openstack-glance-registry.service

Nova (must be changed on both controller and compute nodes), nova.conf:

[oslo_messaging_rabbit]

kombu_reconnect_delay=1.0

rabbit_host=10.192.44.148

rabbit_port=5672

rabbit_hosts ="10.192.44.148:5672,10.192.44.150:5672"

rabbit_use_ssl=False

rabbit_userid=openstack

rabbit_password=1

rabbit_virtual_host=/

rabbit_ha_queues=False

heartbeat_timeout_threshold=0

heartbeat_rate=2

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute nodes:

systemctl restart openstack-nova-compute.service

Neutron (change on both controller and compute nodes):

[oslo_messaging_rabbit]

kombu_reconnect_delay = 1.0

rabbit_host = 10.192.44.148

rabbit_port = 5672

rabbit_hosts ="10.192.44.148:5672,10.192.44.150:5672"

rabbit_use_ssl = False

rabbit_userid = openstack

rabbit_password = 1

rabbit_virtual_host = /

rabbit_ha_queues = False

heartbeat_rate=2

heartbeat_timeout_threshold=0

Controller/network nodes:

systemctl start openvswitch.service

systemctl restart neutron-server.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute nodes:

systemctl start openvswitch.service

systemctl restart neutron-openvswitch-agent.service

Cinder (change on both controller and compute nodes):

[oslo_messaging_rabbit]

kombu_ssl_keyfile =

kombu_ssl_certfile =

kombu_ssl_ca_certs =

rabbit_host = 10.192.44.148

rabbit_port = 5672

rabbit_hosts ="10.192.44.148:5672,10.192.44.150:5672"

rabbit_userid = openstack

rabbit_password = 1

rabbit_use_ssl = False

rabbit_virtual_host = /

rabbit_ha_queues = False

heartbeat_timeout_threshold = 0

heartbeat_rate = 2

systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl restart openstack-cinder-volume.service target.service

Horizon: no changes needed.

6.4 Verify image, network, storage, and VM functionality

Images: delete and upload OK

Storage: create and delete OK

VMs: create and delete OK

Network: OK

6.5 Verify rabbitmq single-point failure

Stop rabbitmq on controller1:

systemctl stop rabbitmq-server.service

[root@controller1 nova(keystone_admin)]# systemctl restart openstack-nova-compute.service

[root@controller1 nova(keystone_admin)]# systemctl stop rabbitmq-server.service

[root@controller1 nova(keystone_admin)]# ps -auxf |grep rabbit

root 12379 0.0 0.0 112640 976 pts/0 S+ 15:53 0:00 | \_ grep --color=auto rabbit

[root@controller1 nova(keystone_admin)]#

Verify the functions above:

Images: delete and upload OK

Storage: create and delete OK

VMs: create and delete OK

Network: OK

systemctl restart rabbitmq-server.service

Recovery time: immediate

7. MySQL high availability

Plan: use the Galera approach for MySQL; the database is MariaDB and the storage engine is InnoDB.

7.1 Remove the StorOS bundled mariadb and install mariadb-galera

7.1.1 Remove the StorOS bundled mariadb

Check the StorOS bundled mariadb on controller2:

[root@controller2 ~]# rpm -aq |grep maria

mariadb-devel-5.5.35-3.el7.x86_64

mariadb-5.5.35-3.el7.x86_64

mariadb-test-5.5.35-3.el7.x86_64

mariadb-libs-5.5.35-3.el7.x86_64

mariadb-embedded-5.5.35-3.el7.x86_64

mariadb-embedded-devel-5.5.35-3.el7.x86_64

mariadb-server-5.5.35-3.el7.x86_64

# rpm -e --nodeps mariadb-devel mariadb mariadb-test mariadb-libs mariadb-embedded mariadb-embedded-devel mariadb-server

[root@controller2 ~]# rpm -aq |grep maria

[root@controller2 ~]#

# find /var/ -name mysql |xargs rm -rf

# find /usr/ -name mysql |xargs rm -rf

7.1.2 Install mariadb-galera

# yum install mariadb-galera-server galera xtrabackup socat

[root@controller2 ~]# rpm -aq |grep maria

mariadb-galera-common-5.5.40-3.el7.x86_64

mariadb-libs-5.5.44-2.el7.centos.x86_64

mariadb-galera-server-5.5.40-3.el7.x86_64

mariadb-5.5.44-2.el7.centos.x86_64

[root@controller2 ~]# rpm -aq |grep galera

mariadb-galera-common-5.5.40-3.el7.x86_64

galera-25.3.14-1.rhel7.el7.centos.x86_64

mariadb-galera-server-5.5.40-3.el7.x86_64

7.2 Build the MySQL database cluster

7.2.1 controller1 (primary node) configuration

(1) Confirm the version

[root@controller1 my.cnf.d]# mysql -uroot -p1

Welcome to the MariaDB monitor. Commands end with ; or \g.

Your MariaDB connection id is 280

Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

(2) Create the sst account used for database synchronization

MariaDB [(none)]> GRANT USAGE ON *.* to sst@'%' IDENTIFIED BY 'sstpass123';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES on *.* to sst@'%';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;

Query OK, 0 rows affected (0.00 sec)

(3) Create the wsrep.cnf file

# cp /usr/share/mariadb-galera/wsrep.cnf /etc/my.cnf.d/

Edit:

wsrep_provider=/usr/lib64/galera/libgalera_smm.so

wsrep_cluster_address="dummy://" -> wsrep_cluster_address="gcomm://"    # note: change this to gcomm; the original is dummy

wsrep_sst_auth=sst:sstpass123

wsrep_sst_method=rsync

Actual settings:

wsrep_provider=/usr/lib64/galera/libgalera_smm.so

wsrep_cluster_name="my_wsrep_cluster"    # this is the default; better to change it too

wsrep_cluster_address="gcomm://"

wsrep_sst_auth=sst:sstpass123

wsrep_sst_method=rsync

(4) Add the following line to /etc/my.cnf

!includedir /etc/my.cnf.d

(5) Disable the firewall

systemctl stop firewalld.service

systemctl disable firewalld.service

(6) Restart the mysql service

# systemctl enable mariadb.service

# systemctl restart mariadb.service

[root@controller1 my.cnf.d]# ps -A |grep mysql

16919 ? 00:00:00 mysqld_safe

17664 ? 00:00:00 mysqld

(7) Check ports 3306 and 4567

[root@controller1 my.cnf.d]# ss -nalp |grep 3306

tcp LISTEN 0 50 *:3306 *:* users:(("mysqld",17664,23))

[root@controller1 my.cnf.d]# ss -nalp |grep 4567

tcp LISTEN 0 128 *:4567 *:* users:(("mysqld",17664,11))

7.2.2 controller2 (standby node) configuration

(1) Install mariadb

yum install MySQL-python mariadb-galera-server galera xtrabackup socat

(2) Enable mysql at boot

# systemctl enable mariadb.service

# systemctl start mariadb.service

[root@controller2 my.cnf.d]# ps -A |grep mysql

5313? 00:00:00 mysqld_safe

5816? 00:00:00 mysqld

[root@controller2 my.cnf.d]#

(3) Set the password and harden the install

# /usr/bin/mysql_secure_installation

Use the same password as the primary node.

(4) Confirm mariadb is installed correctly and running

[root@controller2 my.cnf.d]# mysql -uroot -p1

Welcome to the MariaDB monitor. Commands end with ; or \g.

Your MariaDB connection id is 10

Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

(5) Create the SST account used for database synchronization

MariaDB [(none)]> GRANT USAGE ON *.* to sst@'%' IDENTIFIED BY 'sstpass123';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES on *.* to sst@'%';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;

Query OK, 0 rows affected (0.00 sec)

(6) Create the wsrep.cnf file

# cp /usr/share/mariadb-galera/wsrep.cnf /etc/my.cnf.d/

wsrep_provider=/usr/lib64/galera/libgalera_smm.so

wsrep_cluster_address="gcomm://10.192.44.148:4567"

wsrep_sst_auth=sst:sstpass123

wsrep_sst_method=rsync

(7) Add to /etc/my.cnf

!includedir /etc/my.cnf.d

(8) Disable the firewall

systemctl stop firewalld.service

systemctl disable firewalld.service

(9) Restart the MariaDB service

# systemctl enable mariadb.service

# systemctl restart mariadb.service

Error:

160601 16:51:27 [ERROR] WSREP: Could not prepare state transfer request: failed to guess address to accept state transfer at. wsrep_sst_receive_address must be set manually.

160601 16:51:27 [ERROR] Aborting

Compare with the working configuration:

Try setting the wsrep_sst_receive_address parameter:

wsrep_sst_receive_address=10.192.44.150

Try installing rsync first:

# yum install rsync

[root@controller1 system]# systemctl enable rsyncd

Created symlink from /etc/systemd/system/multi-user.target.wants/rsyncd.service to /usr/lib/systemd/system/rsyncd.service.

[root@controller1 system]#

[root@controller1 system]# systemctl restart rsyncd

[root@controller1 system]# ps -A | grep rsync

24101 ? 00:00:00 rsync

[root@controller1 system]#

OK, the service started successfully.

7.2.3 Verify the cluster

On controller2, list the databases; the data has already been synchronized.

MariaDB [(none)]> show databases;

+--------------------+

| Database |

+--------------------+

| information_schema |

| cinder |

| glance |

| keystone |

| mysql |

| neutron |

| nova |

| performance_schema |

+--------------------+

8 rows in set (0.00 sec)

7.3 HAProxy configuration

Change MySQL's own port to 3311:

Edit /etc/my.cnf.d/server.cnf

[mysqld]

port=3311

Restart the service:

systemctl restart mariadb.service

Check the ports:

[root@controller1 my.cnf.d(keystone_admin)]# ss -nalp | grep mysql

u_str LISTEN 0 50 /var/lib/mysql/mysql.sock 7089035 * 0 users:(("mysqld",12708,24))

tcp LISTEN 0 50 *:3311 *:* users:(("mysqld",12708,23))

tcp LISTEN 0 128 *:4567 *:* users:(("mysqld",12708,11))

In HAProxy, listen for MySQL on port 3311:

Only the MySQL port is proxied through HAProxy; the Galera cluster port does not need to be.

listen mariadb

bind 10.192.45.213:3306

mode tcp

balance leastconn

option mysql-check user haproxy

server controller1 10.192.44.148:3311 weight 1 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:3311 weight 1 check inter 2000 rise 2 fall 5
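One pitfall worth noting: `option mysql-check user haproxy` makes HAProxy log in as a MySQL user named `haproxy` with an empty password, so that account must exist on every backend node or the health check marks both servers down. A hedged setup sketch (the account name matches the config above; once run on one node, Galera replicates it):

```shell
# Write the check-user setup to a file; apply it once with
# "mysql -uroot -p1 < /tmp/haproxy_check_user.sql" (not run here):
cat > /tmp/haproxy_check_user.sql <<'SQL'
CREATE USER 'haproxy'@'%';
FLUSH PRIVILEGES;
SQL
grep -c haproxy /tmp/haproxy_check_user.sql   # 1
```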

Restart HAProxy:

systemctl restart haproxy

Check the ports:

[root@controller1 haproxy]# ss -nalp | grep -e 3306 -e 3311 -e 4567

tcp LISTEN 0 128 10.192.45.213:3306 *:* users:(("haproxy",10067,7))

tcp LISTEN 0 50 *:3311 *:* users:(("mysqld",8422,23))

tcp LISTEN 0 128 *:4567 *:* users:(("mysqld",8422,11))

[root@controller1 haproxy]#

Grant all the required privileges:

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '1';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '1';

If the following error appears:

160602 14:32:43 [ERROR] WSREP: Local state seqno (861) is greater than group seqno (851): states diverged. Aborting to avoid potential data loss. Remove '/var/lib/mysql//grastate.dat' file and restart if you wish to continue. (FATAL)

Solution:

# rm /var/lib/mysql//grastate.dat

[root@controller2 my.cnf.d]# systemctl restart mariadb.service

[root@controller2 my.cnf.d]#

7.4 Database cluster privileges and access verification

(1) The following shows that the MySQL server itself now listens on 3311 rather than 3306:

[root@controller1 haproxy]# mysql --host=10.192.44.148 --port=3306 --user=glance --password='1'

ERROR 2003 (HY000): Can't connect to MySQL server on '10.192.44.148' (111)

[root@controller1 haproxy]#

[root@controller1 haproxy]#

[root@controller1 haproxy]# mysql --host=10.192.44.148 --port=3311 --user=glance --password='1'

Welcome to the MariaDB monitor. Commands end with ; or \g.

Your MariaDB connection id is 137

Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

(2) The following shows the HAProxy listener is working:

[root@controller1 haproxy]# mysql --host=10.192.45.220 --port=3306 --user=glance --password='1'

Welcome to the MariaDB monitor. Commands end with ; or \g.

Your MariaDB connection id is 219

Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> Ctrl-C -- exit!

If the following problem appears:

[root@controller1 my.cnf.d(keystone_admin)]# mysql --host=10.192.45.220 --port=3306 --user=glance --password='1'

ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0

Solution:

haproxy.cfg is misconfigured; fix the listen section.

(3) Stop MariaDB on this node, then access the cluster address

[root@controller1 haproxy]# systemctl stopmariadb.service

[root@controller1 haproxy]# ps -A |grep mysql

[root@controller1 haproxy]# mysql--host=10.192.45.220 --port=3306 --user=glance --password='1'

Welcome to the MariaDB monitor. Commands end with ; or \g.

Your MariaDB connection id is 306

Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> Ctrl-C -- exit!

Aborted

This shows the cluster is working.

7.6 Point every OpenStack component's database configuration at the VIP (not a node address); the port stays at the default 3306

Keystone:

./keystone.conf:#connection = mysql://keystone:1@10.192.44.148/keystone

./keystone.conf:connection = mysql://keystone:1@10.192.45.220/keystone

Glance:

./glance-registry.conf:connection=mysql://glance:1@10.192.44.148/glance

./glance-api.conf:connection=mysql://glance:1@10.192.44.148/glance

Changed to:

./glance-registry.conf:connection=mysql://glance:1@10.192.45.220/glance

./glance-api.conf:connection=mysql://glance:1@10.192.45.220/glance

Nova:

./nova.conf:sql_connection=mysql://nova:1@10.192.45.220/nova

Cinder:

./cinder.conf:#connection = mysql://cinder:1@10.192.44.148/cinder

./cinder.conf:connection = mysql://cinder:1@10.192.45.220/cinder

Neutron:

./neutron.conf:#connection = mysql://neutron:1@10.192.44.148/neutron

./neutron.conf:connection = mysql://neutron:1@10.192.45.220/neutron
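All of the edits in this section are the same host substitution; a sed sketch on a scratch file (against the real config files you would back up first and list the paths explicitly):

```shell
# Swap the old controller address for the VIP in a config line:
f=$(mktemp)
echo 'connection = mysql://neutron:1@10.192.44.148/neutron' > "$f"
sed -i 's/10\.192\.44\.148/10.192.45.220/g' "$f"
cat "$f"   # connection = mysql://neutron:1@10.192.45.220/neutron
rm -f "$f"
```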

Restart the services:

Nova:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute nodes:

systemctl restart openstack-nova-compute.service

Neutron:

Controller/network nodes:

systemctl restart openvswitch.service

systemctl restart neutron-server.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute nodes:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

Glance:

systemctl restart openstack-glance-api.service openstack-glance-registry.service

Cinder:

systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl restart openstack-cinder-volume.service target.service

Keystone:

systemctl restart httpd.service memcached.service

7.7 Basic OpenStack functional verification

Verification failed; multiple functions malfunctioned.

7.8 Change the ports to vip:3306 and 4567

Modify HAProxy:

[root@controller2 haproxy]# systemctlrestart haproxy

[root@controller2 haproxy]# systemctlrestart keepalived

Verify remote access:

mysql --host=10.192.45.213 --port=3306 --user=glance --password='1'

[root@controller2 haproxy]# mysql --host=10.192.45.213 --port=3306 --user=glance --password='1'

Welcome to the MariaDB monitor. Commands end with ; or \g.

Your MariaDB connection id is 1195

Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> Ctrl-C -- exit!

Aborted

[root@controller2 haproxy]# systemctl stop mariadb.service

[root@controller2 haproxy]# ps -A |grep mysql

[root@controller2 haproxy]# mysql --host=10.192.45.213 --port=3306 --user=glance --password='1'

Welcome to the MariaDB monitor. Commands end with ; or \g.

Your MariaDB connection id is 1196

Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

Change the ports in haproxy.cfg to 3306 and 4567:

listen mariadb

bind 10.192.45.213:3306

mode tcp

balance leastconn

option mysql-check user haproxy

server controller1 10.192.44.148:3311 weight 1 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:3311 weight 1 check inter 2000 rise 2 fall 5

7.9 Update the OpenStack configurations as follows:

Keystone:

connection = mysql://keystone:1@10.192.45.213/keystone

Glance:

glance-api.conf

connection=mysql://glance:1@10.192.45.213/glance

vi glance-registry.conf

connection=mysql://glance:1@10.192.45.213/glance

Neutron:

connection = mysql://neutron:1@10.192.45.213/neutron

Nova:

sql_connection=mysql://nova:1@10.192.45.213/nova

Cinder:

connection = mysql://cinder:1@10.192.45.213/cinder

Restart the services:

Image service: OK

Volumes: error:

2016-06-03 11:13:30.370 24731 ERROR cinder.volume.api File "/usr/lib64/python2.7/site-packages/numpy/lib/polynomial.py", line 11, in

2016-06-03 11:13:30.370 24731 ERROR cinder.volume.api import numpy.core.numeric as NX

2016-06-03 11:13:30.370 24731 ERROR cinder.volume.api AttributeError: 'module' object has no attribute 'core'

Solution: remove the stale compiled Python files:

[root@controller2 numpy]# find ./ -name \*.pyc | xargs rm -rf

[root@controller2 numpy]# find ./ -name \*.pyo | xargs rm -rf

7.10 MySQL single-node failure verification

Everything works normally.

8. Keystone and httpd high availability

8.1 Install Keystone and httpd on controller2

yum install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached

systemctl enable memcached.service

systemctl start memcached.service

8.2 Configure Keystone

Copy the Keystone configuration over from controller1.

Check and modify the following settings:

[root@controller2 keystone]# grep 192 ./ -r

./keystone.conf:connection = mysql://keystone:1@10.192.45.220/keystone

./keystone.conf:rabbit_host = 10.192.44.148

./keystone.conf:rabbit_hosts = "10.192.44.148:5672,10.192.44.150:5672"

Only the database and RabbitMQ settings appear here, and both are already highly available.

8.3 Configure httpd

Copy the httpd configuration over from controller1:

8.3.1 conf directory

httpd.conf

ServerName "10.192.44.150"

8.3.2 conf.d directory

[root@controller2 conf.d]# grep 192 ./ -r

./15-horizon_vhost.conf: ServerAlias 10.192.44.148

[root@controller2 conf.d]# grep controller ./ -r

./15-horizon_vhost.conf: ServerName controller1

./15-horizon_vhost.conf: ServerAlias controller1

./10-keystone_wsgi_admin.conf: ServerName controller1

./10-keystone_wsgi_main.conf: ServerName controller1

./15-default.conf: ServerName controller1

Change every occurrence of 10.192.44.148 and controller1 here to 10.192.44.150.

After the change:

[root@controller2 conf.d]# grep 192 ./ -r

./15-horizon_vhost.conf: ServerName 10.192.44.150

./15-horizon_vhost.conf: ServerAlias 10.192.44.150

./10-keystone_wsgi_admin.conf: ServerName 10.192.44.150

./10-keystone_wsgi_main.conf: ServerName 10.192.44.150

Modify ./conf.d/15-default.conf: change ServerName controller1 to

ServerName 10.192.44.150
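These per-file edits are again a pair of substitutions; a sed sketch on a scratch file (run against the files under /etc/httpd after backing them up):

```shell
# Replace both the hostname and the old IP with this node's address:
f=$(mktemp)
printf 'ServerName controller1\nServerAlias 10.192.44.148\n' > "$f"
sed -i -e 's/controller1/10.192.44.150/g' -e 's/10\.192\.44\.148/10.192.44.150/g' "$f"
cat "$f"
rm -f "$f"
```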

8.3.3 Create the /var/www/cgi-bin/keystone directory

You can simply copy the /var/www directory over from controller1.

[root@controller2 cgi-bin]# chown -R keystone:keystone /var/www/cgi-bin/keystone

[root@controller2 cgi-bin]# chmod 755 /var/www/cgi-bin/keystone/ -R

Restart the service:

# systemctl enable httpd.service

# systemctl restart httpd.service

Log in to verify:

8.4 Keystone high availability: configure HAProxy

Edit haproxy.cfg:

listen keystone_admin_cluster

bind 10.192.45.220:35362

balance source

option tcpka

option httpchk

option tcplog

server controller1 10.192.44.148:35357 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:35357 check inter 2000 rise 2 fall 5

listen keystone_public_internal_cluster

bind 10.192.45.220:5005

balance source

option tcpka

option httpchk

option tcplog

server controller1 10.192.44.148:5000 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:5000 check inter 2000 rise 2 fall 5

Restart HAProxy:

systemctl restart haproxy

Check:

[root@controller1 haproxy(keystone_admin)]# ps -A | grep ha

20412 ? 00:00:00 haproxy-systemd

20413 ? 00:00:00 haproxy

20414 ? 00:00:00 haproxy

[root@controller1 haproxy(keystone_admin)]# ss -nalp | grep haproxy

u_str LISTEN 0 10 /var/lib/haproxy/stats.20413.tmp 3131922 * 0 users:(("haproxy",20414,4))

tcp UNCONN 0 0 *:50375 *:* users:(("haproxy",20414,6),("haproxy",20413,6))

tcp LISTEN 0 128 10.192.45.220:35362 *:* users:(("haproxy",20414,8))

tcp LISTEN 0 128 10.192.45.220:3306 *:* users:(("haproxy",20414,7))

tcp LISTEN 0 128 10.192.45.220:5005 *:* users:(("haproxy",20414,9))

tcp LISTEN 0 128 *:5050 *:* users:(("haproxy",20414,5))
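The VIP listeners can be pulled out of that ss output mechanically; an awk sketch over the captured lines above:

```shell
# Captured ss lines (from the output above):
ss_out='tcp LISTEN 0 128 10.192.45.220:35362 *:* users:(("haproxy",20414,8))
tcp LISTEN 0 128 10.192.45.220:3306 *:* users:(("haproxy",20414,7))
tcp LISTEN 0 128 10.192.45.220:5005 *:* users:(("haproxy",20414,9))'
# Field 5 is the local address; the part after the last ":" is the port:
echo "$ss_out" | awk '{n = split($5, a, ":"); print a[n]}'
```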

8.5 Update the Keystone endpoint in the database

List the endpoints:

[root@controller1 haproxy(keystone_admin)]# openstack endpoint list

+----------------------------------+-----------+--------------+--------------+

| ID | Region | Service Name | Service Type |

+----------------------------------+-----------+--------------+--------------+

| 49a032e19f9841b381e795f60051f131 |RegionOne | glance | image |

| 7d19c203fdc7495fbfd0b01d9bc6203c |RegionOne | cinder | volume |

| c34d670ee15b47bda43830a48e9c4ef2 |RegionOne | nova | compute |

| 63fa679e443a4249a96a86ff17387b9f |RegionOne | neutron | network |

| 7fd7d16a27d74eeea3a9df764d3e0a74 |RegionOne | cinderv2 | volumev2 |

| 6df505c12153483a9f8dc42d64879c69 | RegionOne | keystone | identity |

+----------------------------------+-----------+--------------+--------------+

[root@controller1 haproxy(keystone_admin)]# openstack endpoint show identity

+--------------+----------------------------------+

| Field | Value |

+--------------+----------------------------------+

| adminurl | http://controller1:35357/v2.0 |

| enabled | True |

| id | 6df505c12153483a9f8dc42d64879c69 |

| internalurl | http://controller1:5000/v2.0 |

| publicurl | http://controller1:5000/v2.0 |

| region | RegionOne |

| service_id | 69c389157be24cf6b4511d648e8412be |

| service_name | keystone |

| service_type | identity |

+--------------+----------------------------------+

Delete that endpoint:

[root@controller1 haproxy(keystone_admin)]# openstack endpoint delete 6df505c12153483a9f8dc42d64879c69

export OS_TOKEN=5a67199a1ba44a78ddcb

export OS_URL=http://10.192.44.148:35357/v2.0

[root@controller1 haproxy(keystone_admin)]#

Create the new one:

$ openstack endpoint create \

--publicurl http://10.192.45.220:5005/v2.0 \

--internalurl http://10.192.45.220:5005/v2.0 \

--adminurl http://10.192.45.220:35362/v2.0 \

--region RegionOne \

identity

# openstack endpoint create --publicurl http://10.192.45.220:5005/v2.0 --internalurl http://10.192.45.220:5005/v2.0 --adminurl http://10.192.45.220:35362/v2.0 --region RegionOne identity

+--------------+----------------------------------+

| Field | Value |

+--------------+----------------------------------+

| adminurl | http://10.192.45.220:35362/v2.0 |

| id | 1ac4fa49f27745ec9e20b8e2c93b8f4b |

| internalurl | http://10.192.45.220:5005/v2.0 |

| publicurl | http://10.192.45.220:5005/v2.0 |

| region | RegionOne |

| service_id | 69c389157be24cf6b4511d648e8412be |

| service_name | keystone |

| service_type | identity |

+--------------+----------------------------------+

8.6 Verify the new Keystone endpoint

unset OS_TOKEN OS_URL

openstack --os-auth-url http://10.192.45.220:35362 --os-project-name admin --os-username admin --os-auth-type password token issue

[root@controller1 ~]# openstack --os-auth-url http://10.192.45.220:35362 --os-project-name admin --os-username admin --os-auth-type password token issue

Password:

+------------+----------------------------------+

| Field      | Value                            |

+------------+----------------------------------+

| expires    | 2016-06-03T07:48:51Z             |

| id         | 9d12c94970744122a497bcbe9a794171 |

| project_id | 617e98e151b245d081203adcbb0ce7a4 |

| user_id    | cfca3361950644de990b52ad341a06f0 |

+------------+----------------------------------+

Update the environment-variable script to:

[root@controller1 ~]# cat admin_keystone

unset OS_SERVICE_TOKEN OS_TOKEN OS_URL

export OS_USERNAME=admin

export OS_PASSWORD=1

export OS_AUTH_URL=http://10.192.45.220:35362/v2.0

export PS1='[\u@\h \W(keystone_admin)]\$ '

export OS_TENANT_NAME=admin

export OS_REGION_NAME=RegionOne

export OS_IMAGE_API_VERSION=2

[root@controller1 ~]#

Source the script and run a command:

[root@controller1 ~]# . admin_keystone

[root@controller1 ~(keystone_admin)]# openstack user list

+----------------------------------+---------+

| ID | Name |

+----------------------------------+---------+

| 0520ac06230f4c238ef96c66dc9d7ba6 | nova    |

| 2398cfe405ac4480b27d3dfba36b64b4 | neutron |

| 290aac0402914399a187218ac6d351af | cinder  |

| 9b9b7d340f5c47fa8ead236b55400675 | glance  |

| cfca3361950644de990b52ad341a06f0 | admin   |

+----------------------------------+---------+

[root@controller1 ~(keystone_admin)]#

OK, verification passed.

8.7 Update the [keystone_authtoken] section of the other components and verify functionality

Glance:

[keystone_authtoken]

auth_uri = http://10.192.45.220:5005

auth_url = http://10.192.45.220:35362

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = glance

password = 1

Cinder:

[keystone_authtoken]

auth_uri = http://10.192.45.220:5005

auth_url = http://10.192.45.220:35362

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = 1

Nova:

[keystone_authtoken]

auth_uri = http://10.192.45.220:5005

auth_url = http://10.192.45.220:35362

admin_user=nova

admin_password=1

admin_tenant_name=service

Neutron:

[keystone_authtoken]

auth_uri = http://10.192.45.220:5005

auth_url = http://10.192.45.220:35362

admin_tenant_name = service

admin_user = neutron

admin_password = 1

Leave the Horizon port unchanged for now.

Restart the services and verify functionality:

Nova:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute nodes:

systemctl restart openstack-nova-compute.service

Neutron:

Controller/network nodes:

systemctl restart openvswitch.service

systemctl restart neutron-server.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute nodes:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

Glance:

systemctl restart openstack-glance-api.service openstack-glance-registry.service

Cinder:

systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl restart openstack-cinder-volume.service target.service

Keystone:

systemctl restart httpd.service memcached.service

Neutron has many more places to change:

[root@controller1 neutron(keystone_admin)]# grep 5000 ./ -r

./api-paste.ini:auth_uri=http://10.192.44.148:5000/v2.0

./metadata_agent.ini:auth_url = http://10.192.44.148:5000/v2.0

./plugins/ml2/ml2_conf_fslsdn.ini:# e.g.: http://127.0.0.1:5000/v2.0/

[root@controller1 neutron(keystone_admin)]# grep 35357 ./ -r

./api-paste.ini:identity_uri=http://10.192.44.148:35357

After the change:

[root@controller1 neutron(keystone_admin)]# grep 5000 ./ -r

./plugins/ml2/ml2_conf_fslsdn.ini:# e.g.: http://127.0.0.1:5000/v2.0/

[root@controller1 neutron(keystone_admin)]# grep 5005 ./ -r

./api-paste.ini:auth_uri=http://10.192.45.220:5005/v2.0

./neutron.conf:nova_admin_auth_url=http://10.192.45.220:5005/v2.0

./neutron.conf:auth_uri = http://10.192.45.220:5005

./metadata_agent.ini:auth_url = http://10.192.45.220:5005/v2.0

[root@controller1 neutron(keystone_admin)]# grep 35357 ./ -r

[root@controller1 neutron(keystone_admin)]# grep 35362 ./ -r

./api-paste.ini:identity_uri=http://10.192.45.220:35362

./neutron.conf:auth_url = http://10.192.45.220:35362

8.8 Basic functional verification

All functions verified successfully.

Delete the test networks, routers, volumes, and instances.

Back up the databases and the configurations!

8.9 Back up the configuration and databases

[root@controller1 mysql_bak_keystone_ha_ok(keystone_admin)]# mysqldump -uroot -p1 nova > nova.sql

[root@controller1 mysql_bak_keystone_ha_ok(keystone_admin)]# mysqldump -uroot -p1 cinder > cinder.sql

[root@controller1 mysql_bak_keystone_ha_ok(keystone_admin)]# mysqldump -uroot -p1 keystone > keystone.sql

[root@controller1 mysql_bak_keystone_ha_ok(keystone_admin)]# mysqldump -uroot -p1 neutron > neutron.sql

[root@controller1 mysql_bak_keystone_ha_ok(keystone_admin)]# mysqldump -uroot -p1 glance > glance.sql

[root@controller1 mysql_bak_keystone_ha_ok(keystone_admin)]#
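The five dumps are one loop; a sketch that prints the commands rather than running them (the password and backup directory are site-specific):

```shell
# Print one mysqldump command per OpenStack database:
for db in nova cinder keystone neutron glance; do
  echo "mysqldump -uroot -p1 $db > $db.sql"
done
```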

9. Glance high availability

9.1 Install Glance on controller2

yum install openstack-glance python-glance python-glanceclient

Configuration:

Copy the Glance configuration over from controller1.

9.2 Configure RabbitMQ

./glance-api.conf:rabbit_host = 10.192.44.148

./glance-api.conf:rabbit_hosts = "10.192.44.148:5672,10.192.44.150:5672"

9.3 Configure MySQL

./glance-api.conf:#connection=mysql://glance:1@10.192.44.148/glance

./glance-api.conf:connection=mysql://glance:1@10.192.45.220/glance

9.4 Configure Keystone

./glance-api.conf:auth_uri = http://10.192.45.220:5005

./glance-api.conf:auth_url = http://10.192.45.220:35362

9.5 Configure Ceph storage

[glance_store]

default_store = rbd

stores = rbd

rbd_store_pool = image

rbd_store_user = glance

rbd_store_ceph_conf=/etc/ceph/ceph.conf

rbd_store_chunk_size = 8

9.6 HAProxy configuration

listen glance_api_cluster

bind 10.192.45.220:9297

balance source

option tcpka

option httpchk

option tcplog

server controller1 10.192.44.148:9292 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:9292 check inter 2000 rise 2 fall 5

listen glance_registry_cluster

bind 10.192.45.220:9196

balance source

option tcpka

option tcplog

server controller1 10.192.44.148:9191 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:9191 check inter 2000 rise 2 fall 5

Restart HAProxy and check:

[root@controller1 haproxy(keystone_admin)]# ss -nalp | grep 929

tcp LISTEN 0 128 *:9292 *:* users:(("glance-api",13197,4),("glance-api",13196,4),("glance-api",13124,4))

tcp LISTEN 0 128 10.192.45.220:9297 *:* users:(("haproxy",4001,10))

9.7 Database change: delete the old endpoint and create a new one at vip:9297

Delete the old endpoint:

[root@controller1 ~(keystone_admin)]# openstack endpoint list

+----------------------------------+-----------+--------------+--------------+

| ID | Region | Service Name | Service Type |

+----------------------------------+-----------+--------------+--------------+

| 49a032e19f9841b381e795f60051f131 |RegionOne | glance | image |

| 1f7c4d63eafa483c8c0942bf80302e98 |RegionOne | keystone | identity |

| 7d19c203fdc7495fbfd0b01d9bc6203c |RegionOne | cinder | volume |

| c34d670ee15b47bda43830a48e9c4ef2 | RegionOne| nova | compute |

| 63fa679e443a4249a96a86ff17387b9f |RegionOne | neutron | network |

| 7fd7d16a27d74eeea3a9df764d3e0a74 |RegionOne | cinderv2 | volumev2 |

+----------------------------------+-----------+--------------+--------------+

[root@controller1 ~(keystone_admin)]# openstack endpoint delete 49a032e19f9841b381e795f60051f131

[root@controller1 ~(keystone_admin)]# openstack endpoint list

+----------------------------------+-----------+--------------+--------------+

| ID | Region | Service Name | Service Type |

+----------------------------------+-----------+--------------+--------------+

| 7d19c203fdc7495fbfd0b01d9bc6203c |RegionOne | cinder | volume |

| c34d670ee15b47bda43830a48e9c4ef2 |RegionOne | nova | compute |

| 7fd7d16a27d74eeea3a9df764d3e0a74 |RegionOne | cinderv2 | volumev2 |

| 1f7c4d63eafa483c8c0942bf80302e98 |RegionOne | keystone | identity |

| 63fa679e443a4249a96a86ff17387b9f |RegionOne | neutron | network |

+----------------------------------+-----------+--------------+--------------+

Add the new endpoint, using vip:9297:

openstack endpoint create \

--publicurl http://10.192.45.220:9297 \

--internalurl http://10.192.45.220:9297 \

--adminurl http://10.192.45.220:9297 \

--region RegionOne \

image

[root@controller1 ~(keystone_admin)]# openstack endpoint create --publicurl http://10.192.45.220:9297 --internalurl http://10.192.45.220:9297 --adminurl http://10.192.45.220:9297 --region RegionOne image

+--------------+----------------------------------+

| Field | Value |

+--------------+----------------------------------+

| adminurl | http://10.192.45.220:9297 |

| id | 83889920c32f476a98fef1594e4c47b9 |

| internalurl | http://10.192.45.220:9297 |

| publicurl | http://10.192.45.220:9297 |

| region | RegionOne |

| service_id | a0c905098446491cbb2f948285364c43 |

| service_name | glance |

| service_type | image |

+--------------+----------------------------------+

9.8 Update the configuration of services that use Glance

9.8.1 Nova (note: every node runs Nova)

[glance]

#api_servers=10.192.44.148:9292

api_servers=10.192.45.220:9297

9.8.2 Cinder

Cinder normally does not use Glance; it is only needed when booting a volume from an image, so it is left unconfigured for now.

Currently Cinder is only used for data disks.

[DEFAULT]

glance_host = 10.192.44.148

Restart the services:

Glance:

systemctl enable openstack-glance-api.service openstack-glance-registry.service

systemctl restart openstack-glance-api.service openstack-glance-registry.service

Nova:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute nodes:

systemctl restart openstack-nova-compute.service

9.9 Verification: upload an image and create an instance

Image upload OK.

9.10 Single-node failure verification

Stop the Glance services on 10.192.44.148:

systemctl stop openstack-glance-api.service openstack-glance-registry.service

[root@controller1 nova(keystone_admin)]# systemctl stop openstack-glance-api.service openstack-glance-registry.service

[root@controller1 nova(keystone_admin)]# ps -A | grep glance

[root@controller1 nova(keystone_admin)]#

Verify the upload function:

Upload succeeded; high availability achieved!

Restart the services on 10.192.44.148:

systemctl restart openstack-glance-api.service openstack-glance-registry.service

9.11 Export and back up the databases

10 Cinder high availability

10.1 Install Cinder on controller2 and compute2

yum install openstack-cinder targetcli python-oslo-db python-oslo-log MySQL-python

Copy the configuration over from controller1.

10.2 Configure RabbitMQ

rabbit_host = 10.192.44.148

rabbit_port = 5672

rabbit_hosts = "10.192.44.148:5672,10.192.44.150:5672"

rabbit_userid = openstack

rabbit_password = 1

10.3 Configure MySQL

[database]

#connection = mysql://cinder:1@10.192.44.148/cinder

connection = mysql://cinder:1@10.192.45.220/cinder

10.4 Configure Keystone

[keystone_authtoken]

#auth_uri = http://10.192.44.148:5000

#auth_url = http://10.192.44.148:35357

auth_uri = http://10.192.45.220:5005

auth_url = http://10.192.45.220:35362

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = 1

10.5 Configure Ceph storage

[ceph]

volume_driver = cinder.volume.drivers.rbd.RBDDriver

rbd_pool = volumes

rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_flatten_volume_from_snapshot = false

rbd_max_clone_depth = 5

rbd_store_chunk_size = 4

rados_connect_timeout = -1

glance_api_version = 2

10.6 HAProxy configuration: proxy cinder-api; cinder-volume does not need it

listen cinder_api

bind 10.192.45.220:8781

balance source

option tcpka

option tcplog

server controller1 10.192.44.148:8776 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:8776 check inter 2000 rise 2 fall 5

10.7 Database change: point the Cinder endpoints at vip:8781

Delete the old Cinder endpoints:

[root@controller1 haproxy(keystone_admin)]# openstack endpoint list

+----------------------------------+-----------+--------------+--------------+

| ID | Region | Service Name | Service Type |

+----------------------------------+-----------+--------------+--------------+

| 1f7c4d63eafa483c8c0942bf80302e98 |RegionOne | keystone | identity |

| 7d19c203fdc7495fbfd0b01d9bc6203c |RegionOne | cinder | volume |

| c34d670ee15b47bda43830a48e9c4ef2 |RegionOne | nova | compute |

| 63fa679e443a4249a96a86ff17387b9f |RegionOne | neutron | network |

| 7fd7d16a27d74eeea3a9df764d3e0a74 |RegionOne | cinderv2 | volumev2 |

| 83889920c32f476a98fef1594e4c47b9 |RegionOne | glance | image |

+----------------------------------+-----------+--------------+--------------+

[root@controller1 haproxy(keystone_admin)]# openstack endpoint delete 7d19c203fdc7495fbfd0b01d9bc6203c

[root@controller1 haproxy(keystone_admin)]# openstack endpoint delete 7fd7d16a27d74eeea3a9df764d3e0a74

Create the new endpoints:

openstack endpoint create \

--publicurl http://10.192.45.220:8781/v2/%\(tenant_id\)s \

--internalurl http://10.192.45.220:8781/v2/%\(tenant_id\)s \

--adminurl http://10.192.45.220:8781/v2/%\(tenant_id\)s \

--region RegionOne \

volume

openstack endpoint create \

--publicurl http://10.192.45.220:8781/v2/%\(tenant_id\)s \

--internalurl http://10.192.45.220:8781/v2/%\(tenant_id\)s \

--adminurl http://10.192.45.220:8781/v2/%\(tenant_id\)s \

--region RegionOne \

volumev2
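The backslashes around the parentheses only protect them from the shell (unescaped parentheses in an argument are a syntax error in bash); what Keystone stores is the literal substitution template. A quick check of what the shell actually passes:

```shell
# The escaped form expands to the literal %(tenant_id)s template:
printf '%s\n' http://10.192.45.220:8781/v2/%\(tenant_id\)s
# http://10.192.45.220:8781/v2/%(tenant_id)s
```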

[root@controller1 haproxy(keystone_admin)]# openstack endpoint create --publicurl http://10.192.45.220:8781/v2/%\(tenant_id\)s --internalurl http://10.192.45.220:8781/v2/%\(tenant_id\)s --adminurl http://10.192.45.220:8781/v2/%\(tenant_id\)s --region RegionOne volume

+--------------+--------------------------------------------+

| Field | Value |

+--------------+--------------------------------------------+

| adminurl     | http://10.192.45.220:8781/v2/%(tenant_id)s |

| id           | ae4b639f6ec448839ffca79fd95425fd           |

| internalurl  | http://10.192.45.220:8781/v2/%(tenant_id)s |

| publicurl    | http://10.192.45.220:8781/v2/%(tenant_id)s |

| region | RegionOne |

| service_id | 3bfefe0409ba4b658d14071d3dbae348 |

| service_name | cinder |

| service_type | volume |

+--------------+--------------------------------------------+

[root@controller1 haproxy(keystone_admin)]# openstack endpoint create --publicurl http://10.192.45.220:8781/v2/%\(tenant_id\)s --internalurl http://10.192.45.220:8781/v2/%\(tenant_id\)s --adminurl http://10.192.45.220:8781/v2/%\(tenant_id\)s --region RegionOne volumev2

+--------------+--------------------------------------------+

| Field | Value |

+--------------+--------------------------------------------+

| adminurl     | http://10.192.45.220:8781/v2/%(tenant_id)s |

| id           | 3637c1235b02437c9e47f96324702433           |

| internalurl  | http://10.192.45.220:8781/v2/%(tenant_id)s |

| publicurl    | http://10.192.45.220:8781/v2/%(tenant_id)s |

| region | RegionOne |

| service_id | 4094a5b3cf5546f2b5de7ceac3229160 |

| service_name | cinderv2 |

| service_type | volumev2 |

+--------------+--------------------------------------------+

10.8 Update the configuration of services that use Cinder

Nothing needs changing; even Nova and Glance go through the Cinder API endpoint.

10.9 Verification

Restart the Cinder services on controller1 and controller2:

Cinder:

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl enable openstack-cinder-volume.service target.service

systemctl restart openstack-cinder-volume.service target.service

Restart the cinder-volume service on compute1 and compute2:

systemctl enable openstack-cinder-volume.service target.service

systemctl restart openstack-cinder-volume.service target.service

Restart HAProxy:

compute2 additionally needs the following packages installed:

# yum install python-oslo-service python-oslo-vmware python-oslo-log python-oslo-policy python-oslo-versionedobjects python2-oslo-reports python2-oslo-context python2-oslo-serialization

# yum install python2-oslo-config python-oslo-concurrency python-oslo-db python2-oslo-utils python-oslo-middleware python-oslo-rootwrap python2-oslo-i18n python-oslo-messaging

Check the services:

[root@controller1 mysql_bak_cinder_ha_ok(keystone_admin)]# cinder service-list

+------------------+------------------+------+---------+-------+----------------------------+-----------------+

| Binary           | Host             | Zone | Status  | State | Updated_at                 | Disabled Reason |

+------------------+------------------+------+---------+-------+----------------------------+-----------------+

| cinder-scheduler | controller1      | nova | enabled | up    | 2016-06-04T02:54:03.000000 | -               |

| cinder-scheduler | controller2      | nova | enabled | up    | 2016-06-04T02:54:00.000000 | -               |

| cinder-volume    | compute1         | nova | enabled | down  | -                          | -               |

| cinder-volume    | compute1@ceph    | nova | enabled | up    | 2016-06-04T02:53:58.000000 | -               |

| cinder-volume    | compute1@lvm     | nova | enabled | down  | 2016-05-27T06:53:18.000000 | -               |

| cinder-volume    | compute2@ceph    | nova | enabled | up    | 2016-06-04T02:53:58.000000 | -               |

| cinder-volume    | controller1@ceph | nova | enabled | up    | 2016-06-04T02:53:59.000000 | -               |

| cinder-volume    | controller1@lvm  | nova | enabled | down  | 2016-05-27T03:55:06.000000 | -               |

| cinder-volume    | controller2@ceph | nova | enabled | up    | 2016-06-04T02:54:05.000000 | -               |

+------------------+------------------+------+---------+-------+----------------------------+-----------------+
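The listing still shows stale `down` rows for the old lvm backends; they can be taken out of scheduling with `cinder service-disable` (a hypothetical follow-up; host names read from the table). The host column is `host@backend`:

```shell
# Hypothetical cleanup (not run here):
# cinder service-disable controller1@lvm cinder-volume
# cinder service-disable compute1@lvm cinder-volume
# Splitting a host column value into host and backend in shell:
host="controller1@lvm"
echo "${host%@*} ${host#*@}"   # controller1 lvm
```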

Create a volume:

Via

10.10 Export a database backup

(1) Export the database data

(2) Back up all configuration on the four nodes

11 Nova high availability

controller2: nova-api, nova-scheduler, …, nova-compute + neutron openvswitch agent

compute2: nova-compute + neutron openvswitch agent

11.1 Install the Nova services (except nova-compute)

Install the Nova services on controller2:

# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

11.2 Install nova-compute

Reference: PART 5, section 5.14:

yum install sysfsutils

yum install python-nova

yum install python-novaclient

yum install python-cinderclient

yum install --downloadonly --downloaddir=/root/rpm_nova_compute openstack-nova-compute

rpm -ivh --force --nodeps libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm

rpm -ivh --force --nodeps python-libguestfs-1.28.1-1.55.el7.centos.x86_64.rpm

rpm -ivh openstack-nova-common-12.0.1-1.el7.noarch.rpm

rpm -ivh openstack-nova-compute-12.0.1-1.el7.noarch.rpm

11.3 Configure the Nova services

Copy the Nova configuration from controller1:

Modify the following:

[root@controller2 nova]# grep 192 ./ -r|grep -v '#'

./nova.conf:metadata_host=10.192.44.148

./nova.conf:sql_connection=mysql://nova:1@10.192.45.220/nova

./nova.conf:novncproxy_base_url=http://10.192.44.148:6080/vnc_auto.html

./nova.conf:api_servers=10.192.45.220:9297

./nova.conf:auth_uri = http://10.192.45.220:5005

./nova.conf:auth_url = http://10.192.45.220:35362

./nova.conf:url=http://10.192.44.148:9696

./nova.conf:admin_auth_url=http://10.192.45.220:5005/v2.0

./nova.conf:rabbit_host=10.192.44.148

./nova.conf:rabbit_hosts = "10.192.44.148:5672,10.192.44.150:5672"

Change to:

./nova.conf:metadata_host=10.192.44.150 (change to this node)

./nova.conf:sql_connection=mysql://nova:1@10.192.45.220/nova (no change needed; MySQL is already HA)

./nova.conf:novncproxy_base_url=http://10.192.44.148:6080/vnc_auto.html (Horizon; no change needed yet)

./nova.conf:api_servers=10.192.45.220:9297 (no change needed; Glance is already HA)

./nova.conf:auth_uri = http://10.192.45.220:5005 (no change needed; Keystone is already HA)

./nova.conf:auth_url = http://10.192.45.220:35362 (no change needed; Keystone is already HA)

./nova.conf:url=http://10.192.44.148:9696 (Neutron; no change needed yet)

./nova.conf:admin_auth_url=http://10.192.45.220:5005/v2.0 (no change needed; Keystone is already HA)

./nova.conf:rabbit_host=10.192.44.148 (no change needed; RabbitMQ is already HA)

./nova.conf:rabbit_hosts = "10.192.44.148:5672,10.192.44.150:5672" (no change needed; RabbitMQ is already HA)

Only metadata_host needs to change here; the VNC and Neutron parts will be configured during the Horizon and Neutron HA setup.
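Since only one key changes, the edit can be scripted. A minimal sketch, assuming the file uses `metadata_host=<ip>` with no surrounding spaces, exactly as the grep output above shows; demonstrated here on a scratch copy — point CONF at /etc/nova/nova.conf on the real node.

```shell
# Demonstration on a scratch copy of the relevant nova.conf lines.
CONF=$(mktemp)
printf 'metadata_host=10.192.44.148\nsql_connection=mysql://nova:1@10.192.45.220/nova\n' > "$CONF"

# Point metadata_host at this node's management IP (controller2 here).
sed -i 's|^metadata_host=.*|metadata_host=10.192.44.150|' "$CONF"

grep '^metadata_host=' "$CONF"   # → metadata_host=10.192.44.150
```

The same one-liner works for any future per-node key; everything else in the copied file already points at the VIP.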

Start the services:

# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Startup errors occurred; restart the services one at a time to isolate the failure (site-packages had been deleted by mistake earlier, and the restore may still have problems):

# systemctl restart openstack-nova-api.service

# systemctl restart openstack-nova-cert.service

# systemctl restart openstack-nova-consoleauth.service

# systemctl restart openstack-nova-scheduler.service

# systemctl restart openstack-nova-conductor.service

# systemctl restart openstack-nova-novncproxy.service

All services start normally.
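The one-at-a-time restarts above can also be scripted so the failing unit is obvious. A hedged sketch (the DRY_RUN flag is an addition for safety: DRY_RUN=1 only prints the commands; set DRY_RUN=0 on the real node):

```shell
DRY_RUN=${DRY_RUN:-1}
services="openstack-nova-api openstack-nova-cert openstack-nova-consoleauth \
openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy"

for svc in $services; do
  if [ "$DRY_RUN" = 1 ]; then
    echo "systemctl restart ${svc}.service"              # dry run: show what would run
  else
    systemctl restart "${svc}.service" || echo "FAILED: ${svc}" >&2
  fi
done
```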

[root@controller1 tmp(keystone_admin)]# nova service-list
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id |      Binary      |     Host    |   Zone   |  Status | State |         Updated_at         | Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | controller1 | internal | enabled |   up  | 2016-06-04T04:02:58.000000 |        -        |
| 2  | nova-consoleauth | controller1 | internal | enabled |   up  | 2016-06-04T04:02:52.000000 |        -        |
| 3  | nova-conductor   | controller1 | internal | enabled |   up  | 2016-06-04T04:02:57.000000 |        -        |
| 4  | nova-scheduler   | controller1 | internal | enabled |   up  | 2016-06-04T04:02:56.000000 |        -        |
| 5  | nova-compute     | compute1    | nova     | enabled |   up  | 2016-06-04T04:02:53.000000 |        -        |
| 6  | nova-compute     | controller1 | nova     | enabled |   up  | 2016-06-04T04:02:53.000000 |        -        |
| 7  | nova-cert        | controller2 | internal | enabled |   up  | 2016-06-04T04:03:01.000000 |        -        |
| 8  | nova-consoleauth | controller2 | internal | enabled |   up  | 2016-06-04T04:02:56.000000 |        -        |
| 10 | nova-scheduler   | controller2 | internal | enabled |   up  | 2016-06-04T04:03:02.000000 |        -        |
| 11 | nova-conductor   | controller2 | internal | enabled |   up  | 2016-06-04T04:02:57.000000 |        -        |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

11.4 Upgrade qemu

yum install qemu qemu-img

11.5 Upgrade libvirt [note: replace libdevmapper]

Install libvirtd manually: see 6.2.7; note that the startup script must add -l (--listen).

Install libvirt manually:

yum install --downloadonly --downloaddir=/root/libvirt libvirt libvirtd

Remove the old packages:

rpm -e --nodeps libvirt-client libvirt-daemon-driver-nodedev libvirt-glib libvirt-daemon-config-network libvirt-daemon-driver-nwfilter libvirt-devel libvirt-daemon-driver-qemu libvirt-daemon-driver-interface libvirt-gobject libvirt-daemon-driver-storage libvirt-daemon-driver-network libvirt-daemon-config-nwfilter libvirt libvirt-daemon-driver-secret libvirt-gconfig libvirt-java-devel libvirt-daemon-kvm libvirt-docs libvirt-daemon-driver-lxc libvirt-python libvirt-daemon libvirt-java

Install the other dependencies:

yum install systemd

Install:

rpm -qa | grep libvirt

# rpm -ivh libvirt-client-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-network-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-nodedev-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-secret-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-nwfilter-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-storage-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-lxc-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-interface-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-qemu-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-config-nwfilter-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-config-network-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-kvm-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-docs-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-devel-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-python-1.2.17-2.el7.x86_64.rpm

# rpm -ivh dracut-033-359.el7.x86_64.rpm

# rpm -ivh dracut-config-rescue-033-359.el7.x86_64.rpm

# rpm -ivh dracut-network-033-359.el7.x86_64.rpm

# rpm -ivh initscripts-9.49.30-1.el7.x86_64.rpm

# rpm -ivh kmod-20-5.el7.x86_64.rpm

# rpm -ivh libgudev1-219-19.el7.x86_64.rpm

# rpm -ivh libgudev1-devel-219-19.el7.x86_64.rpm

(1) First replace the libdevmapper library

(2) Modify the libvirtd configuration

libvirtd.conf:

[root@controller2 libvirt]# vi libvirtd.conf

host_uuid="c92c64aa-d477-4a85-883f-96d9e914e4dc"

listen_tls = 0

listen_tcp = 1

tls_port = "16514"

tcp_port = "16509"

auth_tcp = "none"

Enable at boot:

systemctl enable libvirtd.service

Modify the startup script:

libvirtd.service

ExecStart=/usr/sbin/libvirtd -d --listen $LIBVIRTD_ARGS

Start libvirt:

systemctl restart libvirtd.service

(3) Verify with virsh -c

[root@compute2 system]# virsh -c qemu+tcp://localhost/system

Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #

11.6 Start nova-compute

systemctl enable openstack-nova-compute.service

systemctl restart openstack-nova-compute.service

[root@controller1 ~(keystone_admin)]# nova service-list
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id |      Binary      |     Host    |   Zone   |  Status | State |         Updated_at         | Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | controller1 | internal | enabled |   up  | 2016-06-04T06:15:18.000000 |        -        |
| 2  | nova-consoleauth | controller1 | internal | enabled |   up  | 2016-06-04T06:15:22.000000 |        -        |
| 3  | nova-conductor   | controller1 | internal | enabled |   up  | 2016-06-04T06:15:23.000000 |        -        |
| 4  | nova-scheduler   | controller1 | internal | enabled |   up  | 2016-06-04T06:15:16.000000 |        -        |
| 5  | nova-compute     | compute1    | nova     | enabled |   up  | 2016-06-04T06:15:23.000000 |        -        |
| 6  | nova-compute     | controller1 | nova     | enabled |   up  | 2016-06-04T06:15:16.000000 |        -        |
| 7  | nova-cert        | controller2 | internal | enabled |   up  | 2016-06-04T06:15:21.000000 |        -        |
| 8  | nova-consoleauth | controller2 | internal | enabled |   up  | 2016-06-04T06:15:17.000000 |        -        |
| 10 | nova-scheduler   | controller2 | internal | enabled |   up  | 2016-06-04T06:15:22.000000 |        -        |
| 11 | nova-conductor   | controller2 | internal | enabled |   up  | 2016-06-04T06:15:18.000000 |        -        |
| 13 | nova-compute     | controller2 | nova     | enabled |   up  | 2016-06-04T06:15:17.000000 |        -        |
| 15 | nova-compute     | compute2    | nova     | enabled |   up  | 2016-06-04T06:15:21.000000 |        -        |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

11.7 Install the neutron openvswitch agent

On controller2 and compute2:

# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

Copy the existing /etc/neutron directory over to compute2:

Only the following needs to change.

Settings on controller2 (10.192.44.150):

Set eth2 as the tunnel interface, with IP 192.168.0.150

Settings on compute2 (10.192.44.151):

Set eth2 as the tunnel interface, with IP 192.168.0.151

[root@compute1 network-scripts]# cat ifcfg-eth2

DEVICE=eth2

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=192.168.0.150 # 151 on compute2

NETMASK=255.255.255.0

[root@controller2 network-scripts]# ifconfig eth2

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

[root@compute2 network-scripts]# ifconfig eth2

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

        inet 192.168.0.151 netmask 255.255.255.0 broadcast 192.168.0.255

Modify:

./plugins/ml2/openvswitch_agent.ini:local_ip=192.168.0.149

Change local_ip to this node's tunnel IP:

[ovs]

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip = 192.168.0.151

enable_tunneling=True

[agent]

polling_interval = 2

tunnel_types = vxlan

vxlan_udp_port = 4789

l2_population = False

arp_responder = False

prevent_arp_spoofing = True

enable_distributed_routing = False

drop_flows_on_start=False

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
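The only per-node value in this file is local_ip. Per the IP plan, the tunnel address reuses the management IP's last octet on 192.168.0.0/24, so it can be derived rather than typed. A sketch on a scratch copy (an assumption for illustration — point INI at /etc/neutron/plugins/ml2/openvswitch_agent.ini on the real node):

```shell
MGMT_IP=10.192.44.151                  # this node's management IP (compute2)
TUN_IP="192.168.0.${MGMT_IP##*.}"      # reuse the last octet on the tunnel net

INI=$(mktemp)
printf '[ovs]\nlocal_ip =192.168.0.149\n' > "$INI"   # stale value copied from compute1

sed -i "s|^local_ip.*|local_ip = ${TUN_IP}|" "$INI"
grep '^local_ip' "$INI"   # → local_ip = 192.168.0.151
```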

Start the services:

# systemctl enable openvswitch.service

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig

sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service

# systemctl restart openvswitch.service

# systemctl restart openstack-nova-compute.service

# systemctl enable neutron-openvswitch-agent.service

# systemctl restart neutron-openvswitch-agent.service

Check the network services now:

[root@controller1 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
|                  id                  |     agent_type     |     host    | alive | admin_state_up |           binary          |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
| 06468c16-3cc9-40d5-bcce-d7b19aa52954 | Open vSwitch agent | controller2 |  :-)  |      True      | neutron-openvswitch-agent |
| 1746662a-081c-4800-b371-479e670fbb20 | Metadata agent     | controller1 |  :-)  |      True      | neutron-metadata-agent    |
| 2ead14e3-6d3d-4e1c-9e07-7665a2632565 | L3 agent           | controller1 |  :-)  |      True      | neutron-l3-agent          |
| 5749371b-df3e-4a51-a7f9-279ee2b8666a | Open vSwitch agent | compute2    |  :-)  |      True      | neutron-openvswitch-agent |
| 96820906-bc31-4fcf-a473-10a6d6865b2a | Open vSwitch agent | compute1    |  :-)  |      True      | neutron-openvswitch-agent |
| ad55ffa2-dd19-4cee-b5fc-db4bc60b796b | DHCP agent         | controller1 |  :-)  |      True      | neutron-dhcp-agent        |
| d264e9b0-c0c1-4e13-9502-43c248127dff | Open vSwitch agent | controller1 |  :-)  |      True      | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

[root@controller1 ~(keystone_admin)]# nova service-list
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id |      Binary      |     Host    |   Zone   |  Status | State |         Updated_at         | Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | controller1 | internal | enabled |   up  | 2016-06-04T06:47:48.000000 |        -        |
| 2  | nova-consoleauth | controller1 | internal | enabled |   up  | 2016-06-04T06:47:52.000000 |        -        |
| 3  | nova-conductor   | controller1 | internal | enabled |   up  | 2016-06-04T06:47:54.000000 |        -        |
| 4  | nova-scheduler   | controller1 | internal | enabled |   up  | 2016-06-04T06:47:46.000000 |        -        |
| 5  | nova-compute     | compute1    | nova     | enabled |   up  | 2016-06-04T06:47:46.000000 |        -        |
| 6  | nova-compute     | controller1 | nova     | enabled |   up  | 2016-06-04T06:47:54.000000 |        -        |
| 7  | nova-cert        | controller2 | internal | enabled |   up  | 2016-06-04T06:47:51.000000 |        -        |
| 8  | nova-consoleauth | controller2 | internal | enabled |   up  | 2016-06-04T06:47:57.000000 |        -        |
| 10 | nova-scheduler   | controller2 | internal | enabled |   up  | 2016-06-04T06:47:52.000000 |        -        |
| 11 | nova-conductor   | controller2 | internal | enabled |   up  | 2016-06-04T06:47:48.000000 |        -        |
| 13 | nova-compute     | controller2 | nova     | enabled |   up  | 2016-06-04T06:47:54.000000 |        -        |
| 15 | nova-compute     | compute2    | nova     | enabled |   up  | 2016-06-04T06:47:52.000000 |        -        |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

[root@controller1 ~(keystone_admin)]# cinder service-list
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |       Host       | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1      | nova | enabled |   up  | 2016-06-04T06:48:33.000000 |        -        |
| cinder-scheduler | controller2      | nova | enabled |   up  | 2016-06-04T06:48:31.000000 |        -        |
| cinder-volume    | compute1         | nova | enabled |  down |              -             |        -        |
| cinder-volume    | compute1@ceph    | nova | enabled |   up  | 2016-06-04T06:48:28.000000 |        -        |
| cinder-volume    | compute1@lvm     | nova | enabled |  down | 2016-05-27T06:53:18.000000 |        -        |
| cinder-volume    | compute2@ceph    | nova | enabled |   up  | 2016-06-04T06:48:39.000000 |        -        |
| cinder-volume    | controller1@ceph | nova | enabled |   up  | 2016-06-04T06:48:30.000000 |        -        |
| cinder-volume    | controller1@lvm  | nova | enabled |  down | 2016-05-27T03:55:06.000000 |        -        |
| cinder-volume    | controller2@ceph | nova | enabled |   up  | 2016-06-04T06:48:35.000000 |        -        |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+

11.8 Verify Nova (pre-HA), save configuration and database

Functionality is normal. Change the following, otherwise the VNC console cannot connect:

vncserver_proxyclient_address=compute1

systemctl restart openstack-nova-compute.service

Save the database and configuration at this point.

11.9 Configure nova-api high availability: use vip:8779

11.9.1 Modify haproxy.cfg

listen nova_compute_api

bind 10.192.45.220:8779

balance source

option tcpka

option tcplog

server controller1 10.192.44.148:8774 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:8774 check inter 2000 rise 2 fall 5

Restart haproxy:

systemctl restart haproxy

11.9.2 Update the database: change the endpoint to vip:8779

Delete the old one:

[root@controller1 ~(keystone_admin)]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+
|                ID                |   Region  | Service Name | Service Type |
+----------------------------------+-----------+--------------+--------------+
| ae4b639f6ec448839ffca79fd95425fd | RegionOne | cinder       | volume       |
| 1f7c4d63eafa483c8c0942bf80302e98 | RegionOne | keystone     | identity     |
| c34d670ee15b47bda43830a48e9c4ef2 | RegionOne | nova         | compute      |
| 63fa679e443a4249a96a86ff17387b9f | RegionOne | neutron      | network      |
| 83889920c32f476a98fef1594e4c47b9 | RegionOne | glance       | image        |
| 3637c1235b02437c9e47f96324702433 | RegionOne | cinderv2     | volumev2     |
+----------------------------------+-----------+--------------+--------------+

[root@controller1 ~(keystone_admin)]# openstack endpoint delete c34d670ee15b47bda43830a48e9c4ef2

openstack endpoint create \
  --publicurl http://10.192.45.220:8779/v2/%\(tenant_id\)s \
  --internalurl http://10.192.45.220:8779/v2/%\(tenant_id\)s \
  --adminurl http://10.192.45.220:8779/v2/%\(tenant_id\)s \
  --region RegionOne \
  compute

# openstack endpoint create --publicurl http://10.192.45.220:8779/v2/%\(tenant_id\)s --internalurl http://10.192.45.220:8779/v2/%\(tenant_id\)s --adminurl http://10.192.45.220:8779/v2/%\(tenant_id\)s --region RegionOne compute

+--------------+--------------------------------------------+
|    Field     |                   Value                    |
+--------------+--------------------------------------------+
| adminurl     | http://10.192.45.220:8779/v2/%(tenant_id)s |
| id           | ddd601fd0c40444fa282f84f4fb9ca0c           |
| internalurl  | http://10.192.45.220:8779/v2/%(tenant_id)s |
| publicurl    | http://10.192.45.220:8779/v2/%(tenant_id)s |
| region       | RegionOne                                  |
| service_id   | f82db038024746449b5b6be918b826f0           |
| service_name | nova                                       |
| service_type | compute                                    |
+--------------+--------------------------------------------+
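The delete-then-recreate dance above can be wrapped in one script. A sketch using this deployment's IDs and URL (the `run` helper and DRY_RUN flag are additions for safety: with DRY_RUN=1 it only prints the commands, so nothing is assumed about the CLI environment; set DRY_RUN=0 with an admin keystonerc sourced to execute for real):

```shell
DRY_RUN=${DRY_RUN:-1}
OLD_ID=c34d670ee15b47bda43830a48e9c4ef2
URL='http://10.192.45.220:8779/v2/%(tenant_id)s'

# Print in dry-run mode, execute otherwise.
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run openstack endpoint delete "$OLD_ID"
run openstack endpoint create --publicurl "$URL" --internalurl "$URL" \
    --adminurl "$URL" --region RegionOne compute
```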

Restart all of the Nova services:

Nova:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute nodes:

systemctl restart openstack-nova-compute.service

[root@controller1 ~(keystone_admin)]# nova service-list
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id |      Binary      |     Host    |   Zone   |  Status | State |         Updated_at         | Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | controller1 | internal | enabled |   up  | 2016-06-04T08:11:44.000000 |        -        |
| 2  | nova-consoleauth | controller1 | internal | enabled |   up  | 2016-06-04T08:11:44.000000 |        -        |
| 3  | nova-conductor   | controller1 | internal | enabled |   up  | 2016-06-04T08:11:44.000000 |        -        |
| 4  | nova-scheduler   | controller1 | internal | enabled |   up  | 2016-06-04T08:11:44.000000 |        -        |
| 5  | nova-compute     | compute1    | nova     | enabled |   up  | 2016-06-04T08:11:47.000000 |        -        |
| 6  | nova-compute     | controller1 | nova     | enabled |   up  | 2016-06-04T08:11:51.000000 |        -        |
| 7  | nova-cert        | controller2 | internal | enabled |   up  | 2016-06-04T08:11:42.000000 |        -        |
| 8  | nova-consoleauth | controller2 | internal | enabled |   up  | 2016-06-04T08:11:42.000000 |        -        |
| 10 | nova-scheduler   | controller2 | internal | enabled |   up  | 2016-06-04T08:11:42.000000 |        -        |
| 11 | nova-conductor   | controller2 | internal | enabled |   up  | 2016-06-04T08:11:41.000000 |        -        |
| 13 | nova-compute     | controller2 | nova     | enabled |   up  | 2016-06-04T08:11:41.000000 |        -        |
| 15 | nova-compute     | compute2    | nova     | enabled |   up  | 2016-06-04T08:11:47.000000 |        -        |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

11.9.3 Update the configuration of related OpenStack components

Keystone: none

Glance: none

Cinder: none

Neutron:

[root@controller1 neutron(keystone_admin)]# grep 877 ./ -r

./neutron.conf:nova_url = http://10.192.44.148:8774/v2

./metadata_agent.ini:nova_metadata_port = 8775

Change Neutron's nova_url first; metadata will be handled later.

Change it to:

nova_url = http://10.192.45.220:8779/v2

11.10 Nova metadata API high availability (VIP:8780): used only by neutron-metadata-agent, not by the other agents

Should the nova EC2 and nova metadata APIs be made highly available? metadata: yes; EC2: no.

Modify haproxy:

listen nova_metadata_api

bind 10.192.45.220:8780

balance source

option tcpka

option tcplog

server controller1 10.192.44.148:8775 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:8775 check inter 2000 rise 2 fall 5
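Note the VIP port convention used throughout this guide: each haproxy frontend port is the backend port plus 5 (8774→8779, 8775→8780, 5000→5005, 35357→35362, and 9696→9701 below). A quick sanity check of the offsets:

```shell
# Each VIP frontend port = backend port + 5, per this guide's convention.
for be in 8774 8775 5000 35357 9696; do
  echo "backend ${be} -> vip port $((be + 5))"
done
```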

Modify the Neutron configuration (metadata_agent.ini):

Change to:

#nova_metadata_ip = 10.192.44.148

#nova_metadata_port = 8775

nova_metadata_ip = 10.192.45.220

nova_metadata_port = 8780

Restart haproxy:

systemctl restart haproxy

Restart Nova:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute nodes:

systemctl restart openstack-nova-compute.service

Restart Neutron:

Controller/network nodes:

systemctl restart openvswitch.service

systemctl restart neutron-server.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute nodes:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

11.11 Verify Nova high availability

11.11.1 Check service status

[root@controller1 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
|                  id                  |     agent_type     |     host    | alive | admin_state_up |           binary          |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
| 06468c16-3cc9-40d5-bcce-d7b19aa52954 | Open vSwitch agent | controller2 |  :-)  |      True      | neutron-openvswitch-agent |
| 1746662a-081c-4800-b371-479e670fbb20 | Metadata agent     | controller1 |  :-)  |      True      | neutron-metadata-agent    |
| 2ead14e3-6d3d-4e1c-9e07-7665a2632565 | L3 agent           | controller1 |  :-)  |      True      | neutron-l3-agent          |
| 5749371b-df3e-4a51-a7f9-279ee2b8666a | Open vSwitch agent | compute2    |  :-)  |      True      | neutron-openvswitch-agent |
| 96820906-bc31-4fcf-a473-10a6d6865b2a | Open vSwitch agent | compute1    |  :-)  |      True      | neutron-openvswitch-agent |
| ad55ffa2-dd19-4cee-b5fc-db4bc60b796b | DHCP agent         | controller1 |  :-)  |      True      | neutron-dhcp-agent        |
| d264e9b0-c0c1-4e13-9502-43c248127dff | Open vSwitch agent | controller1 |  :-)  |      True      | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

[root@controller1 ~(keystone_admin)]# nova service-list
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id |      Binary      |     Host    |   Zone   |  Status | State |         Updated_at         | Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | controller1 | internal | enabled |   up  | 2016-06-04T08:39:44.000000 |        -        |
| 2  | nova-consoleauth | controller1 | internal | enabled |   up  | 2016-06-04T08:39:44.000000 |        -        |
| 3  | nova-conductor   | controller1 | internal | enabled |   up  | 2016-06-04T08:39:44.000000 |        -        |
| 4  | nova-scheduler   | controller1 | internal | enabled |   up  | 2016-06-04T08:39:44.000000 |        -        |
| 5  | nova-compute     | compute1    | nova     | enabled |   up  | 2016-06-04T08:39:45.000000 |        -        |
| 6  | nova-compute     | controller1 | nova     | enabled |   up  | 2016-06-04T08:39:50.000000 |        -        |
| 7  | nova-cert        | controller2 | internal | enabled |   up  | 2016-06-04T08:39:53.000000 |        -        |
| 8  | nova-consoleauth | controller2 | internal | enabled |   up  | 2016-06-04T08:39:52.000000 |        -        |
| 10 | nova-scheduler   | controller2 | internal | enabled |   up  | 2016-06-04T08:39:52.000000 |        -        |
| 11 | nova-conductor   | controller2 | internal | enabled |   up  | 2016-06-04T08:39:52.000000 |        -        |
| 13 | nova-compute     | controller2 | nova     | enabled |   up  | 2016-06-04T08:39:54.000000 |        -        |
| 15 | nova-compute     | compute2    | nova     | enabled |   up  | 2016-06-04T08:39:46.000000 |        -        |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

[root@controller1 ~(keystone_admin)]# cinder service-list
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |       Host       | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller1      | nova | enabled |   up  | 2016-06-04T08:40:03.000000 |        -        |
| cinder-scheduler | controller2      | nova | enabled |   up  | 2016-06-04T08:40:11.000000 |        -        |
| cinder-volume    | compute1         | nova | enabled |  down |              -             |        -        |
| cinder-volume    | compute1@ceph    | nova | enabled |   up  | 2016-06-04T08:40:08.000000 |        -        |
| cinder-volume    | compute1@lvm     | nova | enabled |  down | 2016-05-27T06:53:18.000000 |        -        |
| cinder-volume    | compute2@ceph    | nova | enabled |   up  | 2016-06-04T08:40:09.000000 |        -        |
| cinder-volume    | controller1@ceph | nova | enabled |   up  | 2016-06-04T08:40:10.000000 |        -        |
| cinder-volume    | controller1@lvm  | nova | enabled |  down | 2016-05-27T03:55:06.000000 |        -        |
| cinder-volume    | controller2@ceph | nova | enabled |   up  | 2016-06-04T08:40:05.000000 |        -        |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+

[root@controller1 ~(keystone_admin)]#

11.11.2 Verify basic functionality

Network creation: OK

Router creation: OK

Image upload: OK

Volume creation: OK

VM creation: OK

11.11.3 Verify single-point-of-failure behavior

11.12 Back up the configuration and database

11.13 Notes

(1) Whether Nova's VNC still has problems: confirmed OK; opening the console displays normally

(2) After the Neutron part is configured, the Nova configuration must be changed again

12. Neutron high availability (neutron-server & neutron agents)

12.1 Install neutron-server and the neutron agents on controller2

neutron-server (control node):

# yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which

neutron agents (network node):

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

12.2 Configuration changes

Copy the configuration over from controller1.

Check what needs to change:

[root@controller2 neutron]# grep 192 ./ -r

The Keystone part is already highly available:

./api-paste.ini:#identity_uri=http://10.192.44.148:35357

./api-paste.ini:identity_uri=http://10.192.45.220:35362

./api-paste.ini:#auth_uri=http://10.192.44.148:5000/v2.0

./api-paste.ini:auth_uri=http://10.192.45.220:5005/v2.0

The Nova port has already been changed:

./neutron.conf:#nova_url = http://10.192.44.148:8774/v2

./neutron.conf:nova_url = http://10.192.45.220:8779/v2

Keystone has already been changed:

./neutron.conf:#nova_admin_auth_url=http://10.192.44.148:5000/v2.0

./neutron.conf:nova_admin_auth_url=http://10.192.45.220:5005/v2.0

./neutron.conf:#auth_uri = http://10.192.44.148:5000/v2.0

./neutron.conf:#identity_uri = http://10.192.44.148:35357

./neutron.conf:auth_uri = http://10.192.45.220:5005/v2.0

./neutron.conf:identity_uri = http://10.192.45.220:35362

The database has already been changed:

./neutron.conf:connection = mysql://neutron:1@10.192.45.220/neutron

./neutron.conf:#connection = mysql://neutron:1@10.192.44.148/neutron

RabbitMQ has already been changed:

./neutron.conf:rabbit_host = 10.192.44.148

./neutron.conf:rabbit_hosts = "10.192.44.148:5672,10.192.44.150:5672"

Keystone has already been changed:

./metadata_agent.ini:#auth_url = http://10.192.44.148:5000/v2.0

./metadata_agent.ini:auth_url = http://10.192.45.220:5005/v2.0

The Nova settings have already been changed:

./metadata_agent.ini:#nova_metadata_ip = 10.192.44.148

./metadata_agent.ini:nova_metadata_ip = 10.192.45.220

The tunnel IP needs to change to .150:

./plugins/ml2/openvswitch_agent.ini:local_ip = 192.168.0.148

[root@controller2 ml2]# cat openvswitch_agent.ini

[ovs]

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip = 192.168.0.150

enable_tunneling=True

[agent]

polling_interval = 2

tunnel_types = vxlan

vxlan_udp_port = 4789

l2_population = False

arp_responder = False

prevent_arp_spoofing = True

enable_distributed_routing = False

drop_flows_on_start=False

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

12.3 Configure br-ex

Add br-ex on eth1, using an external IP; for now use 10.192.45.212 (controller1 uses 10.192.45.211).

[root@controller2 network-scripts]# cat ifcfg-eth1

DEVICE=eth1

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=10.192.45.212

NETMASK=255.255.254.0

Never set routes here.

# ifdown eth1

# ifup eth1

ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex eth1

ethtool -K eth1 gro off

ifconfig br-ex up
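The same wiring as an idempotent sketch (`--may-exist` makes the ovs-vsctl calls safe to rerun; the `run` helper and DRY_RUN flag are additions for safety, with DRY_RUN=1 only printing the commands):

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run ovs-vsctl --may-exist add-br br-ex
run ovs-vsctl --may-exist add-port br-ex eth1
run ethtool -K eth1 gro off          # GRO off on the uplink, as above
run ifconfig br-ex up
```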

12.4 Start the services and verify

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig

sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service

systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service

systemctl enable neutron-server.service

systemctl restart neutron-server.service

systemctl enable openvswitch.service

systemctl restart openvswitch.service

systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Check that the services started:

[root@controller1 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
|                  id                  |     agent_type     |     host    | alive | admin_state_up |           binary          |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
| 06468c16-3cc9-40d5-bcce-d7b19aa52954 | Open vSwitch agent | controller2 |  :-)  |      True      | neutron-openvswitch-agent |
| 1746662a-081c-4800-b371-479e670fbb20 | Metadata agent     | controller1 |  :-)  |      True      | neutron-metadata-agent    |
| 2ead14e3-6d3d-4e1c-9e07-7665a2632565 | L3 agent           | controller1 |  :-)  |      True      | neutron-l3-agent          |
| 3c38aeea-6541-452a-b5cd-24ca99bb364c | DHCP agent         | controller2 |  :-)  |      True      | neutron-dhcp-agent        |
| 5749371b-df3e-4a51-a7f9-279ee2b8666a | Open vSwitch agent | compute2    |  :-)  |      True      | neutron-openvswitch-agent |
| 96820906-bc31-4fcf-a473-10a6d6865b2a | Open vSwitch agent | compute1    |  :-)  |      True      | neutron-openvswitch-agent |
| a9294c1a-3557-4c48-a6ef-b87a53ea01fe | Metadata agent     | controller2 |  :-)  |      True      | neutron-metadata-agent    |
| ad55ffa2-dd19-4cee-b5fc-db4bc60b796b | DHCP agent         | controller1 |  :-)  |      True      | neutron-dhcp-agent        |
| d264e9b0-c0c1-4e13-9502-43c248127dff | Open vSwitch agent | controller1 |  :-)  |      True      | neutron-openvswitch-agent |
| f119179d-f87e-4d7e-bd24-235087a4ea33 | L3 agent           | controller2 |  :-)  |      True      | neutron-l3-agent          |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

[root@controller2 neutron]# ps -A | grep neutron

13074 ? 00:00:01 neutron-server

13107 ? 00:00:00 neutron-server

13108 ? 00:00:00 neutron-server

13109 ? 00:00:00 neutron-server

13110 ? 00:00:00 neutron-server

13111 ? 00:00:00 neutron-server

13112 ? 00:00:00 neutron-server

13113 ? 00:00:00 neutron-server

13114 ? 00:00:00 neutron-server

20111 ? 00:00:01 neutron-openvsw

20123 ? 00:00:00 neutron-metadat

20129 ? 00:00:00 neutron-l3-agen

20131 ? 00:00:00 neutron-dhcp-ag

20160 ? 00:00:00 neutron-rootwra

20198 ? 00:00:00 neutron-metadat

20200 ? 00:00:00 neutron-metadat

20201 ? 00:00:00 neutron-metadat

20202 ? 00:00:00 neutron-metadat

20213 ? 00:00:00 neutron-rootwra

[root@controller2 neutron]#

OK, all Neutron services are running normally.

12.5 Configure haproxy to listen on the Neutron port, using vip:9701 (9696+5)

Modify haproxy.cfg (give this listener its own name; do not reuse nova_metadata_api):

listen neutron_api

bind 10.192.45.220:9701

balance source

option tcpka

option tcplog

server controller1 10.192.44.148:9696 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:9696 check inter 2000 rise 2 fall 5

Restart haproxy.

12.6 Update the endpoint in the database to vip:9701

Delete the old neutron endpoint:

[root@controller1 haproxy(keystone_admin)]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+
|                ID                |   Region  | Service Name | Service Type |
+----------------------------------+-----------+--------------+--------------+
| ae4b639f6ec448839ffca79fd95425fd | RegionOne | cinder       | volume       |
| 1f7c4d63eafa483c8c0942bf80302e98 | RegionOne | keystone     | identity     |
| ddd601fd0c40444fa282f84f4fb9ca0c | RegionOne | nova         | compute      |
| 63fa679e443a4249a96a86ff17387b9f | RegionOne | neutron      | network      |
| 83889920c32f476a98fef1594e4c47b9 | RegionOne | glance       | image        |
| 3637c1235b02437c9e47f96324702433 | RegionOne | cinderv2     | volumev2     |
+----------------------------------+-----------+--------------+--------------+

[root@controller1 haproxy(keystone_admin)]# ls

haproxy.cfg  haproxy.cfg.bak

[root@controller1 haproxy(keystone_admin)]# openstack endpoint delete 63fa679e443a4249a96a86ff17387b9f

[root@controller1 haproxy(keystone_admin)]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+
|                ID                |   Region  | Service Name | Service Type |
+----------------------------------+-----------+--------------+--------------+
| 83889920c32f476a98fef1594e4c47b9 | RegionOne | glance       | image        |
| ae4b639f6ec448839ffca79fd95425fd | RegionOne | cinder       | volume       |
| 1f7c4d63eafa483c8c0942bf80302e98 | RegionOne | keystone     | identity     |
| 3637c1235b02437c9e47f96324702433 | RegionOne | cinderv2     | volumev2     |
| ddd601fd0c40444fa282f84f4fb9ca0c | RegionOne | nova         | compute      |
+----------------------------------+-----------+--------------+--------------+

Create the new endpoint, using vip:9701:

openstack endpoint create \

--publicurl http://10.192.45.220:9701 \

--adminurl http://10.192.45.220:9701 \

--internalurl http://10.192.45.220:9701 \

--region RegionOne \

network

# openstack endpoint create --publicurl http://10.192.45.220:9701 --adminurl http://10.192.45.220:9701 --internalurl http://10.192.45.220:9701 --region RegionOne network

+--------------+----------------------------------+

| Field | Value |

+--------------+----------------------------------+

| adminurl | http://10.192.45.220:9701 |

| id | 92ef4881ec754eaca192a9308d36ea9b |

| internalurl | http://10.192.45.220:9701 |

| publicurl | http://10.192.45.220:9701 |

| region | RegionOne |

| service_id | a3f4980ffb63482b905282ca7d3a2b01 |

| service_name | neutron |

| service_type | network |

+--------------+----------------------------------+

12.7 Update the configuration of other related OpenStack components: nova

Keystone: no change

Cinder: no change

Glance: no change

Nova:

nova.conf needs to be modified:

[neutron]

service_metadata_proxy=True

metadata_proxy_shared_secret=1

url=http://10.192.44.148:9696

changed to:

url=http://10.192.45.220:9701
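That url change can be scripted with sed; a minimal sketch, shown on a scratch copy so nothing live is touched (the file path and sed pattern are my assumptions; in production edit /etc/nova/nova.conf directly):

```shell
# Build a scratch copy of the [neutron] section from nova.conf
cat > /tmp/nova.conf.demo <<'EOF'
[neutron]
service_metadata_proxy=True
metadata_proxy_shared_secret=1
url=http://10.192.44.148:9696
EOF
# Repoint the [neutron] url at the haproxy VIP
sed -i 's#^url=.*#url=http://10.192.45.220:9701#' /tmp/nova.conf.demo
grep '^url=' /tmp/nova.conf.demo   # -> url=http://10.192.45.220:9701
```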

12.8 Restart services and check their status

Restart services:

neutron:

Controller/network nodes:

systemctl restart openvswitch.service

systemctl restart neutron-server.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute nodes:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

Nova:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute nodes:

systemctl restart openstack-nova-compute.service

Check the services:

[root@controller1 nova(keystone_admin)]#neutron agent-list

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| id |agent_type | host | alive | admin_state_up | binary |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

| 06468c16-3cc9-40d5-bcce-d7b19aa52954 |Open vSwitch agent | controller2 | :-)| True |neutron-openvswitch-agent |

| 1746662a-081c-4800-b371-479e670fbb20 |Metadata agent | controller1 |:-) | True | neutron-metadata-agent |

| 2ead14e3-6d3d-4e1c-9e07-7665a2632565 | L3agent | controller1 | :-) | True | neutron-l3-agent |

| 3c38aeea-6541-452a-b5cd-24ca99bb364c |DHCP agent | controller2 |:-) | True | neutron-dhcp-agent |

| 5749371b-df3e-4a51-a7f9-279ee2b8666a |Open vSwitch agent | compute2 |:-) | True | neutron-openvswitch-agent |

| 96820906-bc31-4fcf-a473-10a6d6865b2a |Open vSwitch agent | compute1 |:-) | True | neutron-openvswitch-agent |

| a9294c1a-3557-4c48-a6ef-b87a53ea01fe |Metadata agent | controller2 |:-) | True | neutron-metadata-agent |

| ad55ffa2-dd19-4cee-b5fc-db4bc60b796b |DHCP agent | controller1 |:-) | True | neutron-dhcp-agent |

| d264e9b0-c0c1-4e13-9502-43c248127dff |Open vSwitch agent | controller1 | :-)| True | neutron-openvswitch-agent|

| f119179d-f87e-4d7e-bd24-235087a4ea33 | L3agent | controller2 | :-) | True | neutron-l3-agent |

+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

[root@controller1 nova(keystone_admin)]#nova service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| Id | Binary | Host | Zone | Status| State | Updated_at| Disabled Reason |

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

| 1| nova-cert | controller1 |internal | enabled | up |2016-06-05T03:31:40.000000 | -|

| 2| nova-consoleauth | controller1 | internal | enabled | up | 2016-06-05T03:31:40.000000 | - |

| 3| nova-conductor | controller1 |internal | enabled | up |2016-06-05T03:31:40.000000 | -|

| 4| nova-scheduler | controller1 |internal | enabled | up |2016-06-05T03:31:40.000000 | -|

| 5| nova-compute | compute1 | nova| enabled | up |2016-06-05T03:31:45.000000 | -|

| 6| nova-compute | controller1 | nova | enabled | up | 2016-06-05T03:31:46.000000 | - |

| 7| nova-cert | controller2 |internal | enabled | up |2016-06-05T03:31:46.000000 | -|

| 8| nova-consoleauth | controller2 | internal | enabled | up | 2016-06-05T03:31:46.000000 | - |

| 10 | nova-scheduler | controller2 | internal | enabled | up | 2016-06-05T03:31:46.000000 | - |

| 11 | nova-conductor | controller2 | internal | enabled | up | 2016-06-05T03:31:46.000000 | - |

| 13 | nova-compute | controller2 | nova | enabled | up | 2016-06-05T03:31:40.000000 | - |

| 15 | nova-compute | compute2 | nova| enabled | up |2016-06-05T03:31:38.000000 | -|

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
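In tables like the agent list above, neutron marks alive agents with ":-)" and dead ones with "xxx", so a quick grep flags failures. A sketch run against a saved sample (an illustration; on a live system pipe `neutron agent-list` in instead):

```shell
# Count alive vs dead agents in saved `neutron agent-list` output.
sample='| 06468c16 | Open vSwitch agent | controller2 | :-) | True | neutron-openvswitch-agent |
| f119179d | L3 agent           | controller2 | xxx | True | neutron-l3-agent          |'
alive=$(printf '%s\n' "$sample" | grep -c ':-)')
dead=$(printf '%s\n' "$sample" | grep -c ' xxx ')
echo "alive=$alive dead=$dead"   # -> alive=1 dead=1
```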

12.9 Functional verification: passed

Networks:

[root@controller1 ~(keystone_admin)]#neutron net-list

+--------------------------------------+------+-----------------------------------------------------+

| id | name |subnets |

+--------------------------------------+------+-----------------------------------------------------+

| 81612a20-d439-428a-9e6a-fc9b0b0650b9 |int |86865625-0ce1-4db0-a6f0-f239917ce280 192.168.0.0/24 |

| a9ae2030-c1b4-44f6-855d-28979e6a034f |ext |7c4a29cd-8098-4021-b0fc-eda379a3e1cc 10.192.44.0/23 |

+--------------------------------------+------+-----------------------------------------------------+

Routers:

[root@controller1 ~(keystone_admin)]#neutron router-list

+--------------------------------------+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

| id | name | external_gateway_info|distributed | ha |

+--------------------------------------+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

| 5a22044e-a7f6-4e42-bd49-4620f342c087 |router | {"network_id":"a9ae2030-c1b4-44f6-855d-28979e6a034f", "enable_snat":true, "external_fixed_ips": [{"subnet_id":"7c4a29cd-8098-4021-b0fc-eda379a3e1cc", "ip_address":"10.192.44.215"}]} | False| False |

+--------------------------------------+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

Images:

[root@controller1 ~(keystone_admin)]#glance image-list

+--------------------------------------+------+

| ID|Name |

+--------------------------------------+------+

| 003fa737-1839-4d31-8438-b696b80f40bd |cs |

+--------------------------------------+------+

Virtual machines:

[root@controller1 ~(keystone_admin)]# nova list

+--------------------------------------+------+--------+------------+-------------+-------------------+

| ID | Name |Status | Task State | Power State | Networks |

+--------------------------------------+------+--------+------------+-------------+-------------------+

| ce3359ed-4a46-458c-8f62-28d9e8e96fce |css | ACTIVE | - | Running | int=192.168.0.201 |

+--------------------------------------+------+--------+------------+-------------+-------------------+

Volumes:

[root@controller1 ~(keystone_admin)]#cinder list

+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+

| ID | Status| Migration Status | Name | Size | Volume Type | Bootable | Multiattach| Attached to |

+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+

| a55ff72b-36d1-4742-8e91-df4653d0d9ab |available | - | vvv| 1 |- | false| False | |

+--------------------------------------+-----------+------------------+------+------+-------------+----------+-------------+-------------+

[root@controller1 ~(keystone_admin)]#

12.10 Save all node configurations, documentation, and database data

https://git.hikvision.com.cn/projects/FSDMDEPTHCLOUD/repos/hcloud_install_centos/commits/324591451662fecf81b1e62687298caeab823cb9

13. Horizon high availability

13.1 Install horizon on controller2

Install the packages:

yum install openstack-dashboard httpd mod_wsgi memcached python-memcached

Configure:

Copy /etc/openstack-dashboard over from controller1

Leave the configuration unchanged for now

Restart httpd and verify by logging in

The restart failed

A few more steps were needed:

setsebool -P httpd_can_network_connect on

chown -R apache:apache /usr/share/openstack-dashboard/static

Still failing

The root cause, found after investigation:

Jun 05 16:27:00 controller2 httpd[20213]: AH00526: Syntax error on line 1 of /etc/httpd/conf.d/openstack-dashboard.conf:

Jun 05 16:27:00 controller2 httpd[20213]: Name duplicates previous WSGI daemon definition.

Make that content identical to controller1's

Verify by logging in:

http://10.192.44.150/dashboard/auth/login/

Failed:

Check the configuration in httpd

After re-copying the httpd directory from 148 and adjusting it, login works:

13.2 Verify horizon functionality on controller2

Login works; save the configuration

Next, set up high availability

13.3 Change the default horizon port on controller1/controller2 to 8080

Change the httpd default port to 8080

[root@controller2 httpd]# grep 80 ./ -r

./conf/ports.conf:Listen 80

./conf/httpd.conf:#Listen 80

./conf.d/15-horizon_vhost.conf:

./conf.d/15-default.conf:

After the change:

[root@controller2 httpd]# grep 80 ./ -r

./conf/ports.conf:#Listen 80

./conf/ports.conf:Listen 8080

./conf/httpd.conf:#Listen 80

./conf/httpd.conf:Listen 8080

./conf.d/15-horizon_vhost.conf:

./conf.d/15-default.conf:

Restart the service

Verify by logging in:

Make the same change on 10.192.44.148:
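The 80 to 8080 switch can also be scripted; a minimal sed sketch on a scratch copy of ports.conf (the file path is illustrative, adjust to your layout):

```shell
# Scratch copy standing in for /etc/httpd/conf/ports.conf
cat > /tmp/ports.conf.demo <<'EOF'
Listen 80
EOF
# Comment out port 80 and add 8080, matching the change shown above
sed -i 's/^Listen 80$/#Listen 80\nListen 8080/' /tmp/ports.conf.demo
cat /tmp/ports.conf.demo
# -> #Listen 80
# -> Listen 8080
```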

13.4 Change the keystone address in local_settings to vip:5005

OPENSTACK_KEYSTONE_URL = http://10.192.44.148:5000/v2.0

changed to:

OPENSTACK_KEYSTONE_URL = http://10.192.45.220:5005/v2.0

Log in:

OK, works

13.5 Listen on vip:80 for httpd's port 8080

Edit haproxy:

listen openstack_dashboard

bind 10.192.45.220:80

balance source

option tcpka

option tcplog

server controller1 10.192.44.148:8080 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:8080 check inter 2000 rise 2 fall 5

Restart haproxy and verify by logging in

Server aliases need to be added in the following file:

15-horizon_vhost.conf

Add:

##Server aliases

ServerAlias 10.192.44.150

ServerAlias 10.192.45.220

ServerAlias controller2

ServerAlias localhost

All three addresses can now be used to log in:

Controller2, port 8080:

http://10.192.44.150:8080/dashboard/

controller1, port 8080:

http://10.192.44.148:8080/dashboard

vip, port 80:

http://10.192.45.220/dashboard

13.6 Save all current configuration

13.7 VNC high availability configuration

Currently VNC is configured on every node as:

[vnc]

enabled=true

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=10.192.44.148

novncproxy_base_url=http://controller1:6080/vnc_auto.html

With this, if the primary controller goes down, VNC access fails from all nodes

From the official documentation:

listen spice_cluster

bind :6080

balance source

option tcpka

option tcplog

server controller1 10.0.0.1:6080 check inter 2000 rise 2 fall 5

server controller2 10.0.0.2:6080 check inter 2000 rise 2 fall 5

server controller3 10.0.0.3:6080 check inter 2000 rise 2 fall 5

Here haproxy is modified to:

listen novnc_proxy

bind 10.192.45.220:6085

balance source

option tcpka

option tcplog

server controller1 10.192.44.148:6080 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:6080 check inter 2000 rise 2 fall 5

The nova configuration becomes:

[vnc]

enabled=true

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=10.192.44.148

#novncproxy_base_url=http://controller1:6080/vnc_auto.html

novncproxy_base_url=http://10.192.45.220:6085/vnc_auto.html

14. High availability verification

14.0 Basic functional verification

Login:

http://10.192.44.148:8080/dashboard (controller1)

http://10.192.44.150:8080/dashboard (controller2)

http://10.192.45.220/dashboard (vip)

Ceph deployment:

osd.0~3: each node's /dev/sdb1 + SSD journal (/dev/hda4)

osd.4~15: /dev/sdc1 through /dev/sde1 respectively

Functional verification results:

Create network: OK

Create router: OK

Upload image: OK

Create volume: OK

Boot VM: OK

VM live migration: OK

Services with HA in place:

Rabbitmq

Mysql

Httpd

Keystone

Cinder

Nova

Neutron

Glance

Horizon

vnc

Issues:

(1) After system boot the IPs get shuffled: configuration that belongs on eth0 jumps to eth4, suspected to be a StorOS initialization problem. After a reboot, manual steps such as ifdown eth4 are needed, and sometimes the machine cannot be reached remotely.

(2) Ceph OSDs are sometimes not mounted after a reboot and must be mounted manually; a script for this exists at /root/ceph_osd_mount.sh and can be set to run at boot. Not a real problem.

14.1 Per-service single-point-of-failure verification

14.2 Single-server outage verification

15. Self-test against the acceptance criteria

16 Issue summary and handling

16.1 VM state normal after live migration, but the VNC console cannot be viewed [resolved]

One node's VNC was misconfigured; fix nova.conf:

vncserver_proxyclient_address=controller1

vnc_keymap=en-us

vnc_enabled=True

vncserver_listen=0.0.0.0

novncproxy_base_url=http://10.192.44.148:6080/vnc_auto.html

16.2 Switch all IPs over to the ones Mr. Zhang requested [done]

The approved IPs Mr. Zhang obtained:

IPs in the physical environment's 10.192.44.0/23 segment are managed and cannot be set arbitrarily

The currently assigned IPs are 10.192.44.148-10.192.44.151, used for access to the four nodes

The newly approved IPs are 10.192.45.211-10.192.45.220

Mr. Li, please update the physical environment configuration to use the approved IPs

To be changed shortly!

16.3 Re-plan node names [keeping controller1/2 and compute1/2 for now; renaming is costly]

zhangxinji(张新纪) 06-01 18:17:41

Node names would better be node1, node2, and so on

No node is single-purpose, and the components running on each will differ over time

lixiangping(李祥平) 06-01 18:19:00

OK, I'll note that down for now

(1) Not renaming for now: the cost is high; ceph might need reinstalling, and the httpd and nova configurations would also have to change

(2) The controller nodes do combine control, compute, and network roles, but control services are their defining feature, while the compute nodes run only compute services; keeping the distinction is cleaner

16.4 All four nodes lost network, causing ceph failures; since the OpenStack backends all use ceph, many operations broke, and ceph was reinstalled

Fix: the OSDs must be mounted; add the mount operations to /etc/fstab

Caveat: adding the mounts to /etc/fstab may make a physical machine fail to boot (it works in a VMware VM)

So write a script that checks the ceph OSDs at boot and mounts them automatically

16.4.1 Verify in a VM where the boot script should go

(1) Method 1:

/etc/profile.d/

chown & chmod

(2) Method 2:

Edit the /etc/rc.d/rc.local file

StorOS presumably adds its own scripts here as well

So my script goes here too

16.4.2 Writing the script

There are 16 OSDs in total

How do we determine which directory belongs to which OSD?

The mapping:

OSD           Node
0, 4, 8, 12   0
1, 5, 9, 13   1
2, 6, 10, 14  2
3, 7, 11, 15  3

Let the OSD number be x

The remainder of x/4 is the server node number; nodes 148, 149, 150, 151 are 0, 1, 2, 3 respectively

For example osd.13: 13/4 = 3 remainder 1, so it is an OSD on node 1 (the second server, 10.192.44.149)

Script:
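A minimal sketch of what such a mount-check script could look like, using the osd % 4 mapping above. The mount-point layout /var/lib/ceph/osd/ceph-N is an assumption, and the real /root/ceph_osd_mount.sh may differ:

```shell
#!/bin/sh
# Boot-time check: report any of this node's ceph OSDs that are not mounted.
NODE_ID=0   # this host's index: 148 -> 0, 149 -> 1, 150 -> 2, 151 -> 3

# Print the OSD numbers that live on node $1 (osd % 4 == node index)
osds_for_node() {
  for osd in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15; do
    if [ $((osd % 4)) -eq "$1" ]; then echo "$osd"; fi
  done
}

for osd in $(osds_for_node "$NODE_ID"); do
  dir=/var/lib/ceph/osd/ceph-$osd
  grep -qs " $dir " /proc/mounts && continue   # already mounted, skip
  echo "OSD $osd not mounted (expected at $dir)"
done
```

A real version would mount the matching device here instead of just reporting.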

16.4.3 Script verification

16.5 Database connections through the VIP get dropped [after boot the IP migrates to another NIC; same as 16.6]

If the following problem appears:

[root@controller1 my.cnf.d(keystone_admin)]# mysql --host=10.192.45.213 --port=3306 --user=glance --password='1'

ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0

Workaround:

First confirm keepalived is set to MASTER/BACKUP mode

Then confirm the VIP is present only on the master

Fix: modify the haproxy.cfg configuration:

This works:

listen mariadb

bind 10.192.45.220:3311

mode tcp

option httpchk

balance leastconn

server controller1 10.192.44.148:3306 weight 1

server controller2 10.192.44.150:3306 weight 1

This fails:

listen mariadb

bind 10.192.45.220:3311

mode tcp

balance leastconn

option mysql-check user haproxy

server controller1 10.192.44.148:3306 weight 1 check inter 2000 rise 2 fall 5

server controller2 10.192.44.150:3306 weight 1 check inter 2000 rise 2 fall 5

The first configuration works

The second fails, most likely because option mysql-check actually logs in to MariaDB as the named user, and no haproxy user was ever created there

Open issues:

(1) At boot, eth4 ends up with the VIP's address, which is odd

(2) MySQL on controller2 often fails to start; after rebooting the server it usually starts fine

16.6 After boot the VIP gets grabbed by eth4 [some StorOS initialization script is suspected]

This is odd: after boot, eth4 on controller1/2 automatically takes the VIP. A script is needed to bring eth4 down and then restart keepalived and haproxy.

16.7 Whether to configure memcached [handled]

16.8 Latest ceph adjustments [handled]:

sdb1 + hda4 serve as osd.0~3

The remaining OSDs do not use the SSD as journal: the journal is too small and sits on logical partitions. Giving every SATA disk an SSD journal, when the SSD is already small and carved into logical partitions, is not very stable.

Full-SSD journals can wait until a larger SSD is installed (hda has less than about 90 GB left; splitting that into 4 more logical partitions, going up to hda8, is unreasonable)

ceph-deploy osd prepare controller1:/dev/sdb1:/dev/hda4 compute1:/dev/sdb1:/dev/hda4 controller2:/dev/sdb1:/dev/hda4 compute2:/dev/sdb1:/dev/hda4

ceph-deploy osd prepare controller1:/dev/sdc1 compute1:/dev/sdc1 controller2:/dev/sdc1 compute2:/dev/sdc1

ceph-deploy osd prepare controller1:/dev/sdd1 compute1:/dev/sdd1 controller2:/dev/sdd1 compute2:/dev/sdd1

ceph-deploy osd prepare controller1:/dev/sde1 compute1:/dev/sde1 controller2:/dev/sde1 compute2:/dev/sde1

Current ceph OSDs:

[root@controller1 ceph]# ceph osd tree

ID WEIGHT   TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 29.11987 root default
-2  7.27997     host controller1
 0  1.81999         osd.0        up      1.00000  1.00000
 4  1.81999         osd.4        up      1.00000  1.00000
 8  1.81999         osd.8        up      1.00000  1.00000
12  1.81999         osd.12       up      1.00000  1.00000
-3  7.27997     host compute1
 1  1.81999         osd.1        up      1.00000  1.00000
 5  1.81999         osd.5        up      1.00000  1.00000
 9  1.81999         osd.9        up      1.00000  1.00000
13  1.81999         osd.13       up      1.00000  1.00000
-4  7.27997     host controller2
 2  1.81999         osd.2        up      1.00000  1.00000
 6  1.81999         osd.6        up      1.00000  1.00000
10  1.81999         osd.10       up      1.00000  1.00000
14  1.81999         osd.14       up      1.00000  1.00000
-5  7.27997     host compute2
 3  1.81999         osd.3        up      1.00000  1.00000
 7  1.81999         osd.7        up      1.00000  1.00000
11  1.81999         osd.11       up      1.00000  1.00000
15  1.81999         osd.15       up      1.00000  1.00000

With this, after a reboot the init script can be used to bring things up

The script lives under /root/ for now:

Auto-start at boot is not enabled yet

16.9 Live migration failed with errors [resolved]

(1) The host_uuid setting in libvirtd.conf

(2)

/usr/libexec/qemu-kvm: error while loading shared libraries: libiscsi.so.2: cannot open shared object file: No such file or directory

The library is actually installed, just not under /usr/lib64 but under /usr/lib64/iscsi; copy it out

16.10 Horizon's VNC shows "Failed to connect to server", but "Click here to show only console" still reaches the terminal [resolved]

Resolution:

Check the VNC settings in nova.conf, and the VNC settings in libvirt and qemu

Compare against a known-good environment

16.10.1 Nova configuration

Standard configuration from a Packstack install:

Controller1:

[root@node1 nova]# grep vnc ./ -r |grep -v'#' |grep vnc

./nova.conf:novncproxy_host=0.0.0.0

./nova.conf:novncproxy_port=6080

./nova.conf:vncserver_proxyclient_address=node1

./nova.conf:vnc_keymap=en-us

./nova.conf:vnc_enabled=True

./nova.conf:vncserver_listen=0.0.0.0

./nova.conf:novncproxy_base_url=http://192.168.129.130:6080/vnc_auto.html

./nova.conf:[vnc]

./policy.json: "compute:get_vnc_console":"",

Controller2:

[root@node2 nova]# grep vnc ./ -r |grep -v'#' |grep vnc

./nova.conf:vncserver_proxyclient_address=node2

./nova.conf:vnc_keymap=en-us

./nova.conf:vnc_enabled=True

./nova.conf:vncserver_listen=0.0.0.0

./nova.conf:novncproxy_base_url=http://192.168.129.130:6080/vnc_auto.html

./nova.conf:[vnc]

./policy.json: "compute:get_vnc_console":"",

[root@node2 nova]#

StorOS configuration:

[root@controller1 nova(keystone_admin)]#grep vnc ./ -r

./nova.conf:novncproxy_host=0.0.0.0

./nova.conf:novncproxy_port=6080

./nova.conf:vncserver_proxyclient_address=10.192.44.148

./nova.conf:vnc_keymap=en-us

./nova.conf:vnc_enabled=True

./nova.conf:vncserver_listen=0.0.0.0

./nova.conf:novncproxy_base_url=http://10.192.44.148:6080/vnc_auto.html

./nova.conf:[vnc]

./policy.json: "compute:get_vnc_console":"",

[root@compute1 lib]# cd /etc/nova/

[root@compute1 nova]# grep vnc ./ -r

./nova.conf:vncserver_proxyclient_address=10.192.44.149

./nova.conf:vnc_keymap=en-us

./nova.conf:vnc_enabled=True

./nova.conf:vncserver_listen=0.0.0.0

./nova.conf:novncproxy_base_url=http://10.192.44.148:6080/vnc_auto.html

./nova.conf:[vnc]

./policy.json: "compute:get_vnc_console":"",

[root@controller2 nova]# grep vnc ./ -r

./nova.conf:novncproxy_host=0.0.0.0

./nova.conf:novncproxy_port=6080

./nova.conf:vncserver_proxyclient_address=10.192.44.150

./nova.conf:vnc_keymap=en-us

./nova.conf:vnc_enabled=True

./nova.conf:vncserver_listen=0.0.0.0

./nova.conf:novncproxy_base_url=http://10.192.44.148:6080/vnc_auto.html

./nova.conf:[vnc]

./policy.json: "compute:get_vnc_console":"",

[root@compute2 lib]# cd /etc/nova/

[root@compute2 nova]# grep vnc ./ -r

./nova.conf:vncserver_proxyclient_address=10.192.44.151

./nova.conf:vnc_keymap=en-us

./nova.conf:vnc_enabled=True

./nova.conf:vncserver_listen=0.0.0.0

./nova.conf:novncproxy_base_url=http://10.192.44.148:6080/vnc_auto.html

./nova.conf:[vnc]

./policy.json: "compute:get_vnc_console":"",

Check the logs:

./nova-compute.log:2016-06-07 11:05:53.628 5489 WARNING oslo_config.cfg [req-c662ff0e-89fd-44a6-b121-1aa95f85e47a - - - - -] Option "vncserver_proxyclient_address" from group "DEFAULT" is deprecated. Use option "vncserver_proxyclient_address" from group "vnc".

Cause:

If nova.conf has a [vnc] section, the vnc settings under [DEFAULT] are ignored

So remove the [vnc] section from nova.conf; it was empty anyway

Still the same problem

Try adding [vnc] back and copying the parameters above into it

"vnc_enabled" from group "DEFAULT" is deprecated. Use option "enabled" from group "vnc".

"vnc_keymap" from group "DEFAULT" is deprecated. Use option "keymap" from group "vnc".

"novncproxy_base_url" from group "DEFAULT" is deprecated. Use option "novncproxy_base_url" from group "vnc".

"vncserver_listen" from group "DEFAULT" is deprecated. Use option "vncserver_listen" from group "vnc".

"vncserver_proxyclient_address" from group "DEFAULT" is deprecated. Use option "vncserver_proxyclient_address" from group "vnc".

Changed to:

[vnc]

enabled=True

keymap=en-us

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=10.192.44.150

novncproxy_base_url=http://10.192.44.148:6080/vnc_auto.html

Still a problem

nova-novncproxy.log

nova.console.websocketproxy [req-eeb86902-070e-4d11-9f14-ca50ece763e9 - - - - -] handler exception: The token 'f371a5f3-a94e-43db-8df9-2fb9d7add579' is invalid or has expired

Solution (found in a troubleshooting writeup):

Port 5900 on the compute node is open and can be connected to normally.

So the nova configuration files on the controller and compute nodes were rechecked; nothing much wrong there either.

Running "nova get-vnc-console test2 novnc" on the controller node gives:

+-------+-----------------------------------------------------------------------------------+
| Type  | Url                                                                               |
+-------+-----------------------------------------------------------------------------------+
| novnc | http://10.186.182.2:6080/vnc_auto.html?token=1a724d49-dd86-4081-aa05-64cadeac52ef |
+-------+-----------------------------------------------------------------------------------+

Accessing the VM through that URL still gives the same "connection timeout" error.

The solution was eventually found on a forum: the problem was caused by DNS.

https://bugs.launchpad.net/mos/+bug/1409661

The cause: the Mirantis Master node has no route to the public Internet, yet by default it uses 8.8.8.8 as the DNS server for name resolution. If the node cannot reach that address (even with Internet access it may be unreachable, since 8.8.8.8 is a Google server and may be blocked), DNS resolution times out, producing the error above. There are two fixes: change 8.8.8.8 to a DNS server reachable from the intranet or Internet, or change it to 127.0.0.1 so the server skips DNS resolution. The steps:

dockerctl shell cobbler

vi /etc/dnsmasq.upstream

Change 8.8.8.8 to an intranet or public DNS server, or to 127.0.0.1

/etc/init.d/dnsmasq reload

Then try the VM's VNC console again; it works normally now.

Also, per the official documentation:

# pkill dnsmasq (the official docs say to kill dnsmasq)

Uninstalled dnsmasq; still failing

[root@controller1 system(keystone_admin)]# rpm -e --nodeps dnsmasq

[root@controller1 system(keystone_admin)]# ps -A | grep dns

2174 ? 00:00:00 dnsmasq

[root@controller1 system(keystone_admin)]# kill -9 2174

16.10.2 Further investigation

tail -f nova-consoleauth.log

2016-06-07 14:11:37.216 13067 INFO nova.consoleauth.manager [req-e4fa8511-420b-4baf-bae3-97390a0b17b5 - - - - -] Checking Token: 72f5cfcb-5d8d-4713-ae6a-aa492cbd298a, False

tail -f nova-novncproxy.log

2016-06-07 14:11:37.223 22225 INFO nova.console.websocketproxy [req-e4fa8511-420b-4baf-bae3-97390a0b17b5 - - - - -] handler exception: 'NoneType' object has no attribute 'get'

https://ask.openstack.org/en/question/88773/novnc-connect-error-1006/

Hi,

When running a multi node environment with HA between two or more controller nodes (or control plane service nodes), the nova consoleauth service must be configured with memcached.

If not, no more than one consoleauth service can be running in active state, since it needs to save the state of the sessions. When memcached is not used, you can check that you can connect to the vnc console only a few times when you refresh the page. If that occurs it means that the connection is handled by the consoleauth service that currently is issuing sessions.

To solve your issue, configure memcached as backend to the nova-consoleauth service.

To solve your issue add this line to nova.conf:

memcached_servers = 192.168.100.2:11211,192.168.100.3:11211

This should work to solve your issue.

Regards

I had indeed not configured memcached_servers

Configure it and try

That does fix it!
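Applied to this cluster, the fix amounts to one line in nova.conf's [DEFAULT] section on both controllers. The addresses below are this deployment's management IPs (an assumption based on the node table; memcached itself must also be running on both nodes):

```ini
[DEFAULT]
memcached_servers = 10.192.44.148:11211,10.192.44.150:11211
```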

16.11 The overview page shows one VM too many: caused by incomplete cleanup after a database rollback [not handled for now]
