
Building an OpenStack High-Availability Environment (Part 1): Setting Up the Non-HA Environment

2016-06-08 09:01:21    Source: 行者无疆

Check the libvirtd process arguments:

192.168.129.130:

[root@localhost libvirt]# ps -auxf | grep libvirt

root     57595  0.0  0.0 112640   960 pts/0  S+  17:50  0:00  \_ grep --color=auto libvirt

root      1267  0.0  0.4 1124500 17436 ?     Ssl May26  0:36 /usr/sbin/libvirtd --listen

10.192.44.149:

[root@compute1 libvirt]# ps -auxf | grep libvirt

root      5656  0.0  0.0 112640   980 pts/0  S+  08:50  0:00  \_ grep --color=auto libvirt

nobody    3121  0.0  0.0  15524   868 ?      S   May27  0:00 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf

root      7032  0.0  0.0 1050596 12496 ?     Ssl May27  0:32 /usr/sbin/libvirtd

Check the libvirt version:

[root@localhost libvirt]# libvirtd --version

libvirtd (libvirt) 1.2.17

[root@compute1 libvirt]# libvirtd --version

libvirtd (libvirt) 1.1.1

The two versions differ substantially.

6.2.7 Install and configure libvirt 1.2.17 on .149 to fix port 16509 (libvirt tcp_port) not being listened on

Remove the old packages:

# rpm -e --nodeps libvirt-client libvirt-daemon-driver-nodedev libvirt-glib libvirt-daemon-config-network libvirt-daemon-driver-nwfilter libvirt-devel libvirt-daemon-driver-qemu libvirt-daemon-driver-interface libvirt-gobject libvirt-daemon-driver-storage libvirt-daemon-driver-network libvirt-daemon-config-nwfilter libvirt libvirt-daemon-driver-secret libvirt-gconfig libvirt-java-devel libvirt-daemon-kvm libvirt-docs libvirt-daemon-driver-lxc libvirt-python libvirt-daemon libvirt-java

[root@compute1 libvirt]# rpm -aq | grep libvirt

[root@compute1 libvirt]#

Clear the dependency hurdles first:

# yum install systemd ceph glusterfs glusterfs-api

Install the new packages:

warning: libvirt-1.2.17-13.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY

error: Failed dependencies:

libvirt-daemon = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-config-network = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-config-nwfilter = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-lxc = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-qemu = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-nwfilter = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-interface = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-secret = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-storage = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-network = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-daemon-driver-nodedev = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

libvirt-client = 1.2.17-13.el7 is needed by libvirt-1.2.17-13.el7.x86_64

# rpm -ivh libvirt-client-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-network-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-nodedev-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-secret-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-nwfilter-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-storage-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-lxc-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-interface-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-driver-qemu-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-config-nwfilter-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-daemon-config-network-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-1.2.17-13.el7.x86_64.rpm

Configure libvirt:

[root@compute1 libvirt]# vi libvirtd.conf

listen_tls = 0

listen_tcp = 1

auth_tcp = "none"

Restart libvirtd:

[root@compute1 libvirt]# service libvirtd restart

Redirecting to /bin/systemctl restart libvirtd.service

Verify the connection:

[root@compute1 libvirt]# virsh -c qemu+tcp://10.192.44.149/system

error: failed to connect to the hypervisor

error: unable to connect to server at '10.192.44.149:16509': Connection refused

Still failing.

systemctl stop firewalld.service

systemctl disable firewalld.service
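If the firewall has to stay on, a narrower alternative (a sketch, not from the original log) is to open only the libvirt TCP port instead of disabling firewalld entirely:

# firewall-cmd --permanent --add-port=16509/tcp
# firewall-cmd --reload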

Copy the packstack configuration over unchanged:

# systemctl enable libvirtd.service

# systemctl start libvirtd.service

Port 16509 is still not being listened on.

On the working host:

[root@localhost etc]# ss -nalp | grep 16509

tcp  LISTEN  0  30  *:16509  *:*  users:(("libvirtd",1267,13))

tcp  LISTEN  0  30  :::16509  :::*  users:(("libvirtd",1267,14))

On this host:

[root@compute1 libvirt]# ss -nalp | grep 16509

[root@compute1 libvirt]#

Maybe the startup script is at fault; compared it against the packstack one: no differences.

So why isn't 16509 being listened on?

For TCP, TLS and similar connections to take effect, libvirtd must be started with the --listen option (short form -l). The default `service libvirtd start` does not pass --listen, so to use TCP connections libvirtd can be started with `libvirtd --listen -d`.
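On CentOS 7 the shipped unit file already expands $LIBVIRTD_ARGS, so a less invasive option (a sketch, assuming the stock packaging) is to set the flag in /etc/sysconfig/libvirtd rather than editing the unit file:

# vi /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"

# systemctl restart libvirtd.service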

Modify the startup script. The original line:

ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS

is changed to:

ExecStart=/usr/sbin/libvirtd --listen $LIBVIRTD_ARGS

[root@compute1 system]# systemctl reload libvirtd.service

Warning: libvirtd.service changed on disk. Run 'systemctl daemon-reload' to reload units.

[root@compute1 system]# systemctl daemon-reload

systemctl enable libvirtd.service

systemctl start libvirtd.service

[root@compute1 system]# ss -nalp | grep 16509

tcp  LISTEN  0  30  *:16509  *:*  users:(("libvirtd",29569,13))

tcp  LISTEN  0  30  :::16509  :::*  users:(("libvirtd",29569,14))

But there is still a problem:

[root@compute1 libvirt]# virsh -c qemu+tcp://localhost/system

error: failed to connect to the hypervisor

error: no connection driver available for qemu:///system

Referring to the 萤石云 setup:

# vi /etc/libvirt/qemu.conf

vnc_listen = "0.0.0.0"

user = "root"

group = "root"

[root@compute1 libvirt]# virsh -c qemu+tcp://localhost/system

error: failed to connect to the hypervisor

error: no connection driver available for qemu:///system

Still not working!

Why? The port is listening now.

Adding -d doesn't help either:

ExecStart=/usr/sbin/libvirtd -d --listen $LIBVIRTD_ARGS

[root@compute1 system]# virsh -c qemu+tcp://localhost:16509/system

error: failed to connect to the hypervisor

error: no connection driver available for qemu:///system

[root@compute1 system]# virsh -c qemu+tcp://localhost/system

error: failed to connect to the hypervisor

error: no connection driver available for qemu:///system

[root@compute1 system]# virsh -c qemu+tcp://10.192.44.149/system

error: failed to connect to the hypervisor

error: no connection driver available for qemu:///system

[root@compute1 system]# virsh -c qemu+tcp://10.192.44.149:16509/system

error: failed to connect to the hypervisor

error: no connection driver available for qemu:///system

Start it manually:

[root@compute1 system]# /usr/sbin/libvirtd -l -d

[root@compute1 system]# ps -A | grep libvirt

26161 ? 00:00:00 libvirtd

[root@compute1 system]#

[root@compute1 system]# ss -nalp | grep 16509

tcp  LISTEN  0  30  *:16509  *:*  users:(("libvirtd",26161,14))

tcp  LISTEN  0  30  :::16509  :::*  users:(("libvirtd",26161,15))

Still failing:

[root@compute1 system]# virsh -c qemu+tcp://10.192.44.149:16509/system

error: failed to connect to the hypervisor

error: no connection driver available for qemu:///system

Start it in the foreground instead of as a daemon:

[root@compute1 system]# /usr/sbin/libvirtd -l

2016-05-28 02:41:07.632+0000: 31004: info : libvirt version: 1.2.17, package: 13.el7 (CentOS BuildSystem, 2015-11-20-16:24:10, worker1.bsys.centos.org)

2016-05-28 02:41:07.632+0000: 31004: error : virDriverLoadModule:73 : failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference

2016-05-28 02:41:07.633+0000: 31004: error : virDriverLoadModule:73 : failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol: virStorageFileCreate

OK, so this is the problem.
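To confirm which side is missing the symbol, the exported symbols of the installed libdevmapper can be inspected directly (a quick check, not part of the original log; the path assumes the standard x86_64 location):

# nm -D /usr/lib64/libdevmapper.so.1.02 | grep dm_task_get_info_with_deferred_remove
# rpm -qf /usr/lib64/libdevmapper.so.1.02

If the grep prints nothing, the installed device-mapper-libs simply does not export the symbol that this libvirt 1.2.17 build was linked against.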

On the working system:

[root@localhost nwfilter]# /usr/sbin/libvirtd -l

2016-05-28 02:44:06.983+0000: 95407: info : libvirt version: 1.2.17, package: 13.el7 (CentOS BuildSystem, 2015-11-20-16:24:10, worker1.bsys.centos.org)

2016-05-28 02:44:06.983+0000: 95407: warning : virDriverLoadModule:65 : Module /usr/lib64/libvirt/connection-driver/libvirt_driver_lxc.so not accessible

Although no errors are reported here, the libraries still have dependency issues.

Verify again with the StorOS stock libvirt (on .148 and .151):

The stock one has problems too:

[root@controller1 ~]# /usr/sbin/libvirtd -l

2016-05-28 02:46:43.181+0000: 12424: info : libvirt version: 1.1.1, package: 29.el7 (CentOS BuildSystem, 2014-06-17-17:13:31, worker1.bsys.centos.org)

2016-05-28 02:46:43.181+0000: 12424: error : virDriverLoadModule:79 : failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so libgfapi.so.0: cannot open shared object file: No such file or directory

2016-05-28 02:46:43.183+0000: 12424: error : virDriverLoadModule:79 : failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol: virStorageFileStat

2016-05-28 02:46:43.184+0000: 12424: error : virNetTLSContextCheckCertFile:117 : Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory

Both the stock StorOS libvirtd and the upgraded libvirtd have this issue (see 5.13: troubleshooting the libvirtd startup failure); even when startup appears to succeed, the daemon is actually broken.

Fix:

Install some packages manually, and check on the packstack host which library provides this symbol.

Also, first install all the dependencies left over from the earlier libvirt RPM installation:

[root@compute1 libvirt]# rpm -aq | grep libvirt

libvirt-daemon-driver-lxc-1.2.17-13.el7.x86_64

libvirt-daemon-config-nwfilter-1.2.17-13.el7.x86_64

libvirt-daemon-1.2.17-13.el7.x86_64

libvirt-daemon-driver-secret-1.2.17-13.el7.x86_64

libvirt-daemon-driver-nodedev-1.2.17-13.el7.x86_64

libvirt-daemon-config-network-1.2.17-13.el7.x86_64

libvirt-daemon-driver-storage-1.2.17-13.el7.x86_64

libvirt-daemon-driver-nwfilter-1.2.17-13.el7.x86_64

libvirt-client-1.2.17-13.el7.x86_64

libvirt-daemon-driver-interface-1.2.17-13.el7.x86_64

libvirt-daemon-driver-network-1.2.17-13.el7.x86_64

libvirt-daemon-driver-qemu-1.2.17-13.el7.x86_64

libvirt-1.2.17-13.el7.x86_64

Continue installing:

# rpm -ivh libvirt-daemon-kvm-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-docs-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-devel-1.2.17-13.el7.x86_64.rpm

# rpm -ivh libvirt-python-1.2.17-2.el7.x86_64.rpm

# rpm -ivh dracut-033-359.el7.x86_64.rpm

# rpm -ivh dracut-config-rescue-033-359.el7.x86_64.rpm

# rpm -ivh dracut-network-033-359.el7.x86_64.rpm

# rpm -ivh initscripts-9.49.30-1.el7.x86_64.rpm

# rpm -ivh kmod-20-5.el7.x86_64.rpm

# rpm -ivh libgudev1-219-19.el7.x86_64.rpm

# rpm -ivh libgudev1-devel-219-19.el7.x86_64.rpm

Still not working.

Searching on the packstack host doesn't show where the symbol is defined either:

[root@localhost usr]# grep 'virStorageFileStat' ./ -r

Binary file ./lib64/libvirt/connection-driver/libvirt_driver_qemu.so matches

Binary file ./lib64/libvirt/connection-driver/libvirt_driver_storage.so matches
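As a side check (not in the original log), `ldd -r` lists every unresolved symbol of a module in one pass, which makes it easier to see whether the gap is in libdevmapper, in the storage driver, or in both:

# ldd -r /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so 2>&1 | grep -i undefined
# ldd -r /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so 2>&1 | grep -i undefined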

Digging further:

device-mapper-libs

For this package, download it first without installing:

yum install --downloadonly --downloaddir=/root/device-mapper-libs device-mapper-libs

[root@localhost device-mapper-libs]# ls -l

total 916

-rw-r--r-- 1 root root 257444 Nov 25 2015 device-mapper-1.02.107-5.el7.x86_64.rpm

-rw-r--r-- 1 root root 170732 Nov 25 2015 device-mapper-event-1.02.107-5.el7.x86_64.rpm

-rw-r--r-- 1 root root 172676 Nov 25 2015 device-mapper-event-libs-1.02.107-5.el7.x86_64.rpm

-rw-r--r-- 1 root root 311392 Nov 25 2015 device-mapper-libs-1.02.107-5.el7.x86_64.rpm

Extract these packages and check whether the symbol is in there.

See this page:

https://osdir.com/ml/fedora-virt-maint/2014-11/msg00310.html

--- Comment #12 from Michal Privoznik ---

(In reply to Kashyap Chamarthy from comment #11)

> Okay, we found (thanks to DanPB for the hint to take a look at `journalctl`
> libvirt logs) the root cause: device-mapper RPM version should be this:
> device-mapper-1.02.90-1.fc21.x86_64 (instead of:
> device-mapper-1.02.88-2.fc21.x86_64)
>
> From `journalctl`:
>
> $ journalctl -u libvirtd --since=yesterday -p err
> [. . .] failed to load module
> /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so
> /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symbol
> dm_task_get_info_with_deferred_remove, version Base not defined in file
> libdevmapper.so.1.02 with link time reference

So libdevmapper.so (from device-mapper-libs) hasn't provided the symbol. Hence,
storage driver has failed to load.

Try the update on .150:

[root@localhost device-mapper-libs]# /usr/sbin/libvirtd -l

2016-05-28 04:51:03.750+0000: 488: info : libvirt version: 1.2.17, package: 13.el7 (CentOS BuildSystem, 2015-11-20-16:24:10, worker1.bsys.centos.org)

2016-05-28 04:51:03.750+0000: 488: error : virDriverLoadModule:73 : failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference

2016-05-28 04:51:03.751+0000: 488: error : virDriverLoadModule:73 : failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol: virStorageFileCreate

2016-05-28 04:51:03.752+0000: 488: error : virNetTLSContextCheckCertFile:120 : Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory

The problem still appears.

So it may not actually be a device-mapper issue.

# /usr/sbin/libvirtd -l

2016-05-28 03:27:07.435+0000: 6930: info : libvirt version: 1.2.17, package: 13.el7 (CentOS BuildSystem, 2015-11-20-16:24:10, worker1.bsys.centos.org)

2016-05-28 03:27:07.435+0000: 6930: error : virDriverLoadModule:73 : failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference

2016-05-28 03:27:07.436+0000: 6930: error : virDriverLoadModule:73 : failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol: virStorageFileCreate

Set up the Fedora repository:

The device-mapper package there is 1.02.84.

Now try downloading a newer package.

This mirror has 1.02.107:

https://mirrors.hikvision.com.cn/centos/7.2.1511/updates/x86_64/Packages/

[updates-7.2-device-mapper]

name=CentOS-7.2-local

baseurl=https://mirrors.hikvision.com.cn/centos/7.2.1511/updates/x86_64/

gpgcheck=0

Upgrade:

Upgrading to 1.02.107 produces the same problem, so a different approach is needed.

Try removing first, then upgrading:

# yum remove device-mapper

# yum remove device-mapper-libs

Upgrade:

Transaction check error:

file /usr/lib/systemd/system/blk-availability.service from install of device-mapper-7:1.02.107-5.el7_2.2.x86_64 conflicts with file from package lvm2-7:2.02.105-14.el7.x86_64

file /usr/sbin/blkdeactivate from install of device-mapper-7:1.02.107-5.el7_2.2.x86_64 conflicts with file from package lvm2-7:2.02.105-14.el7.x86_64

file /usr/share/man/man8/blkdeactivate.8.gz from install of device-mapper-7:1.02.107-5.el7_2.2.x86_64 conflicts with file from package lvm2-7:2.02.105-14.el7.x86_64

Install:

# yum install python

yum itself ran into problems:

# yum clean all

First remove the original device-mapper package with rpm, then:

# yum install device-mapper

Verify again:

2016-05-28 05:27:15.342+0000: 31813: error : virDriverLoadModule:73 : failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol: virStorageFileCreate

Still erroring.

So the root cause is still not entirely clear.

Comparing against the standard packstack environment, install a few more packages:

libvirt-glib-0.1.7-3.el7.x86_64

libvirt-gobject-0.1.7-3.el7.x86_64

libvirt-gconfig-0.1.7-3.el7.x86_64

[root@compute1 etc]# /usr/sbin/

Display all 810 possibilities? (y or n)

[root@compute1 etc]# /usr/sbin/libvirtd -l

2016-05-28 04:20:36.938+0000: 11713: info : libvirt version: 1.2.17, package: 13.el7 (CentOS BuildSystem, 2015-11-20-16:24:10, worker1.bsys.centos.org)

2016-05-28 04:20:36.938+0000: 11713: error : virDriverLoadModule:73 : failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference

2016-05-28 04:20:36.940+0000: 11713: error : virDriverLoadModule:73 : failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol: virStorageFileCreate

The error still appears.

> > Updating to device-mapper-1.02.90-1.fc21.x86_64 solved the issue:
>
> Exactly! this is a device-mapper-libs bug where they just didn't export some
> symbol(s) for a several versions.

6.2.8 Current workaround for the libvirtd startup problem

For now, copying libdevmapper.so.1.02 over from a standard CentOS system resolves it. A sketch follows below.
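A minimal sketch of that workaround (assuming the same x86_64 library path on both machines and that 192.168.129.130 is the standard packstack host; back up the original file first):

# cp /usr/lib64/libdevmapper.so.1.02 /usr/lib64/libdevmapper.so.1.02.bak
# scp 192.168.129.130:/usr/lib64/libdevmapper.so.1.02 /usr/lib64/
# ldconfig
# /usr/sbin/libvirtd -l     # the storage and qemu drivers should now load without errors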

6.2.9 Re-verify VM functionality

# systemctl restart libvirtd.service openstack-nova-compute.service

[root@controller1 nova(keystone_admin)]# tail -f nova-conductor.log

Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.

Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner

return func(*args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 84, in select_destinations

filter_properties)

File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 90, in select_destinations

raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

A scheduling failure?

First, this is clearly a problem on the controller node; execution never even reaches nova-compute.

After adjusting the configuration there is some progress:

Error: Instance "cs1" failed to perform the requested operation; the instance is in an error state: please try again later [Error: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 21976cef-af5f-495c-8265-1468a52da7f9. Last exception: [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1].

Execution now reaches nova-compute, but an error shows up there:

[-] Instance failed network setup after 1 attempt(s)

Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1564, in _allocate_network_async

dhcp_options=dhcp_options)

File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 727, in allocate_for_instance

self._delete_ports(neutron, instance, created_port_ids)

File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__

six.reraise(self.type_, self.value, self.tb)

File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 719, in allocate_for_instance

security_group_ids, available_macs, dhcp_opts)

File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 342, in _create_port

raise exception.PortBindingFailed(port_id=port_id)

PortBindingFailed: Binding failed for port 016ad6b1-c0e2-41f3-8111-35c95acf369a, please check neutron logs for more information.

This is probably an OVS problem.
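Before digging into OVS itself, it is worth confirming that the openvswitch agent on compute1 is registered and alive, since ML2 will only bind a port to a host whose L2 agent is reported as up (a quick check, not from the original log; run with admin credentials, e.g. the packstack keystonerc_admin file):

# source ~/keystonerc_admin
# neutron agent-list

compute1 should show an "Open vSwitch agent" entry with alive = :-); if it does not, the binding failures come from the agent side rather than from the bridges.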

On the controller node, neutron.log reports errors:

2016-05-28 14:08:50.559 2498 INFO neutron.wsgi [req-bcc7fd6e-9d5f-4f49-a3c7-4c2fd8d12a59 cfca3361950644de990b52ad341a06f0 617e98e151b245d081203adcbb0ce7a4 - - -] 10.192.44.149 - - [28/May/2016 14:08:50] "GET /v2.0/security-groups.json?tenant_id=617e98e151b245d081203adcbb0ce7a4 HTTP/1.1" 200 1765 0.011616

2016-05-28 14:08:50.653 2498 ERROR neutron.plugins.ml2.managers [req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port 2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-28 14:08:50.653 2498 INFO neutron.plugins.ml2.plugin [req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e - - -] Attempt 2 to bind port 2799fb4c-d513-425d-becb-4947a5c8bfdd

(the same pair of "Failed to bind port 2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1" / "Attempt N to bind port" messages repeats through attempt 10)

2016-05-28 14:08:50.692 2498 ERROR neutron.plugins.ml2.managers [req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e - - -] Failed to bind port 2799fb4c-d513-425d-becb-4947a5c8bfdd on host compute1

2016-05-28 14:08:50.727 2498 INFO neutron.wsgi [req-e0781543-8bec-4b46-9c03-1d872b98535a 2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e - - -] 10.192.44.149 - - [28/May/2016 14:08:50] "POST /v2.0/ports.json HTTP/1.1" 201 933 0.165769

2016-05-28 14:08:50.831 2498 INFO neutron.wsgi [req-42438771-5f5d-484e-a13f-e7d50766cd2a 2398cfe405ac4480b27d3dfba36b64b4 165f6edf748d4bff957beada1f2a728e - - -] 10.192.44.149 - - [28/May/2016 14:08:50] "DELETE /v2.0/ports/2799fb4c-d513-425d-becb-4947a5c8bfdd.json HTTP/1.1" 204 173 0.101503

On the controller node:

# ovs-vsctl del-br br-ex

# ovs-vsctl del-br br-int

On the compute node:

# ovs-vsctl del-br br-int

Reconfigure neutron ML2.

On the controller node:

ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex eth3

ethtool -K eth3 gro off

Modify the configuration.

Restart the services.

Controller node (network node):

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute node:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

Both bridges are in state DOWN:

29: br-ex: mtu 1500 qdisc noop state DOWN

link/ether 88:00:00:01:02:13 brd ff:ff:ff:ff:ff:ff

30: br-int: mtu 1500 qdisc noop state DOWN

link/ether 3e:92:f4:94:22:4c brd ff:ff:ff:ff:ff:ff

# ifconfig br-int up

# ifconfig br-int up

# ifconfig br-int up

Current state:

[root@controller1 openvswitch(keystone_admin)]# ifconfig

br-ex: flags=4163  mtu 1500
        inet6 fe80::8a00:ff:fe01:213  prefixlen 64  scopeid 0x20
        ether 88:00:00:01:02:13  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

br-int: flags=4163  mtu 1500
        inet6 fe80::3c92:f4ff:fe94:224c  prefixlen 64  scopeid 0x20
        ether 3e:92:f4:94:22:4c  txqueuelen 0  (Ethernet)
        RX packets 36  bytes 3024 (2.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Create a VM again:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

Try creating a VM again:

It still hangs in the spawning state.

py:215

[-] Instance failed network setup after 1 attempt(s)

Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1564, in _allocate_network_async

dhcp_options=dhcp_options)

File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 727, in allocate_for_instance

self._delete_ports(neutron, instance, created_port_ids)

File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__

six.reraise(self.type_, self.value, self.tb)

File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 719, in allocate_for_instance

security_group_ids, available_macs, dhcp_opts)

File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 342, in _create_port

raise exception.PortBindingFailed(port_id=port_id)

PortBindingFailed: Binding failed for port 29b7a877-a2c5-4418-b28b-9f4bcf10661e, please check neutron logs for more information.

There are still errors.

# tail -f ovs-vswitchd.log

2016-05-28T06:52:26.272Z|00059|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)

2016-05-28T06:52:40.259Z|00060|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)

2016-05-28T06:52:40.262Z|00061|ofp_util|INFO|normalization changed ofp_match, details:

2016-05-28T06:52:40.262Z|00062|ofp_util|INFO|pre: in_port=2,nw_proto=58,tp_src=136

2016-05-28T06:52:40.262Z|00063|ofp_util|INFO|post: in_port=2

2016-05-28T06:52:40.262Z|00064|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)

2016-05-28T06:52:40.265Z|00065|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 0 s (1 deletes)

2016-05-28T06:55:54.779Z|00066|bridge|INFO|bridge br-int: added interface tapee0d7e7c-7e on port 5

2016-05-28T06:55:54.880Z|00067|netdev_linux|INFO|ioctl(SIOCGIFHWADDR) on tapee0d7e7c-7e device failed: No such device

2016-05-28T06:55:54.883Z|00068|netdev_linux|WARN|ioctl(SIOCGIFINDEX) on tapee0d7e7c-7e device failed: No such device

# tail -f openvswitch-agent.log

neutron.agent.common.ovs_lib [req-d7fbc1ca-da8c-48b8-8c7e-4928586a05ba - - - - -] Port 7cbb1fd7-f6cb-4ba2-be89-1313637afa91 not present in bridge br-int

neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-d7fbc1ca-da8c-48b8-8c7e-4928586a05ba - - - - -] port_unbound(): net_uuid None not in local_vlan_map

neutron.agent.common.ovs_lib [req-d7fbc1ca-da8c-48b8-8c7e-4928586a05ba - - - - -] Port 31566604-5199-4b35-978d-d57cb9458236 not present in bridge br-int

neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-d7fbc1ca-da8c-48b8-8c7e-4928586a05ba - - - - -] port_unbound(): net_uuid None not in local_vlan_map

Presumably br-int should be created and then bound to eth0:1, and this needs to be done manually:

ovs-vsctl add-port br-int eth0:1

[root@controller1 neutron(keystone_admin)]# ovs-vsctl add-port br-int eth0\:1

ovs-vsctl: cannot create a port named eth0:1 because a port named eth0:1 already exists on bridge br-int

So the port is already bound here.

Then what is causing the errors above?

[root@compute1 ml2]# ovs-vsctl add-port br-int eth0:1

ovs-vsctl: cannot create a port named eth0:1 because a port named eth0:1 already exists on bridge br-int

[root@compute1 ml2]# ovs-vsctl show

08399ed1-bb6a-4841-aca5-12a202ebd473

Bridge br-int

fail_mode: secure

Port "eth0:1"

Interface "eth0:1"

Port br-int

Interface br-int

type: internal

ovs_version:"2.4.0"

[root@controller1neutron(keystone_admin)]# ovs-vsctl show

ba32f48c-535c-4edd-b366-9c3ca159d756

Bridge br-ex

Port br-ex

Interface br-ex

type: internal

Port "eth3"

Interface "eth3"

Bridge br-int

fail_mode: secure

Port br-int

Interface br-int

type: internal

Port "eth0:1"

Interface "eth0:1"

Port "tapee0d7e7c-7e"

Interface"tapee0d7e7c-7e"

type: internal

ovs_version: "2.4.0"

Current error: [root@controller1 neutron]# tail -f openvswitch-agent.log keeps scrolling with the following, on both the network node and the compute node:

Found failed openvswitch port: eth0:1
Found failed openvswitch port: eth0:1
(the same line keeps repeating)

Change eth0:1 to eth2 here; none of the eth2 interfaces are in use at the moment anyway.

Delete br-int:

# ovs-vsctl del-br br-int

ovs-vsctl del-br br-int

Then the controller node uses eth3: 172.16.2.148

and the compute node uses eth1: 172.16.2.149.

Update ml2_conf.ini accordingly.

Compute node:

[ovs]

local_ip = 172.16.2.149

tunnel_type = vxlan

enable_tunneling = True

Controller node (network node):

[ovs]

local_ip = 172.16.2.148

bridge_mappings = external:br-ex

tunnel_type = vxlan

enable_tunneling = True

Restart the services.

Right now the controller's br-ex is also bound to eth3, so both bridges end up on eth3; could that cause problems?

If problems persist, try binding br-int to a physical NIC with no cable attached.

Verify first.

Restart the services:

There are errors in the logs.

Move the sub-interface to eth2; eth2 has no cable plugged in, but nobody is using it anyway.

Compute node:

[root@compute1 network-scripts]# ifdown eth2

[root@compute1 network-scripts]# ifup eth2

[root@compute1 network-scripts]# cat ifcfg-eth2

DEVICE=eth2

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=192.168.0.149

NETMASK=255.255.255.0

Network node:

DEVICE=eth2

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=192.168.0.148

NETMASK=255.255.255.0

Bring eth2 up.

Then modify ml2_conf.ini:

[ovs]

local_ip = 192.168.0.148

bridge_mappings = external:br-ex

tunnel_type = vxlan

enable_tunneling = True

[ovs]

local_ip = 192.168.0.149

tunnel_type = vxlan

enable_tunneling = True

Delete br-int:

ovs-vsctl del-br br-int

Restart the services:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

Still failing.

Bring br-int up manually and restart the services.

The same error still appears:

Port 3f304402-80a8-4849-a09f-195546c3e1b8 not present in bridge br-int

Could it be because no cable is plugged in?

Try something bolder.

Controller node:

Use eth0 as the external NIC for br-ex.

Use eth3 for br-int.

Make the following changes:

[root@controller1 ~]# ovs-vsctl list-br

br-ex

br-int

[root@controller1 ~]# ovs-vsctl del-br br-ex

[root@controller1 ~]# ovs-vsctl del-br br-int

[root@controller1 ~]# ovs-vsctl add-br br-ex

[root@controller1 ~]# ovs-vsctl add-port br-ex eth0

Then modify the configuration:

[ovs]

local_ip = 172.16.2.148

bridge_mappings = external:br-ex

tunnel_type = vxlan

enable_tunneling = True

Compute node:

Use eth1 for br-int.

[ovs]

local_ip = 172.16.2.149

tunnel_type = vxlan

enable_tunneling = True

Restart the services.

But then Horizon can no longer be reached.

Bind br-ex to eth1 first, otherwise Horizon becomes inaccessible:

[root@compute1 neutron]# ovs-vsctl del-br br-ex

[root@compute1 neutron]# ovs-vsctl add-br br-ex

[root@compute1 neutron]# ovs-vsctl add-port br-ex eth1

Worst case, we lose external network access.

Restart the services:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute node:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

Same error.

openvswitch-agent.log keeps printing "not present in bridge br-int".

Re-check the configuration.

What does a normal Open vSwitch agent startup look like? It should not print "not present in bridge br-int" at all.

6.2.10 Install a two-node environment for comparison: fully sync the neutron and nova configuration with packstack

Use packstack to install a two-node environment for comparison; in theory this should work, since the setup is now well understood.

For the networking, only a few pieces need to be understood:

br-ex, br-int, br-tun

Controller node:

4: br-ex: mtu 1500 qdisc noqueue state UNKNOWN

link/ether 7a:c8:ee:30:23:4c brd ff:ff:ff:ff:ff:ff

inet6 fe80::78c8:eeff:fe30:234c/64 scope link

valid_lft forever preferred_lft forever

5: br-int: mtu 1500 qdisc noop state DOWN

link/ether 72:6a:10:e8:24:47 brd ff:ff:ff:ff:ff:ff

6: br-tun: mtu 1500 qdisc noop state DOWN

link/ether 9a:2f:55:03:db:40 brd ff:ff:ff:ff:ff:ff

Compute node:

4: br-ex: mtu 1500 qdisc noqueue state UNKNOWN

link/ether 7a:c8:ee:30:23:4c brd ff:ff:ff:ff:ff:ff

inet6 fe80::78c8:eeff:fe30:234c/64 scope link

valid_lft forever preferred_lft forever

5: br-int: mtu 1500 qdisc noop state DOWN

link/ether 72:6a:10:e8:24:47 brd ff:ff:ff:ff:ff:ff

6: br-tun: mtu 1500 qdisc noop state DOWN

link/ether 9a:2f:55:03:db:40 brd ff:ff:ff:ff:ff:ff

Note that even with br-int and br-tun in state DOWN, VM creation is not affected.

The key is how br-ex, br-int and br-tun are created and connected.

For br-ex:

Create it:

ovs-vsctl add-br br-ex

Bind it to the physical NIC:

ovs-vsctl add-port br-ex eth3

ethtool -K eth3 gro off
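For br-int and br-tun, manual ovs-vsctl commands should normally not be needed: neutron-openvswitch-agent ensures br-int exists and creates br-tun itself when tunneling is enabled, based on integration_bridge and tunnel_bridge in its config. A quick check (a sketch, not from the original log):

# systemctl restart neutron-openvswitch-agent.service
# ovs-vsctl list-br      # br-int and br-tun should both be listed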

For the creation of br-int:

On the known-good standard installation:

Controller (network) node:

[root@controller ml2(keystone_admin)]# pwd

/etc/neutron/plugins/ml2

[root@controller ml2(keystone_admin)]# ls

ml2_conf_brocade_fi_ni.ini  ml2_conf_brocade.ini  ml2_conf_fslsdn.ini  ml2_conf.ini  ml2_conf_ofa.ini  ml2_conf_sriov.ini  openvswitch_agent.ini  restproxy.ini  sriov_agent.ini

All the various config files are present here.

# cat ml2_conf.ini | grep -v '^#' | grep -v '^$'

[ml2]

type_drivers = vxlan

tenant_network_types = vxlan

mechanism_drivers =openvswitch

path_mtu = 0

[ml2_type_flat]

[ml2_type_vlan]

[ml2_type_gre]

[ml2_type_vxlan]

vni_ranges =10:100

vxlan_group =224.0.0.1

[ml2_type_geneve]

[securitygroup]

enable_security_group = True

# cat openvswitch_agent.ini | grep -v '^#' | grep -v '^$'

[ovs]

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip =192.168.129.130

enable_tunneling=True

[agent]

polling_interval = 2

tunnel_types =vxlan

vxlan_udp_port =4789

l2_population = False

arp_responder = False

prevent_arp_spoofing = True

enable_distributed_routing = False

drop_flows_on_start=False

[securitygroup]

firewall_driver =neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Compute node:

[root@compute ml2]# pwd

/etc/neutron/plugins/ml2

[root@compute ml2]# ls

openvswitch_agent.ini    (only openvswitch_agent.ini exists here)

# cat openvswitch_agent.ini | grep -v '^#' | grep -v '^$'

[ovs]

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip =192.168.129.131

enable_tunneling=True

[agent]

polling_interval = 2

tunnel_types =vxlan

vxlan_udp_port =4789

l2_population = False

arp_responder = False

prevent_arp_spoofing = True

enable_distributed_routing = False

drop_flows_on_start=False

[securitygroup]

firewall_driver =neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

That is how the configuration is split between the compute node and the network node.

Copy these configurations verbatim onto StorOS and verify.

6.2.10.1 Syncing the neutron configuration on the controller node

1. Copy the (controller + network) node configuration to the controller node, then make the following changes:

(1) Update the IP addresses

[root@controller1 neutron]# grep 192 ./ -r

./api-paste.ini:identity_uri=https://192.168.129.130:35357

./api-paste.ini:admin_password=db12219fd1924853

./api-paste.ini:auth_uri=https://192.168.129.130:5000/v2.0

./neutron.conf:#l3_ha_net_cidr = 169.254.192.0/18

./neutron.conf:nova_url= https://192.168.129.130:8774/v2

./neutron.conf:nova_admin_auth_url=https://192.168.129.130:5000/v2.0

./neutron.conf:auth_uri= https://192.168.129.130:5000/v2.0

./neutron.conf:identity_uri= https://192.168.129.130:35357

./neutron.conf:admin_password= db12219fd1924853

./neutron.conf:connection= mysql://neutron:f8a47932959444e5@192.168.129.130/neutron

./neutron.conf:rabbit_host= 192.168.129.130

./neutron.conf:rabbit_hosts= 192.168.129.130:5672

./metadata_agent.ini:auth_url= https://192.168.129.130:5000/v2.0

./metadata_agent.ini:admin_password= db12219fd1924853

./metadata_agent.ini:nova_metadata_ip= 192.168.129.130

./plugins/ml2/openvswitch_agent.ini:local_ip=192.168.129.130

(2) Change api-paste.ini to:

[filter:authtoken]

identity_uri=https://10.192.44.148:35357

admin_user=neutron

admin_password=1

auth_uri=https://10.192.44.148:5000/v2.0

admin_tenant_name=service

(3) Change neutron.conf to:

[root@controller1 neutron]# cat neutron.conf

[DEFAULT]

verbose = True

router_distributed = False

debug = False

state_path = /var/lib/neutron

use_syslog = False

use_stderr = True

log_dir =/var/log/neutron

bind_host = 0.0.0.0

bind_port = 9696

core_plugin=neutron.plugins.ml2.plugin.Ml2Plugin

service_plugins =router

auth_strategy = keystone

mac_generation_retries = 16

dhcp_lease_duration = 86400

dhcp_agent_notification = True

allow_bulk = True

allow_pagination = False

allow_sorting = False

allow_overlapping_ips = True

advertise_mtu = False

agent_down_time = 75

router_scheduler_driver =neutron.scheduler.l3_agent_scheduler.ChanceScheduler

allow_automatic_l3agent_failover = False

dhcp_agents_per_network = 1

l3_ha = False

api_workers = 4

rpc_workers = 4

use_ssl = False

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = https://10.192.44.148:8774/v2

nova_region_name =RegionOne

nova_admin_username =nova

nova_admin_tenant_name =service

nova_admin_password =1

nova_admin_auth_url =https://10.192.44.148:5000/v2.0

send_events_interval = 2

rpc_response_timeout=60

rpc_backend=rabbit

control_exchange=neutron

lock_path=/var/lib/neutron/lock

[matchmaker_redis]

[matchmaker_ring]

[quotas]

[agent]

root_helper = sudo neutron-rootwrap/etc/neutron/rootwrap.conf

report_interval = 30

[keystone_authtoken]

auth_uri = https://10.192.44.148:5000/v2.0

identity_uri = https://10.192.44.148:35357

admin_tenant_name = service

admin_user = neutron

admin_password = 1

[database]

connection = mysql://neutron:1@10.192.44.148/neutron

max_retries = 10

retry_interval = 10

min_pool_size = 1

max_pool_size = 10

idle_timeout = 3600

max_overflow = 20

[nova]

[oslo_concurrency]

[oslo_policy]

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

kombu_reconnect_delay = 1.0

rabbit_host = 10.192.44.148

rabbit_port = 5672

rabbit_hosts = 10.192.44.148:5672

rabbit_use_ssl = False

rabbit_userid = openstack

rabbit_password = 1

rabbit_virtual_host = /

rabbit_ha_queues = False

heartbeat_rate=2

heartbeat_timeout_threshold=0

[qos]

(4) Change metadata_agent.ini to:

[root@controller1 neutron]# cat metadata_agent.ini

[DEFAULT]

debug =False

auth_url= https://10.192.44.148:5000/v2.0

auth_region= RegionOne

auth_insecure= False

admin_tenant_name= service

admin_user= neutron

admin_password= 1

nova_metadata_ip= 10.192.44.148

nova_metadata_port= 8775

nova_metadata_protocol= http

metadata_proxy_shared_secret=1

metadata_workers=4

metadata_backlog= 4096

cache_url= memory://?default_ttl=5

[AGENT]

(5) Change ./plugins/ml2/openvswitch_agent.ini to:

[ovs]

integration_bridge= br-int

tunnel_bridge= br-tun

local_ip= 192.168.0.148

enable_tunneling=True

[agent]

polling_interval= 2

tunnel_types=vxlan

vxlan_udp_port=4789

l2_population= False

arp_responder= False

prevent_arp_spoofing= True

enable_distributed_routing= False

drop_flows_on_start=False

[securitygroup]

firewall_driver =neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Save.

6.2.10.2 Syncing the neutron configuration on the compute node

[root@compute1 neutron]# grep 192 ./ -r

./neutron.conf:# l3_ha_net_cidr =169.254.192.0/18

./neutron.conf:rabbit_host =192.168.129.130

./neutron.conf:rabbit_hosts =192.168.129.130:5672

./plugins/ml2/openvswitch_agent.ini:local_ip=192.168.129.131

Modify the following configuration:

(1) neutron.conf

[root@compute1 neutron]# cat neutron.conf

[DEFAULT]

verbose =True

debug =False

state_path= /var/lib/neutron

use_syslog= False

use_stderr= True

log_dir=/var/log/neutron

bind_host= 0.0.0.0

bind_port= 9696

core_plugin=neutron.plugins.ml2.plugin.Ml2Plugin

service_plugins=router

auth_strategy= keystone

mac_generation_retries= 16

dhcp_lease_duration= 86400

dhcp_agent_notification= True

allow_bulk= True

allow_pagination= False

allow_sorting= False

allow_overlapping_ips= True

advertise_mtu= False

dhcp_agents_per_network= 1

use_ssl =False

rpc_response_timeout=60

rpc_backend=rabbit

control_exchange=neutron

lock_path=/var/lib/neutron/lock

[matchmaker_redis]

[matchmaker_ring]

[quotas]

[agent]

root_helper= sudo neutron-rootwrap /etc/neutron/rootwrap.conf

report_interval= 30

[keystone_authtoken]

auth_uri =https://10.192.44.148:35357/v2.0/

identity_uri =https://10.192.44.148:5000

admin_tenant_name = service

admin_user = neutron

admin_password = 1

[database]

[nova]

[oslo_concurrency]

[oslo_policy]

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

kombu_reconnect_delay= 1.0

rabbit_host = 10.192.44.148

rabbit_port= 5672

rabbit_hosts =10.192.44.148:5672

rabbit_use_ssl= False

rabbit_userid = openstack

rabbit_password = 1

rabbit_virtual_host= /

rabbit_ha_queues= False

heartbeat_rate=2

heartbeat_timeout_threshold=0

[qos]

(2) ./plugins/ml2/openvswitch_agent.ini

[ovs]

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip =192.168.0.149

enable_tunneling=True

[agent]

polling_interval = 2

tunnel_types =vxlan

vxlan_udp_port =4789

l2_population = False

arp_responder = False

prevent_arp_spoofing = True

enable_distributed_routing = False

drop_flows_on_start=False

[securitygroup]

firewall_driver =neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

6.2.10.3 Delete and recreate the bridges, restart the services and verify

Delete all bridges:

[root@compute1 ml2]# ovs-vsctl del-br br-int

[root@compute1 ml2]# ovs-vsctl del-br br-ex

[root@compute1 ml2]# ovs-vsctl list-br

[root@compute1 ml2]# ovs-vsctl show

08399ed1-bb6a-4841-aca5-12a202ebd473

ovs_version: "2.4.0"

Set the controller node's eth1 to 10.192.44.152:

DEVICE=eth3

ONBOOT=yes

STARTMODE=onboot

MTU=1500

BOOTPROTO=static

IPADDR=10.192.44.152

Network layout:

             Eth0            Eth1                              Eth2
Controller   10.192.44.148   10.192.44.152 (was 172.6.2.148)   192.168.0.148
Compute      10.192.44.149   172.6.2.149                       192.168.0.149

br-ex

Create br-ex:

ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex eth1

# ifconfig br-ex up

Restart the openvswitch and neutron services.

Controller node (network node):

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute node:

systemctl restart openvswitch.service

systemctl restart neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service

Create a VXLAN network (a sketch of the commands follows below).
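A minimal sketch of that step (the names and the CIDR are illustrative, not from the original log; run with admin credentials sourced):

# source ~/keystonerc_admin
# neutron net-create private_net
# neutron subnet-create private_net 192.168.100.0/24 --name private_subnet
# neutron net-list

With type_drivers = vxlan and tenant_network_types = vxlan in ml2_conf.ini, the tenant network is created as a VXLAN segment automatically.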

6.2.10.4 Syncing the nova configuration on the controller node

Before the changes:

./nova.conf:metadata_host=192.168.129.130

./nova.conf:sql_connection=mysql://nova:fe14dd4229d44ecc@192.168.129.130/nova

./nova.conf:api_servers=192.168.129.130:9292

./nova.conf:auth_uri=https://192.168.129.130:5000/v2.0

./nova.conf:identity_uri=https://192.168.129.130:35357

./nova.conf:url=https://192.168.129.130:9696

./nova.conf:admin_password=db12219fd1924853

./nova.conf:admin_auth_url=https://192.168.129.130:5000/v2.0

./nova.conf:rabbit_host=192.168.129.130

./nova.conf:rabbit_hosts=192.168.129.130:5672

After the changes:

[root@controller1 nova]# cat nova.conf

[DEFAULT]

novncproxy_host=0.0.0.0

novncproxy_port=6080

notify_api_faults=False

state_path=/var/lib/nova

report_interval=10

enabled_apis=ec2,osapi_compute,metadata

ec2_listen=0.0.0.0

ec2_listen_port=8773

ec2_workers=4

osapi_compute_listen=0.0.0.0

osapi_compute_listen_port=8774

osapi_compute_workers=4

metadata_listen=0.0.0.0

metadata_listen_port=8775

metadata_workers=4

service_down_time=60

rootwrap_config=/etc/nova/rootwrap.conf

volume_api_class=nova.volume.cinder.API

auth_strategy=keystone

use_forwarded_for=False

cpu_allocation_ratio=16.0

ram_allocation_ratio=1.5

network_api_class=nova.network.neutronv2.api.API

default_floating_pool=public

force_snat_range =0.0.0.0/0

metadata_host=10.192.44.148

dhcp_domain=novalocal

security_group_api=neutron

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter

scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

vif_plugging_is_fatal=True

vif_plugging_timeout=300

firewall_driver=nova.virt.firewall.NoopFirewallDriver

debug=False

verbose=True

log_dir=/var/log/nova

use_syslog=False

syslog_log_facility=LOG_USER

use_stderr=True

notification_topics=notifications

rpc_backend=rabbit

amqp_durable_queues=False

sql_connection=mysql://nova:1@10.192.44.148/nova

image_service=nova.image.glance.GlanceImageService

lock_path=/var/lib/nova/tmp

osapi_volume_listen=0.0.0.0

novncproxy_base_url=https://0.0.0.0:6080/vnc_auto.html

[api_database]

[barbican]

[cells]

[cinder]

catalog_info=volumev2:cinderv2:publicURL

[conductor]

[cors]

[cors.subdomain]

[database]

[ephemeral_storage_encryption]

[glance]

api_servers=10.192.44.148:9292

[guestfs]

[hyperv]

[image_file_url]

[ironic]

[keymgr]

[keystone_authtoken]

auth_uri=https://10.192.44.148:5000/v2.0

identity_uri=https://10.192.44.148:35357

admin_user=nova

admin_password=1

admin_tenant_name=service

[libvirt]

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

[matchmaker_redis]

[matchmaker_ring]

[metrics]

[neutron]

service_metadata_proxy=True

metadata_proxy_shared_secret=1

url=https://10.192.44.148:9696

admin_username=neutron

admin_password=1

admin_tenant_name=service

region_name=RegionOne

admin_auth_url=https://10.192.44.148:5000/v2.0

auth_strategy=keystone

ovs_bridge=br-int

extension_sync_interval=600

timeout=30

default_tenant_id=default

[osapi_v21]

[oslo_concurrency]

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

kombu_reconnect_delay=1.0

rabbit_host=10.192.44.148

rabbit_port=5672

rabbit_hosts=10.192.44.148:5672

rabbit_use_ssl=False

rabbit_userid=openstack

rabbit_password=1

rabbit_virtual_host=/

rabbit_ha_queues=False

heartbeat_timeout_threshold=0

heartbeat_rate=2

[oslo_middleware]

[rdp]

[serial_console]

[spice]

[ssl]

[trusted_computing]

[upgrade_levels]

[vmware]

[vnc]

[workarounds]

[xenserver]

[zookeeper]

[osapi_v3]

enabled=False

6.2.10.5 Syncing the nova configuration on the compute node

Before the changes:

./nova.conf:metadata_host=192.168.129.130

./nova.conf:sql_connection=mysql://nova@192.168.129.130/nova

./nova.conf:novncproxy_base_url=https://192.168.129.130:6080/vnc_auto.html

./nova.conf:api_servers=192.168.129.130:9292

./nova.conf:url=https://192.168.129.130:9696

./nova.conf:admin_password=db12219fd1924853

./nova.conf:admin_auth_url=https://192.168.129.130:5000/v2.0

./nova.conf:rabbit_host=192.168.129.130

./nova.conf:rabbit_hosts=192.168.129.130:5672

[root@compute1 nova]#

After the changes:

[root@compute1 nova]# cat nova.conf

[DEFAULT]

internal_service_availability_zone=internal

default_availability_zone=nova

notify_api_faults=False

state_path=/var/lib/nova

report_interval=10

compute_manager=nova.compute.manager.ComputeManager

service_down_time=60

rootwrap_config=/etc/nova/rootwrap.conf

volume_api_class=nova.volume.cinder.API

auth_strategy=keystone

heal_instance_info_cache_interval=60

reserved_host_memory_mb=512

network_api_class=nova.network.neutronv2.api.API

force_snat_range =0.0.0.0/0

metadata_host=10.192.44.148

dhcp_domain=novalocal

security_group_api=neutron

compute_driver=libvirt.LibvirtDriver

vif_plugging_is_fatal=True

vif_plugging_timeout=300

firewall_driver=nova.virt.firewall.NoopFirewallDriver

force_raw_images=True

debug=False

verbose=True

log_dir=/var/log/nova

use_syslog=False

syslog_log_facility=LOG_USER

use_stderr=True

notification_topics=notifications

rpc_backend=rabbit

amqp_durable_queues=False

vncserver_proxyclient_address=compute

vnc_keymap=en-us

sql_connection=mysql://nova@10.192.44.148/nova

vnc_enabled=True

image_service=nova.image.glance.GlanceImageService

lock_path=/var/lib/nova/tmp

vncserver_listen=0.0.0.0

novncproxy_base_url=https://10.192.44.148:6080/vnc_auto.html

[api_database]

[barbican]

[cells]

[cinder]

[conductor]

[cors]

[cors.subdomain]

[database]

[ephemeral_storage_encryption]

[glance]

api_servers=10.192.44.148:9292

[guestfs]

[hyperv]

[image_file_url]

[ironic]

[keymgr]

[keystone_authtoken]

auth_uri=https://192.168.129.130:5000/v2.0

identity_uri=https://192.168.129.130:35357

admin_user=nova

admin_password=1

admin_tenant_name=service

[libvirt]

virt_type=qemu

inject_password=False

inject_key=False

inject_partition=-1

live_migration_uri=qemu+tcp://nova@%s/system

cpu_mode=none

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

[matchmaker_redis]

[matchmaker_ring]

[metrics]

[neutron]

url=https://10.192.44.148:9696

admin_username=neutron

admin_password=1

admin_tenant_name=service

region_name=RegionOne

admin_auth_url=https://10.192.44.148:5000/v2.0

auth_strategy=keystone

ovs_bridge=br-int

extension_sync_interval=600

timeout=30

default_tenant_id=default

[osapi_v21]

[oslo_concurrency]

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

kombu_reconnect_delay=1.0

rabbit_host=10.192.44.148

rabbit_port=5672

rabbit_hosts=10.192.44.148:5672

rabbit_use_ssl=False

rabbit_userid=openstack

rabbit_password=1

rabbit_virtual_host=/

rabbit_ha_queues=False

heartbeat_timeout_threshold=0

heartbeat_rate=2

[oslo_middleware]

[rdp]

[serial_console]

[spice]

[ssl]

[trusted_computing]

[upgrade_levels]

[vmware]

[vnc]

[workarounds]

[xenserver]

[zookeeper]

6.2.10.6 Restart the nova services and create a VM

Controller node:

systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service

Compute node:

systemctl restart libvirtd.service openstack-nova-compute.service

[root@controller1 ~(keystone_admin)]# nova service-list

+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | controller1 | internal | enabled | up    | 2016-05-28T12:45:59.000000 | -               |
| 2  | nova-consoleauth | controller1 | internal | enabled | up    | 2016-05-28T12:45:59.000000 | -               |
| 3  | nova-conductor   | controller1 | internal | enabled | up    | 2016-05-28T12:45:59.000000 | -               |
| 4  | nova-scheduler   | controller1 | internal | enabled | up    | 2016-05-28T12:46:00.000000 | -               |
| 5  | nova-compute     | compute1    | nova     | enabled | up    | 2016-05-28T12:45:57.000000 | -               |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

[root@controller1 ~(keystone_admin)]#

6.2.11 VM creation succeeds; save the configuration

7 Ceph Installation

Plan for today:

(1) Install nova-compute on controller1

(2) Create the Ceph cluster and configure the Ceph backend

(3) Verify VM migration

(4) High-availability configuration

Note: remember to adjust the network configuration files, otherwise routing will be broken after the nodes come back up.

Keep the gateway setting on the primary NIC and remove it from all the others.

Remember to dd over hda4 (or delete and repartition it), remove the old cinder-volumes VG, and restore lvm.conf.

[root@controller1 ntp(keystone_admin)]# vgremove cinder-volumes

Volume group "cinder-volumes" successfully removed

[root@controller1 ntp(keystone_admin)]# pv
pvchange  pvck  pvcreate  pvdisplay  pvmove  pvremove  pvresize  pvs  pvscan

[root@controller1 ntp(keystone_admin)]# pvremove /dev/hda4

Labels on physical volume "/dev/hda4" successfully wiped

[root@controller1 ntp(keystone_admin)]# vgs

No volume groups found

[root@controller1 ntp(keystone_admin)]# pvs

[root@controller1 ntp(keystone_admin)]#

7.1 Creating the Ceph cluster: disk preparation

Delete the existing cloud disks, remove the cinder-volumes VG, then wipe the hda4 metadata.

[root@compute1 network-scripts]# vgremove cinder-volumes

Volume group "cinder-volumes" successfully removed

[root@compute1 network-scripts]# pvremove /dev/hda4

Labels on physical volume "/dev/hda4" successfully wiped

Create hda4 on .150 and .151 as well.

To make the kernel pick up the new partitions without a reboot:

# partprobe

[root@localhost scsi_host]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
hda      8:0    0 119.2G  0 disk
├─hda1   8:1    0  19.5G  0 part /
├─hda2   8:2    0   5.9G  0 part /dom/storoswd/b_iscsi/config
└─hda3   8:3    0   3.9G  0 part /dom/storoswd/b_iscsi/log
sdb      8:16   0   1.8T  0 disk
sdc      8:32   0   1.8T  0 disk
sdd      8:48   0   1.8T  0 disk
sde      8:64   0   1.8T  0 disk

[root@localhost scsi_host]# partprobe
[root@localhost scsi_host]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
hda      8:0    0 119.2G  0 disk
├─hda1   8:1    0  19.5G  0 part /
├─hda2   8:2    0   5.9G  0 part /dom/storoswd/b_iscsi/config
├─hda3   8:3    0   3.9G  0 part /dom/storoswd/b_iscsi/log
└─hda4   8:4    0    90G  0 part
sdb      8:16   0   1.8T  0 disk
sdc      8:32   0   1.8T  0 disk
sdd      8:48   0   1.8T  0 disk
sde      8:64   0   1.8T  0 disk

7.2 Creating the Ceph cluster: downloading and installing Ceph

7.2.1 Download and install

yum install ceph -y

yum install ceph-deploy -y

yum install yum-plugin-priorities -y

yum install snappy leveldb gdisk python-argparse gperftools-libs -y

7.2.2 Install the monitors

Disable the firewall:

systemctl stop firewalld.service; systemctl disable firewalld.service

ceph-deploy new controller1 compute1 controller2 compute2

[root@controller1 ceph]# cat ceph.conf

[global]

fsid = d62855a0-c03c-448d-b3c5-7518640060c9

mon_initial_members = controller1, compute1, controller2, compute2

mon_host = 10.192.44.148,10.192.44.149,10.192.44.150,10.192.44.151

auth_cluster_required = none

auth_service_required = none

auth_client_required = none

filestore_xattr_use_omap = true

osd_pool_default_size = 4

public network = 10.192.44.0/23

cluster network = 10.192.44.0/23

ceph-deploy install controller1 compute1 controller2 compute2

ceph-deploy --overwrite-conf mon create-initial

ceph-deploy mon create controller1 compute1 controller2 compute2

ceph-deploy gatherkeys controller1 compute1 controller2 compute2

[root@controller1 ceph]# scp * 10.192.44.149:/etc/ceph

[root@controller1 ceph]# scp * 10.192.44.150:/etc/ceph

[root@controller1 ceph]# scp * 10.192.44.151:/etc/ceph
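At this point the monitors can be sanity-checked before moving on to the OSDs (a quick check, not in the original log):

# ceph -s
# ceph quorum_status --format json-pretty

All four monitors should appear in the quorum list.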

7.2.3 Install the OSDs

Partitioning:

Split hda into four partitions (hda5 through hda8), roughly 20 GB each.

First create a ~90 GB extended partition hda4,

then carve four logical partitions out of it (a sketch follows below).
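A sketch of the partitioning with parted (the start offsets and sizes below are only illustrative assumptions; adjust them to the real free space on the disk, and note that the actual logical-partition sizes in the lsblk output further down differ per node):

# parted /dev/hda -- mkpart extended 29.3GiB 100%
# parted /dev/hda -- mkpart logical 29.4GiB 52GiB
# parted /dev/hda -- mkpart logical 52GiB 75GiB
# parted /dev/hda -- mkpart logical 75GiB 98GiB
# parted /dev/hda -- mkpart logical 98GiB 100%
# partprobe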

[root@controller1 ceph]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
hda      8:0    0 119.2G  0 disk
├─hda1   8:1    0  19.5G  0 part /
├─hda2   8:2    0   5.9G  0 part /dom/storoswd/b_iscsi/config
├─hda3   8:3    0   3.9G  0 part /dom/storoswd/b_iscsi/log
├─hda4   8:4    0     1K  0 part
├─hda5   8:5    0  27.9G  0 part
├─hda6   8:6    0  28.6G  0 part
├─hda7   8:7    0  19.1G  0 part
└─hda8   8:8    0  14.3G  0 part
sdb      8:16   0   1.8T  0 disk
└─sdb1   8:17   0   1.8T  0 part /var/lib/ceph/osd/ceph-0
sdc      8:32   0   1.8T  0 disk
└─sdc1   8:33   0   1.8T  0 part /var/lib/ceph/osd/ceph-4
sdd      8:48   0   1.8T  0 disk
└─sdd1   8:49   0   1.8T  0 part /var/lib/ceph/osd/ceph-8
sde      8:64   0   1.8T  0 disk
└─sde1   8:65   0   1.8T  0 part /var/lib/ceph/osd/ceph-12

[root@controller1 ceph]#

[root@compute1 ntp]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
hda      8:0    0 119.2G  0 disk
├─hda1   8:1    0  19.5G  0 part /
├─hda2   8:2    0   5.9G  0 part /dom/storoswd/b_iscsi/config
├─hda3   8:3    0   3.9G  0 part /dom/storoswd/b_iscsi/log
├─hda4   8:4    0     1K  0 part
├─hda5   8:5    0  23.2G  0 part
├─hda6   8:6    0  28.6G  0 part
├─hda7   8:7    0  14.3G  0 part
└─hda8   8:8    0  23.9G  0 part
sdb      8:16   0   1.8T  0 disk
└─sdb1   8:17   0   1.8T  0 part /var/lib/ceph/osd/ceph-2
sdc      8:32   0   1.8T  0 disk
└─sdc1   8:33   0   1.8T  0 part /var/lib/ceph/osd/ceph-6
sdd      8:48   0   1.8T  0 disk
└─sdd1   8:49   0   1.8T  0 part /var/lib/ceph/osd/ceph-10
sde      8:64   0   1.8T  0 disk
└─sde1   8:65   0   1.8T  0 part /var/lib/ceph/osd/ceph-14

[root@controller2 ntp]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
hda      8:0    0 119.2G  0 disk
├─hda1   8:1    0  19.5G  0 part /
├─hda2   8:2    0   5.9G  0 part /dom/storoswd/b_iscsi/config
├─hda3   8:3    0   3.9G  0 part /dom/storoswd/b_iscsi/log
├─hda4   8:4    0     1K  0 part
├─hda5   8:5    0  23.2G  0 part
├─hda6   8:6    0  19.1G  0 part
├─hda7   8:7    0  19.1G  0 part
└─hda8   8:8    0  28.7G  0 part
sdb      8:16   0   1.8T  0 disk
└─sdb1   8:17   0   1.8T  0 part /var/lib/ceph/osd/ceph-1
sdc      8:32   0   1.8T  0 disk
└─sdc1   8:33   0   1.8T  0 part /var/lib/ceph/osd/ceph-5
sdd      8:48   0   1.8T  0 disk
└─sdd1   8:49   0   1.8T  0 part /var/lib/ceph/osd/ceph-9
sde      8:64   0   1.8T  0 disk
└─sde1   8:65   0   1.8T  0 part /var/lib/ceph/osd/ceph-13

[root@compute2 ~]# partprobe
[root@compute2 ~]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
hda      8:0    0 111.8G  0 disk
├─hda1   8:1    0  19.5G  0 part /
├─hda2   8:2    0   5.9G  0 part /dom/storoswd/b_iscsi/config
├─hda3   8:3    0   3.9G  0 part /dom/storoswd/b_iscsi/log
├─hda4   8:4    0     1K  0 part
├─hda5   8:5    0  23.2G  0 part
├─hda6   8:6    0  19.1G  0 part
├─hda7   8:7    0  19.1G  0 part
└─hda8   8:8    0  21.2G  0 part
sdb      8:16   0   1.8T  0 disk
└─sdb1   8:17   0   1.8T  0 part
sdc      8:32   0   1.8T  0 disk
└─sdc1   8:33   0   1.8T  0 part
sdd      8:48   0   1.8T  0 disk
└─sdd1   8:49   0   1.8T  0 part
sde      8:64   0   1.8T  0 disk
└─sde1   8:65   0   1.8T  0 part

# ceph-deploy osd prepare controller1:/dev/sdb1:/dev/hda5 controller2:/dev/sdb1:/dev/hda5 compute1:/dev/sdb1:/dev/hda5 compute2:/dev/sdb1:/dev/hda5

ceph-deploy osd activate controller1:/dev/sdb1:/dev/hda5 controller2:/dev/sdb1:/dev/hda5 compute1:/dev/sdb1:/dev/hda5 compute2:/dev/sdb1:/dev/hda5
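After activation, the OSD layout and overall cluster state can be verified (a quick check, not from the original log; with osd_pool_default_size = 4, health may not settle until enough OSDs are up and in):

# ceph osd tree
# ceph -s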
