Brian,
Thanks so far - I forgot to mention that this is all on CentOS 7.
My nova.conf and neutron.conf files both appear to be configured
correctly. I'm fairly certain the problem lies in the OVS
configuration, but I don't know where. What other information do you need?
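In case it helps, this is what I've been poking at on node3 so far. The
grep is recursive because I'm not sure which file packstack writes the
bridge mapping to on this release, so treat the paths as a guess:

# find where the OVS agent maps the physical network to a bridge
grep -r bridge_mappings /etc/neutron/

# check which interfaces are actually attached to the bond bridge
ovs-vsctl list-ports br-bond0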
Regards,
Dan
On 12/04/2014 04:07 PM, Afshar, Brian wrote:
Hi Dan,
Take a look at your nova.conf file and make sure that your controller name or IP address
is listed correctly. I need more information in order to figure out where the network
connection is dropping on your systems; from what you've sent so far it is hard to tell
what went wrong. Are you using CentOS or RHEL, and which version?
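For example, on a Juno-era compute node the controller address usually shows up in
entries like these; the exact option names vary between releases, so treat this only
as a rough pointer:

grep -E '^(rabbit_host|my_ip|vncserver_proxyclient_address)' /etc/nova/nova.conf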
Regards,
Brian
-----Original Message-----
From: Dan Mossor [mailto:danofsatx@gmail.com]
Sent: Thursday, December 04, 2014 12:05 PM
To: Afshar, Brian; rdo-list(a)redhat.com
Subject: Re: [Rdo-list] Packstack, Neutron, and Openvswitch
I've already created the answer file -
http://fpaste.org/156624/
Packstack has already run, and deployed to my systems. My problem is that I still have no
network connectivity, other than the management network - packstack is not configuring ovs
to talk to the bond0 interface, or I'm doing something wrong. This is what I'm
trying to figure out.
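If it helps, these are the two answer-file settings that I believe control how the bond
gets wired into OVS; I'm not certain I have the values right, which may well be the problem:

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-bond0
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-bond0:bond0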
Dan
On 12/04/2014 10:55 AM, Afshar, Brian wrote:
> As for your answers.txt file: if you haven't already run these steps, first make sure that you can ping your compute node(s) from your controller node, then run the following commands:
>
> # yum install openstack-packstack -y
> # packstack --gen-answer-file=openstack-answers.txt
>
> Once your answers.txt file is generated, you will need to edit it (with vi, for example) and provide information about your node(s).
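> For example, the entries that usually need changing look roughly like this (the exact
> parameter names depend on your packstack version, so go by the comments in the generated file):
>
> CONFIG_CONTROLLER_HOST=<controller IP>
> CONFIG_COMPUTE_HOSTS=<comma-separated compute node IPs>
> CONFIG_NETWORK_HOSTS=<node(s) that should run the neutron agents>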
>
> Hope that gives you a running start...at least!
>
>
> Regards,
>
> Brian
>
> -----Original Message-----
> From: rdo-list-bounces(a)redhat.com [mailto:rdo-list-bounces@redhat.com] On Behalf Of Dan Mossor
> Sent: Thursday, December 04, 2014 8:30 AM
> To: rdo-list(a)redhat.com
> Subject: [Rdo-list] Packstack, Neutron, and Openvswitch
>
> Howdy folks!
>
> I am still trying to get an OpenStack deployment working using packstack. I've done a lot of reading, but apparently not quite enough, since I can't seem to get my compute nodes to talk to the network. Any pointers anyone can give would be *greatly* appreciated.
>
> Here's the setup:
> Controller - 1 NIC, enp0s25
> Compute Node node3: 3 NICs. enp0s25 mgmt, enp1s0 and enp3s0 slaved to bond0
> Compute Node node4: 3 NICs. enp0s25 mgmt, enp1s0 and enp2s0 slaved to bond0
>
> I wanted to deploy the neutron services to the compute nodes to take advantage of the bonded interfaces. The trouble is, I don't think I have my answer file [1] set up properly yet.
>
> After the packstack deployment, this is what I have on node3 (I'm going to concentrate solely on this system, as the only difference in node4 is one of the physical interface names).
>
> [root@node3 ~]# ip link show
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
>     link/ether 00:22:19:30:67:04 brd ff:ff:ff:ff:ff:ff
> 3: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000
>     link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff
> 4: enp3s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000
>     link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff
> 5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP mode DEFAULT
>     link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff
> 7: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
>     link/ether 76:2d:a5:ea:77:58 brd ff:ff:ff:ff:ff:ff
> 8: br-int: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT
>     link/ether e6:ff:b9:c0:85:47 brd ff:ff:ff:ff:ff:ff
> 11: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT
>     link/ether 7a:74:54:18:6d:45 brd ff:ff:ff:ff:ff:ff
> 12: br-bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT
>     link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff
> 13: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
>     link/ether 72:58:fa:b0:8c:45 brd ff:ff:ff:ff:ff:ff
> [root@node3 ~]# ovs-vsctl show
> ca6d23ad-c88e-48db-9ace-6a3aff767460
>     Bridge br-ex
>         Port br-ex
>             Interface br-ex
>                 type: internal
>     Bridge br-tun
>         Port patch-int
>             Interface patch-int
>                 type: patch
>                 options: {peer=patch-tun}
>         Port br-tun
>             Interface br-tun
>                 type: internal
>         Port "vxlan-0a010168"
>             Interface "vxlan-0a010168"
>                 type: vxlan
>                 options: {df_default="true", in_key=flow, local_ip="10.1.1.103", out_key=flow, remote_ip="10.1.1.104"}
>     Bridge "br-bond0"
>         Port "phy-br-bond0"
>             Interface "phy-br-bond0"
>                 type: patch
>                 options: {peer="int-br-bond0"}
>         Port "bond0"
>             Interface "bond0"
>         Port "br-bond0"
>             Interface "br-bond0"
>                 type: internal
>     Bridge br-int
>         fail_mode: secure
>         Port "int-br-bond0"
>             Interface "int-br-bond0"
>                 type: patch
>                 options: {peer="phy-br-bond0"}
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port patch-tun
>             Interface patch-tun
>                 type: patch
>                 options: {peer=patch-int}
>     ovs_version: "2.1.3"
>
>
> The trouble lies in the fact that I have NO IDEA how to use Open vSwitch. None. This ovs-vsctl output is foreign to me, and makes no sense.
>
> At the very least, I'm simply looking for a good reference - so far, I've not been able to find decent documentation. Does it exist?
>
> Thanks,
> Dan
>
> [1] http://fpaste.org/156624/
>
> --
> Dan Mossor, RHCSA
> Systems Engineer at Large
> Fedora Plasma Product WG | Fedora QA Team | Fedora Server SIG
> Fedora Infrastructure Apprentice
> FAS: dmossor IRC: danofsatx
> San Antonio, Texas, USA
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list(a)redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
--
Dan Mossor, RHCSA
Systems Engineer at Large
Fedora Plasma Product WG | Fedora QA Team | Fedora Server SIG
Fedora Infrastructure Apprentice
FAS: dmossor IRC: danofsatx
San Antonio, Texas, USA