[Rdo-list] Packstack, Neutron, and Openvswitch
Veaceslav (Slava) Mindru
vmindru@redhat.com
Thu Dec 4 16:35:10 UTC 2014
Hi,

Did you try this one?
https://openstack.redhat.com/Neutron_with_existing_external_network

I think your external NIC is missing from br-ex.
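That page boils down to moving the NIC's IP config onto the bridge.
A minimal sketch, assuming your controller's enp0s25 is the external
NIC (the addresses below are placeholders; use your real ones):

  # /etc/sysconfig/network-scripts/ifcfg-br-ex
  DEVICE=br-ex
  DEVICETYPE=ovs
  TYPE=OVSBridge
  BOOTPROTO=static
  # placeholders: use the IP/netmask/gateway enp0s25 has today
  IPADDR=192.0.2.10
  NETMASK=255.255.255.0
  GATEWAY=192.0.2.1
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-enp0s25
  DEVICE=enp0s25
  TYPE=OVSPort
  DEVICETYPE=ovs
  OVS_BRIDGE=br-ex
  ONBOOT=yes

After a network restart, br-ex carries the host IP and enp0s25 is
just a port on the bridge.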
VM
On 04/12/14 10:29 -0600, Dan Mossor wrote:
>Howdy folks!
>
>I am still trying to get an OpenStack deployment working using
>packstack. I've done a lot of reading, but apparently not quite enough
>since I can't seem to get my compute nodes to talk to the network. Any
>pointers anyone can give would be *greatly* appreciated.
>
>Here's the setup:
>Controller - 1 NIC, enp0s25
>Compute Node node3: 3 NICs: enp0s25 (mgmt); enp1s0 and enp3s0 slaved to bond0
>Compute Node node4: 3 NICs: enp0s25 (mgmt); enp1s0 and enp2s0 slaved to bond0
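>
>The bond on each node is a plain kernel bond, built along these lines
>(a sketch only; the bonding mode shown is illustrative, not
>necessarily what I'm running):
>
>  # /etc/sysconfig/network-scripts/ifcfg-bond0
>  DEVICE=bond0
>  TYPE=Bond
>  BONDING_MASTER=yes
>  # mode here is illustrative
>  BONDING_OPTS="mode=active-backup miimon=100"
>  BOOTPROTO=none
>  ONBOOT=yes
>
>  # /etc/sysconfig/network-scripts/ifcfg-enp1s0 (enp3s0 matches)
>  DEVICE=enp1s0
>  MASTER=bond0
>  SLAVE=yes
>  BOOTPROTO=none
>  ONBOOT=yes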
>
>I wanted to deploy the neutron services to the compute nodes to take
>advantage of the bonded interfaces. The trouble is, I don't think I
>have my answer file [1] set up properly yet.
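>
>As far as I can tell, the options that drive this part of the answer
>file are the OVS bridge mappings; something along these lines is what
>I'm aiming for (illustrative values, not a paste from my real file):
>
>  CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-bond0
>  CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-bond0:bond0
>  CONFIG_NEUTRON_OVS_TUNNEL_IF=bond0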
>
>After the packstack deployment, this is what I have on node3 (I'm
>going to concentrate solely on this system, as the only difference in
>node4 is one of the physical interface names).
>
>[root@node3 ~]# ip link show
>1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>mode DEFAULT
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>pfifo_fast state UP mode DEFAULT qlen 1000
> link/ether 00:22:19:30:67:04 brd ff:ff:ff:ff:ff:ff
>3: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc
>pfifo_fast master bond0 state UP mode DEFAULT qlen 1000
> link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff
>4: enp3s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc
>pfifo_fast master bond0 state UP mode DEFAULT qlen 1000
> link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff
>5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc
>noqueue master ovs-system state UP mode DEFAULT
> link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff
>7: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>mode DEFAULT
> link/ether 76:2d:a5:ea:77:58 brd ff:ff:ff:ff:ff:ff
>8: br-int: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>state UNKNOWN mode DEFAULT
> link/ether e6:ff:b9:c0:85:47 brd ff:ff:ff:ff:ff:ff
>11: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>state UNKNOWN mode DEFAULT
> link/ether 7a:74:54:18:6d:45 brd ff:ff:ff:ff:ff:ff
>12: br-bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>state UNKNOWN mode DEFAULT
> link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff
>13: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode
>DEFAULT
> link/ether 72:58:fa:b0:8c:45 brd ff:ff:ff:ff:ff:ff
>[root@node3 ~]# ovs-vsctl show
>ca6d23ad-c88e-48db-9ace-6a3aff767460
> Bridge br-ex
> Port br-ex
> Interface br-ex
> type: internal
> Bridge br-tun
> Port patch-int
> Interface patch-int
> type: patch
> options: {peer=patch-tun}
> Port br-tun
> Interface br-tun
> type: internal
> Port "vxlan-0a010168"
> Interface "vxlan-0a010168"
> type: vxlan
> options: {df_default="true", in_key=flow,
>local_ip="10.1.1.103", out_key=flow, remote_ip="10.1.1.104"}
> Bridge "br-bond0"
> Port "phy-br-bond0"
> Interface "phy-br-bond0"
> type: patch
> options: {peer="int-br-bond0"}
> Port "bond0"
> Interface "bond0"
> Port "br-bond0"
> Interface "br-bond0"
> type: internal
> Bridge br-int
> fail_mode: secure
> Port "int-br-bond0"
> Interface "int-br-bond0"
> type: patch
> options: {peer="phy-br-bond0"}
> Port br-int
> Interface br-int
> type: internal
> Port patch-tun
> Interface patch-tun
> type: patch
> options: {peer=patch-int}
> ovs_version: "2.1.3"
>
>
>The trouble lies in the fact that I have NO IDEA how to use Open
>vSwitch. None. This ovs-vsctl output is foreign to me and makes no
>sense.
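>
>From what I've pieced together, these are the standard commands for
>poking at OVS, but I still can't interpret what they return:
>
>  ovs-vsctl list-br              # list the bridges
>  ovs-vsctl list-ports br-bond0  # ports attached to one bridge
>  ovs-ofctl show br-tun          # port numbers and link state
>  ovs-ofctl dump-flows br-int    # flow rules installed on a bridge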
>
>At the very least, I'm simply looking for a good reference - so far,
>I've not been able to find decent documentation. Does it exist?
>
>Thanks,
>Dan
>
>[1] http://fpaste.org/156624/
>
>--
>Dan Mossor, RHCSA
>Systems Engineer at Large
>Fedora Plasma Product WG | Fedora QA Team | Fedora Server SIG
>Fedora Infrastructure Apprentice
>FAS: dmossor IRC: danofsatx
>San Antonio, Texas, USA
>
>_______________________________________________
>Rdo-list mailing list
>Rdo-list@redhat.com
>https://www.redhat.com/mailman/listinfo/rdo-list