I'm still trying to debug this, but I'm having issues... :(
When I start an instance on a compute node, I see this in
/var/log/neutron/openvswitch-agent.log:
2014-04-27 12:03:02.009 1958 INFO neutron.agent.securitygroups_rpc [-] Preparing filters for devices set([u'61e2f303-89b2-4b52-bbc1-25d97bb29d76'])
2014-04-27 12:03:02.117 1958 ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Error while processing VIF ports
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1226, in rpc_loop
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     sync = self.process_network_ports(port_info)
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1069, in process_network_ports
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     port_info.get('updated', set()))
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py", line 247, in setup_port_filters
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self.prepare_devices_filter(new_devices)
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py", line 164, in prepare_devices_filter
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self.firewall.prepare_port_filter(device)
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self.gen.next()
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/firewall.py", line 108, in defer_apply
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self.filter_defer_apply_off()
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/linux/iptables_firewall.py", line 370, in filter_defer_apply_off
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self.iptables.defer_apply_off()
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/linux/iptables_manager.py", line 353, in defer_apply_off
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self._apply()
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/linux/iptables_manager.py", line 367, in _apply
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     with lockutils.lock(lock_name, utils.SYNCHRONIZED_PREFIX, True):
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib64/python2.6/contextlib.py", line 16, in __enter__
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     return self.gen.next()
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 183, in lock
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     raise cfg.RequiredOptError('lock_path')
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent RequiredOptError: value required for option: lock_path
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent
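If I'm reading that traceback right, the agent is dying because the oslo
lock_path option isn't set. I'm guessing it wants something like this in
/etc/neutron/neutron.conf (the directory is my assumption based on the
RDO default state_path; any directory the neutron user can write to
should work):

----------
[DEFAULT]
# where oslo lockutils keeps its lock files -- path is my guess
lock_path = /var/lib/neutron/lock
----------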
I can't ping other hosts on the VLAN the VM is supposed to be on (even
when I configure the VM with a static IP), and I don't see any traffic
on the OVS bridges at all. I'm running RDO Icehouse with ML2.
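For what it's worth, here's how I've been checking for traffic (tcpdump
straight on the bridge devices, plus the OVS wiring; br-eth1 is from my
bridge_mappings below):

# ovs-vsctl show
# tcpdump -e -n -i br-int
# tcpdump -e -n -i br-eth1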
These are the rpm versions I have installed:
rpm -qa | grep neutron
python-neutronclient-2.3.4-1.el6.noarch
openstack-neutron-2014.1-10.el6.noarch
openstack-neutron-ml2-2014.1-10.el6.noarch
python-neutron-2014.1-10.el6.noarch
openstack-neutron-openvswitch-2014.1-10.el6.noarch
Does this ring a bell for anyone? This worked when I was running RC1 a
while back, so I'm confused as to why it would change now...?
On 4/25/14, 12:11 PM, Erich Weiler wrote:
Actually, I appear to have:
openstack-neutron-openvswitch-2014.1-10.el6.noarch
but there appears to be a newer one out there:
openstack-neutron-openvswitch-2014.1-11.el6.noarch.rpm
Is there by chance a bug fix in that one? (assuming this is a bug...)
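In the meantime I'll try pulling the changelog out of the newer rpm to
see if anything looks related, e.g.:

rpm -qp --changelog openstack-neutron-openvswitch-2014.1-11.el6.noarch.rpm | head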
On 04/25/14 11:50, Erich Weiler wrote:
> Hi Y'all,
>
> I recently began rebuilding my OpenStack installation under the latest
> RDO icehouse release (as of two days ago at least), and everything is
> almost working, but I'm having issues with Open vSwitch, at least on the
> compute nodes.
>
> I'm using the ML2 plugin and VLAN tenant isolation. I have this in my
> compute node's /etc/neutron/plugin.ini file:
>
> ----------
> [ovs]
> bridge_mappings = physnet1:br-eth1
>
> [ml2]
> type_drivers = vlan
> tenant_network_types = vlan
> mechanism_drivers = openvswitch
>
> [ml2_type_flat]
>
> [ml2_type_vlan]
> network_vlan_ranges = physnet1:200:209
> ----------
>
> My switchports that the nodes connect to are configured as trunks,
> allowing VLANs 200-209 to flow over them.
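>
> (Cisco-style config, roughly -- the interface name here is made up,
> but this is the shape of each port's config:)
>
> interface GigabitEthernet0/1
>  switchport mode trunk
>  switchport trunk allowed vlan 200-209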
>
> My network that the VMs should be connecting to is:
>
> # neutron net-show cbse-net
> +---------------------------+--------------------------------------+
> | Field | Value |
> +---------------------------+--------------------------------------+
> | admin_state_up | True |
> | id | 23028b15-fb12-4a9f-9fba-02f165a52d44 |
> | name | cbse-net |
> | provider:network_type | vlan |
> | provider:physical_network | physnet1 |
> | provider:segmentation_id | 200 |
> | router:external | False |
> | shared | False |
> | status | ACTIVE |
> | subnets | dd25433a-b21d-475d-91e4-156b00f25047 |
> | tenant_id | 7c1980078e044cb08250f628cbe73d29 |
> +---------------------------+--------------------------------------+
>
> # neutron subnet-show dd25433a-b21d-475d-91e4-156b00f25047
> +------------------+--------------------------------------------------+
> | Field | Value |
> +------------------+--------------------------------------------------+
> | allocation_pools | {"start": "10.200.0.2", "end": "10.200.255.254"} |
> | cidr | 10.200.0.0/16 |
> | dns_nameservers | 121.43.52.1 |
> | enable_dhcp | True |
> | gateway_ip | 10.200.0.1 |
> | host_routes | |
> | id | dd25433a-b21d-475d-91e4-156b00f25047 |
> | ip_version | 4 |
> | name | |
> | network_id | 23028b15-fb12-4a9f-9fba-02f165a52d44 |
> | tenant_id | 7c1980078e044cb08250f628cbe73d29 |
> +------------------+--------------------------------------------------+
>
> So those VMs on that network should send packets that would be tagged
> with VLAN 200.
>
> I launch an instance, then look at the compute node hosting it. The
> instance doesn't get a DHCP address, so it can't be reaching the
> dnsmasq server on the neutron node. I configure the VM's interface
> with a static IP on VLAN 200: 10.200.0.30, netmask 255.255.0.0. I
> have another node set up on VLAN 200 on my switch to test with
> (10.200.0.50); it's a real bare-metal server.
>
> I can't ping my bare-metal server. I see the packets getting to eth1 on
> my compute node, but stopping there. Then I figure out that the packets
> are *not being tagged* for VLAN 200 as they leave the compute node!! So
> the switch is dropping them. As a test I configure the switchport
> with "native vlan 200", and voila, the ping works.
>
> So, Open vSwitch is not getting that it needs to tag the packets for
> VLAN 200. A little diagnostics on the compute node:
>
> # ovs-ofctl dump-flows br-int
> NXST_FLOW reply (xid=0x4):
> cookie=0x0, duration=966.803s, table=0, n_packets=0, n_bytes=0,
> idle_age=966, priority=0 actions=NORMAL
>
> Shouldn't that show some VLAN tagging?
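>
> For comparison, I'd expect the physical bridge to have VLAN-translation
> flows along these lines (port numbers and the local VLAN id here are
> from memory, definitely not exact):
>
> # ovs-ofctl dump-flows br-eth1
> cookie=0x0, ... priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:200,NORMAL
> cookie=0x0, ... priority=2,in_port=2 actions=drop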
>
> And here's a tcpdump on eth1 on the compute node:
>
> # tcpdump -e -n -vv -i eth1 | grep -i arp
> tcpdump: WARNING: eth1: no IPv4 address assigned
> tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size
> 65535 bytes
> 11:21:50.462447 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806),
> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50
> tell 10.200.0.30, length 28
> 11:21:51.462968 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806),
> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50
> tell 10.200.0.30, length 28
> 11:21:52.462330 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806),
> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50
> tell 10.200.0.30, length 28
> 11:21:53.462311 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806),
> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50
> tell 10.200.0.30, length 28
> 11:21:54.463169 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806),
> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50
> tell 10.200.0.30, length 28
>
> That tcpdump also confirms the ARP packets are not being tagged 200 as
> they leave the physical interface.
>
> This worked before when I was testing Icehouse RC1; I don't know what
> changed with Open vSwitch... Anyone have any ideas?
>
> Thanks as always for the help!! This list has been very helpful.
>
> cheers,
> erich