[Rdo-list] Trying out Neutron Quickstart running into issues with netns (l3 agent and dhcp agent)

Perry Myers pmyers at redhat.com
Tue Aug 6 05:57:58 UTC 2013


On 08/06/2013 12:50 AM, Perry Myers wrote:
> On 08/04/2013 09:57 AM, Perry Myers wrote:
>> Hi,
>>
>> I followed the instructions at:
>> http://openstack.redhat.com/Neutron-Quickstart
>> http://openstack.redhat.com/Running_an_instance_with_Neutron
>>
>> I ran this on a RHEL 6.4 VM with latest updates from 6.4.z.  I made sure
>> to install the netns enabled kernel from RDO repos and reboot with that
>> kernel before running packstack so that I didn't need to reboot the VM
>> after the packstack install (and have br-ex disappear)
>>
>> The packstack install went without incident.  And I was able to follow
>> the launch an instance instructions.
> 
> Ok, retried this but took advice from folks on this thread.
> 
> Since the l3 agent and dhcp agent configs in RDO are not right (they
> comment out ovs_use_veth=True, and veths are required for the netns
> support in RHEL kernels), I enabled that setting by hand.
> 
> marun summarized this nicely:
> 
> "if ovs_use_veth is set to false, a regular interface and an internal
> ovs port will be used, and the regular interface will be moved to a
> namespace during setup. if ovs_use_veth is set to true, a veth pair will
> be used with one endpoint created in the namespace. it is a limitation
> of rhel's netns implementation that requires the second approach, as
> virtual interfaces can only be created in namespaces, not moved
> post-creation."
> 
> After manually enabling ovs_use_veth=True for the l3 and dhcp agents, I
> was able to get the cirros VM to pick up an IP address on launch.
> 
> What doesn't work now is pinging/sshing to the floating ip address from
> the host (which is itself a VM)
> 
> Yes, I did open those ports in the default security group, and I also
> made sure the instance was launched with the default security group.
> 
> But that being said, I wanted to check the logs to see if some of the
> previous errors went away.  dhcp-agent and l3 agent logs look clean now
> (aside from the amqp initial connection errors)
> 
> My next test will be to run this exact same scenario but with
> NetworkManager disabled.

Ok, ran the exact steps above, but this time I started with a guest where
NetworkManager was completely removed via:

yum remove *NetworkManager*
editing /etc/sysconfig/network-scripts/ifcfg-eth0 to set NM_CONTROLLED=no
rebooting

I got the exact same results with and without NM on the system, mainly:
the cirros VM could get a private IP from the dhcp agent (10.0.0.3), but
I can't access the VM via the floating IP.
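One way to narrow down where the floating-IP traffic dies is to test from
inside the l3 agent's router namespace. A sketch (the router UUID below is
a placeholder; get the real one from `quantum router-list`):

```shell
# List the namespaces the agents created (qrouter-* for l3, qdhcp-* for dhcp)
sudo ip netns list

# From inside the router namespace, check that the VM's private IP answers
sudo ip netns exec qrouter-<router-uuid> ping -c 3 10.0.0.3

# Check that the l3 agent installed DNAT/SNAT rules for the floating IP
sudo ip netns exec qrouter-<router-uuid> iptables -t nat -L -n
```

If the ping from the namespace works but the floating IP still doesn't,
the problem is likely in the NAT rules or on the br-ex side rather than in
the tenant network.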

Someone double check me, but here is what my default secgroup looks like:

> [admin at rdo-mgmt ~(keystone_demo)]$ nova secgroup-list-rules default
> +-------------+-----------+---------+-----------+--------------+
> | IP Protocol | From Port | To Port | IP Range  | Source Group |
> +-------------+-----------+---------+-----------+--------------+
> |             |           |         |           | default      |
> |             |           |         |           | default      |
> | icmp        | -1        | -1      | 0.0.0.0/0 |              |
> | tcp         | 22        | 22      | 0.0.0.0/0 |              |
> +-------------+-----------+---------+-----------+--------------+

And as you can see above, I'm using the demo tenant and the networks
created for that tenant.
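For anyone reproducing this, the two non-default rules in that secgroup
listing can be added with the standard nova commands (run with the demo
tenant's credentials sourced):

```shell
# Allow all ICMP (ping) and inbound SSH from anywhere
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
```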

Also, I noticed with NM enabled or disabled, I cannot access the
external network from the cirros VM.

Ok, so in case this info is useful:

> [admin at rdo-mgmt ~(keystone_demo)]$ sudo ovs-vsctl show
> 25588688-af82-4bc9-b053-8009d2718738
>     Bridge br-int
>         Port "tap1582253a-01"
>             Interface "tap1582253a-01"
>         Port "qr-1582253a-01"
>             tag: 1
>             Interface "qr-1582253a-01"
>                 type: internal
>         Port "tapdce36595-4c"
>             tag: 1
>             Interface "tapdce36595-4c"
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "qvocd93d0e2-69"
>             tag: 1
>             Interface "qvocd93d0e2-69"
>         Port "tap99bc4804-f3"
>             tag: 2
>             Interface "tap99bc4804-f3"
>     Bridge br-ex
>         Port br-ex
>             Interface br-ex
>                 type: internal
>         Port "qg-1448e7df-47"
>             Interface "qg-1448e7df-47"
>                 type: internal
>         Port "tap1448e7df-47"
>             Interface "tap1448e7df-47"
>     ovs_version: "1.10.0"

quantum logs look relatively benign, except I see this warning in server.log:

> 2013-08-06 01:39:32  WARNING [quantum.db.agentschedulers_db] Fail scheduling network {'status': u'ACTIVE', 'subnets': [u'585ec59b-005d-4460-a094-5394be2bb3a1'], 'name': u'private', 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id': u'd297494482aa44ebb30243f624f9d5fc', 'provider:network_type': u'local', 'router:external': False, 'shared': False, 'id': u'7878056e-b4eb-4d26-a711-95ced35e7f98', 'provider:segmentation_id': None}
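A "Fail scheduling network" warning generally means the scheduler couldn't
find an eligible dhcp agent for the network. One thing worth checking (a
sketch; needs admin credentials) is whether the dhcp agent is registered
and reporting alive:

```shell
# The "alive" column shows :-) for live agents, xxx for dead ones
quantum agent-list
```

Also note the network shows provider:network_type = 'local' above, which
may be worth a second look if you expected a different tenant network type.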

Appreciate any pointers on what to check/look for next... :)

Perry



