[Rdo-list] RDO + floating IPs

Miguel Ángel Ajo majopela at redhat.com
Thu May 28 09:50:36 UTC 2015


Ok, I jumped onto Tomas' host to check what was going on.

ifcfg-br-ex and ifcfg-enxxxx were not properly configured, which explains part of it…
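For reference, this is roughly how those two files are usually set up on RDO when the NIC is enslaved to the OVS external bridge (just a sketch; device name, addresses and netmask below are placeholders, not the actual values from Tomas' host):

/etc/sysconfig/network-scripts/ifcfg-br-ex:
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
# example addressing only
IPADDR=10.40.128.44
NETMASK=255.255.240.0
GATEWAY=10.40.143.254
ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-enXXXX (the physical interface):
DEVICE=enXXXX
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes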


Then I stumbled into:

# neutron net-create ext_net --provider:network_type flat --provider:physical_network external1  --router:external
Invalid input for operation: network_type value 'flat' not supported.



That is the *correct* way to create an external network as flat; otherwise the default segmentation type (vxlan in this case) gets used, which happens to work here because we're forcing br-ex as our external bridge, but it wouldn't work at all if we were using several external networks.

It seems that packstack is setting:

[root at dell-t5810ws-rdo-02 neutron(keystone_admin)]# grep vxlan * -R
plugin.ini:# type_drivers = local,flat,vlan,gre,vxlan
[ml2]
plugin.ini:type_drivers = vxlan
plugin.ini:# Example: type_drivers = flat,vlan,gre,vxlan
plugin.ini:tenant_network_types = vxlan



while it should be something like:

[ml2]
type_drivers = vxlan,flat,vlan
tenant_network_types = vxlan




to allow the other network types as well, while still telling Neutron that tenant networks use vxlan segmentation by default.

We need to fix this in the packstack/quickstack/puppet modules.
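In the meantime it can be fixed by hand; a minimal sketch of that workaround (assuming the ML2 settings live in /etc/neutron/plugin.ini, which on RDO is a symlink to the ML2 config, and that crudini is installed):

# crudini --set /etc/neutron/plugin.ini ml2 type_drivers vxlan,flat,vlan
# crudini --set /etc/neutron/plugin.ini ml2 tenant_network_types vxlan
# systemctl restart neutron-server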

After setting that correctly I can do:

# source keystonerc_admin
# neutron net-create ext_net --provider:network_type flat --provider:physical_network external1 --router:external

# neutron subnet-create ext_net 10.40.128.44/20 --name extsubnet \
      --enable-dhcp=False --allocation_pool start=10.40.128.80,end=10.40.128.84 \
      --gateway 10.40.143.254
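Note also that for the 'external1' physical network label to actually map to br-ex, flat networks and the bridge mapping have to be allowed on the network node as well. A sketch of the relevant bits (the exact config file, and whether packstack already sets these, depends on the deployment, so treat them as assumptions), followed by a restart of neutron-openvswitch-agent:

[ml2_type_flat]
flat_networks = external1

[ovs]
bridge_mappings = external1:br-ex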


# source keystonerc_demo
# neutron subnet-create private --gateway 192.168.123.1 192.168.123.0/24 --name private_subnet
# neutron router-create router
# neutron router-gateway-set router ext_net
# neutron router-interface-add router private_subnet
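From there the usual floating IP flow applies; sketch only, the image/flavor names and the IDs are placeholders:

# nova boot --flavor m1.tiny --image cirros --nic net-id=<PRIVATE_NET_ID> test-vm
# neutron floatingip-create ext_net
# neutron floatingip-associate <FLOATINGIP_ID> <VM_PORT_ID>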


I believe we should fix the type_drivers setting, and then also fix packstack so that it deploys the demo with ext-net as “flat” rather than the default segmentation.
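For reference, a quick way to double-check which segmentation type a deployment ended up with (as admin; command only, output omitted):

# neutron net-show ext_net -F provider:network_type -F router:external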

Miguel Ángel Ajo


On Wednesday, 27 May 2015 at 12:42, Tomas Sedovic wrote:

> On 05/27/2015 12:28 PM, Rhys Oxenham wrote:
> >  
> > > On 27 May 2015, at 11:13, Tomas Sedovic <tsedovic at redhat.com> wrote:
> > >  
> > > On 05/26/2015 05:16 PM, Kashyap Chamarthy wrote:
> > > > On Tue, May 26, 2015 at 03:54:36PM +0200, Tomas Sedovic wrote:
> > > > > Hey everyone,
> > > > >  
> > > > > I tried to get RDO set up with floating IP addresses, but I'm running into
> > > > > problems I'm not sure how to debug (not that familiar with networking and
> > > > > Neutron).
> > > > >  
> > > > > I followed these guides on a clean Fedora 21 x86_64 server:
> > > > >  
> > > > > https://www.rdoproject.org/Quickstart
> > > > > https://www.rdoproject.org/Floating_IP_range
> > > > >  
> > > >  
> > > > [. . .]
> > > >  
> > > > > once all 20 requests failed, it got to a login screen, but I could not ping
> > > > > or SSH into it:
> > > > >  
> > > > > # ping 10.40.128.81
> > > > > PING 10.40.128.81 (10.40.128.81) 56(84) bytes of data.
> > > > > From 10.40.128.44 icmp_seq=1 Destination Host Unreachable
> > > > > From 10.40.128.44 icmp_seq=2 Destination Host Unreachable
> > > > > From 10.40.128.44 icmp_seq=3 Destination Host Unreachable
> > > > > From 10.40.128.44 icmp_seq=4 Destination Host Unreachable
> > > > >  
> > > > > # ssh cirros at 10.40.128.81
> > > > > ssh: connect to host 10.40.128.81 port 22: No route to host
> > > > >  
> > > >  
> > > >  
> > > > It could be any no. of reasons, as I don't know what's going on in your
> > > > network. But your steps sound reasonably correct. Just for comparison,
> > > > that's what I normally do:
> > > >  
> > > > # Create new private network:
> > > > $ neutron net-create $privnetname
> > > >  
> > > > # Create a subnet
> > > > neutron subnet-create $privnetname \
> > > > $subnetspace/24 \
> > > > --name $privsubnetname
> > > >  
> > > > # Create a router
> > > > neutron router-create $routername
> > > >  
> > > > # Associate the router to the external network by setting its gateway
> > > > # NOTE: This assumes the external network name is 'ext'
> > > >  
> > > > export EXT_NET_ID=$(neutron net-list | grep ext | awk '{print $2;}')
> > > > export PRIV_NET_ID=$(neutron subnet-list | grep $privsubnetname | awk '{print $2;}')
> > > > export ROUTER_ID=$(neutron router-list | grep $routername | awk '{print $2;}')
> > > >  
> > > > neutron router-gateway-set \
> > > > $ROUTER_ID $EXT_NET_ID
> > > >  
> > > > neutron router-interface-add \
> > > > $ROUTER_ID $PRIV_NET_ID
> > > >  
> > > >  
> > > > # Add Neutron security groups for this test tenant
> > > > neutron security-group-rule-create \
> > > > --protocol icmp \
> > > > --direction ingress \
> > > > --remote-ip-prefix 0.0.0.0/0 \
> > > > default
> > > >  
> > > > neutron security-group-rule-create \
> > > > --protocol tcp \
> > > > --port-range-min 22 \
> > > > --port-range-max 22 \
> > > > --direction ingress \
> > > > --remote-ip-prefix 0.0.0.0/0 \
> > > > default
> > > >  
> > > >  
> > > > On a related note, all the above, including creating the Keystone
> > > > tenant, user, etc. is put together in this trivial script[1], which
> > > > allows me to create tenant networks this way:
> > > >  
> > > > $ ./create-new-tenant-network.sh \
> > > > demoten1 tuser1 \
> > > > 14.0.0.0 trouter1 \
> > > > priv-net1 priv-subnet1
> > > >  
> > > > It assumes your external network is named "ext", but you can modify
> > > > the script trivially to change that.
> > > >  
> > > >  
> > > > [1] https://github.com/kashyapc/ostack-misc/blob/master/create-new-tenant-network.sh
> > >  
> > > Thanks Kashyap, much appreciated. I've tried all this out, but the result seems to be the same (timeouts in cloud-init, the VM is unreachable).
> >  
> > So it seems there are two problems here, correct me if I'm wrong:
> >  
> > 1) VMs getting access to the metadata service
> >  
> > 2) VMs being accessible via their floating IPs from the outside
> >  
> > I would say that (2) is the more important one we need to fix right now. If it's pingable from the namespace then your overlays (or inter-node communication) and DHCP are working OK. That means the problem is likely the link between the external network bridge and the outside. From the output below, it suggests that you've defined your external network as a non-provider network. Therefore, you have to tell the L3 agent the specific bridge it needs to use to route traffic.
>  
> Thanks. For the record, it's only pingable when I route my private  
> network to "public". When the router's gateway is set to "ext", the  
> floating IP isn't pingable at all (including from the namespace):
>  
> ip netns exec qrouter-f2bfd294-c90c-4c98-9b6d-b33e28b7c9ef ping 10.40.128.84
> connect: Network is unreachable
>  
> (f2bfd...9ef is the ID of the router between the private and external  
> network, 10.40.128.84 is the floating IP).
>  
>  
> >  
> > In your L3 agent configuration file you’ll have the ‘external_network_bridge’ option. This will need to be set to ‘br-ex’ (or the name of your external bridge) for the flows to be set up correctly. If it is blank, you’ll need to recreate your external network as a provider network, and ensure that you have the correct bridge mappings enabled on your Neutron network node.
> >  
> > So I guess my question is this… what is ‘external_network_bridge’ set to in /etc/neutron/l3_agent.ini?
> >  
> > [root at stack-node1 ~]# grep external_network_bridge /etc/neutron/l3_agent.ini
> > # When external_network_bridge is set, each L3 agent can be associated
> > # networks, both the external_network_bridge and gateway_external_network_id
> > # external_network_bridge = br-ex
> > external_network_bridge = br-ex
> >  
>  
>  
> It is indeed set to br-ex.
>  
> >  
> > Cheers
> > Rhys
> >  
> > >  
> > >  
> > > When I switched the router's gateway from "ext" to "public" (a network created by packstack) and booted the VM in my private network, it got to the login screen immediately and the floating IP was pingable through `ip netns exec`. Changing the gateway back to "ext", I got the timeouts again. That seems to indicate that the issue is related to "ext" rather than the way I set up a private network or boot the VM.
> > >  
> > > There doesn't seem to be a significant difference between "ext" and "public" networks and their subnets:
> > >  
> > > # neutron net-show public
> > > +---------------------------+--------------------------------------+
> > > | Field | Value |
> > > +---------------------------+--------------------------------------+
> > > | admin_state_up | True |
> > > | id | 5d2a0846-4244-4d3b-ad68-033a18224459 |
> > > | mtu | 0 |
> > > | name | public |
> > > | provider:network_type | vxlan |
> > > | provider:physical_network | |
> > > | provider:segmentation_id | 10 |
> > > | router:external | True |
> > > | shared | True |
> > > | status | ACTIVE |
> > > | subnets | 5285ff33-1bed-449b-b629-8ecc5ec0f642 |
> > > | tenant_id | 3c7799abd0af430696428247d377ceaf |
> > > +---------------------------+--------------------------------------+
> > > # neutron net-show ext
> > > +---------------------------+--------------------------------------+
> > > | Field | Value |
> > > +---------------------------+--------------------------------------+
> > > | admin_state_up | True |
> > > | id | 376e6c88-4752-476b-8feb-ae3346a98006 |
> > > | mtu | 0 |
> > > | name | ext |
> > > | provider:network_type | vxlan |
> > > | provider:physical_network | |
> > > | provider:segmentation_id | 12 |
> > > | router:external | True |
> > > | shared | False |
> > > | status | ACTIVE |
> > > | subnets | db336afd-8d41-4938-97ac-39ec912597df |
> > > | tenant_id | 3c7799abd0af430696428247d377ceaf |
> > > +---------------------------+--------------------------------------+
> > > # neutron subnet-show public_subnet
> > > +-------------------+--------------------------------------------------+
> > > | Field | Value |
> > > +-------------------+--------------------------------------------------+
> > > | allocation_pools | {"start": "172.24.4.226", "end": "172.24.4.238"} |
> > > | cidr | 172.24.4.224/28 |
> > > | dns_nameservers | |
> > > | enable_dhcp | False |
> > > | gateway_ip | 172.24.4.225 |
> > > | host_routes | |
> > > | id | 5285ff33-1bed-449b-b629-8ecc5ec0f642 |
> > > | ip_version | 4 |
> > > | ipv6_address_mode | |
> > > | ipv6_ra_mode | |
> > > | name | public_subnet |
> > > | network_id | 5d2a0846-4244-4d3b-ad68-033a18224459 |
> > > | subnetpool_id | |
> > > | tenant_id | 3c7799abd0af430696428247d377ceaf |
> > > +-------------------+--------------------------------------------------+
> > > # neutron subnet-show ext_subnet
> > > +-------------------+--------------------------------------------------+
> > > | Field | Value |
> > > +-------------------+--------------------------------------------------+
> > > | allocation_pools | {"start": "10.40.128.80", "end": "10.40.128.84"} |
> > > | cidr | 10.40.128.0/20 |
> > > | dns_nameservers | |
> > > | enable_dhcp | False |
> > > | gateway_ip | 10.40.143.254 |
> > > | host_routes | |
> > > | id | db336afd-8d41-4938-97ac-39ec912597df |
> > > | ip_version | 4 |
> > > | ipv6_address_mode | |
> > > | ipv6_ra_mode | |
> > > | name | ext_subnet |
> > > | network_id | 376e6c88-4752-476b-8feb-ae3346a98006 |
> > > | subnetpool_id | |
> > > | tenant_id | 3c7799abd0af430696428247d377ceaf |
> > > +-------------------+--------------------------------------------------+
> > >  
> > >  
> > > I've also seen this: https://www.rdoproject.org/Neutron_with_existing_external_network
> > >  
> > > Tried to follow it some time ago, but whenever I got to the `service network restart`, I got disconnected from my box and it was unreachable even after reboot.
> > >  
> > >  
> > > Is there anything else that jumps at you? Or do you have any ideas how to investigate this further?
> > >  
> > > I was also thinking I could change "public"'s subnet to the floating IP range I have available, but I worry that may screw everything up. Is it worth a try?
> > >  
> > > Thanks,
> > > Tomas
> > >  
> > >  
> > >  


