[Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md

Alessandro Vozza alessandro at namecheap.com
Sat Nov 14 09:26:06 UTC 2015


I happened to have deployed a cloud on bare metal following the exact same guide: 3 controllers, 2 haproxy nodes and N computes, with external provider networks. What I did was:

-) (at https://github.com/beekhof/osp-ha-deploy/blob/master/keepalived/neutron-config.md):
On each controller and compute node, define the external VLAN interface and its bridge:

cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-bond0.102
DEVICE=bond0.102
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-bond0.102
BOOTPROTO=none
VLAN=yes
MTU="1500"
NM_CONTROLLED=no
EOF

cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-br-bond0.102
DEVICE=br-bond0.102
DEVICETYPE=ovs
OVSBOOTPROTO=none
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
MTU="1500"
NM_CONTROLLED=no
EOF
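These two files follow a fixed pattern, with the VLAN ID as the only moving part. A parameterized sketch of the same layout (assumption: OUTDIR defaults to the current directory so it can be tried safely; point it at /etc/sysconfig/network-scripts on a real node):

```shell
#!/bin/sh
# Sketch: generate the OVS port/bridge ifcfg pair for a given VLAN.
# VLAN_ID and OUTDIR are parameters; OUTDIR defaults to "." for safe
# testing -- use /etc/sysconfig/network-scripts on a real node.
VLAN_ID="${VLAN_ID:-102}"
OUTDIR="${OUTDIR:-.}"

# VLAN interface, enslaved as an OVS port of its bridge
cat > "$OUTDIR/ifcfg-bond0.$VLAN_ID" <<EOF
DEVICE=bond0.$VLAN_ID
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-bond0.$VLAN_ID
BOOTPROTO=none
VLAN=yes
MTU="1500"
NM_CONTROLLED=no
EOF

# The OVS bridge itself (no host IP; Neutron handles addressing)
cat > "$OUTDIR/ifcfg-br-bond0.$VLAN_ID" <<EOF
DEVICE=br-bond0.$VLAN_ID
DEVICETYPE=ovs
OVSBOOTPROTO=none
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
MTU="1500"
NM_CONTROLLED=no
EOF
```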

then, make sure that /etc/neutron/plugins/ml2/openvswitch_agent.ini exists everywhere with the following content:

[ovs]
enable_tunneling = True
tunnel_id_ranges = 1:1000
tenant_network_type = vxlan
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = <IP on bond0>
bridge_mappings = physnet1:br-bond0.102
network_vlan_ranges = physnet1
[agent]
tunnel_types = vxlan
vxlan_udp_port = 4789
l2_population = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
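The only per-host value in the file above is local_ip. A small sketch for templating it on each node (the bond0 address lookup is an assumption shown as a comment; 192.0.2.10 and the local output path are placeholders so the sketch can be tried off-node):

```shell
#!/bin/sh
# Sketch: render openvswitch_agent.ini with this host's tunnel IP.
# On a real node set OUTFILE=/etc/neutron/plugins/ml2/openvswitch_agent.ini
# and derive LOCAL_IP from bond0, e.g.:
#   LOCAL_IP=$(ip -4 addr show bond0 | awk '/inet /{sub(/\/.*/,"",$2); print $2; exit}')
OUTFILE="${OUTFILE:-./openvswitch_agent.ini}"
LOCAL_IP="${LOCAL_IP:-192.0.2.10}"   # placeholder address

cat > "$OUTFILE" <<EOF
[ovs]
enable_tunneling = True
tunnel_id_ranges = 1:1000
tenant_network_type = vxlan
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = $LOCAL_IP
bridge_mappings = physnet1:br-bond0.102
network_vlan_ranges = physnet1
[agent]
tunnel_types = vxlan
vxlan_udp_port = 4789
l2_population = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
EOF
```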

Finally, restart neutron-openvswitch-agent everywhere to apply the changes (e.g. `systemctl restart neutron-openvswitch-agent`).


Hope it helps
Alessandro



> On 14 Nov 2015, at 09:35, Boris Derzhavets <bderzhavets at hotmail.com> wrote:
> 
> 
> 
> ________________________________________
> From: Dan Sneddon <dsneddon at redhat.com>
> Sent: Friday, November 13, 2015 4:10 PM
> To: Boris Derzhavets; rdo-list at redhat.com
> Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md
> 
> On 11/13/2015 12:56 PM, Dan Sneddon wrote:
>> Hi Boris,
>> 
>> Let's keep this on-list, there may be others who are having similar
>> issues who could find this discussion useful.
>> 
>> Answers inline...
>> 
>> On 11/13/2015 12:17 PM, Boris Derzhavets wrote:
>>> 
>>> 
>>> ________________________________________
>>> From: Dan Sneddon <dsneddon at redhat.com>
>>> Sent: Friday, November 13, 2015 2:46 PM
>>> To: Boris Derzhavets; Javier Pena
>>> Cc: rdo-list at redhat.com
>>> Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md
>>> 
>>> On 11/13/2015 11:38 AM, Boris Derzhavets wrote:
>>>> I understand that in the usual situation, after creating ifcfg-br-ex and ifcfg-eth2 (as OVS bridge and OVS port),
>>>> `service network restart` should be run to make eth2 (with no IP) an OVS port of br-ex (which takes any available IP belonging to the ext net).
>>>> What harm does NetworkManager do when an external network provider is used?
>>>> Disabling it, I break routing via the eth0 interfaces of the cluster nodes to 10.10.10.0/24 (ext net),
>>>> so nothing is supposed to work :-
>>>>   http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/
>>>>   http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html
>>>> Or I am missing something here.
>>>> ________________________________________
>>>> From: rdo-list-bounces at redhat.com <rdo-list-bounces at redhat.com> on behalf of Boris Derzhavets <bderzhavets at hotmail.com>
>>>> Sent: Friday, November 13, 2015 1:09 PM
>>>> To: Javier Pena
>>>> Cc: rdo-list at redhat.com
>>>> Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md
>>>> 
>>>> Working on this task I was able to build a 3-node HAProxy/Keepalived controller cluster, create a compute node, and launch a CirrOS VM.
>>>> However, I cannot ping the floating IP of the VM running on the compute node (4 CentOS 7.1 VMs in total, nested KVM enabled).
>>>> It looks like provider external networks don't work for me.
>>>> 
>>>> But, to have eth0 without an IP (due to `ovs-vsctl add-port br-eth0 eth0`) while still being able to ping 10.10.10.1,
>>>> I need NetworkManager active, rather than network.service.
>>>> 
>>>> [root at hacontroller1 network-scripts]# systemctl status NetworkManager
>>>> NetworkManager.service - Network Manager
>>>>   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled)
>>>>   Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago
>>>> Main PID: 808 (NetworkManager)
>>>>   CGroup: /system.slice/NetworkManager.service
>>>>           ├─ 808 /usr/sbin/NetworkManager --no-daemon
>>>>           └─2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0...
>>>> 
>>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: <info>  NetworkManager state is n...L
>>>> Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s.
>>>> Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: <info>  (eth0): Activation: succe....
>>>> Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: <info>  startup complete
>>>> 
>>>> [root at hacontroller1 network-scripts]# systemctl status network.service
>>>> network.service - LSB: Bring up/down networking
>>>>   Loaded: loaded (/etc/rc.d/init.d/network)
>>>>   Active: inactive (dead)
>>>> 
>>>> [root at hacontroller1 network-scripts]# cat ifcfg-eth0
>>>> TYPE="Ethernet"
>>>> BOOTPROTO="static"
>>>> NAME="eth0"
>>>> DEVICE=eth0
>>>> ONBOOT="yes"
>>>> 
>>>> [root at hacontroller1 network-scripts]# ping -c 3 10.10.10.1
>>>> PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
>>>> 64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms
>>>> 64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms
>>>> 64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms
>>>> 
>>>> --- 10.10.10.1 ping statistics ---
>>>> 3 packets transmitted, 3 received, 0% packet loss, time 1999ms
>>>> rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms
>>>> 
>>>> If I disable NetworkManager and enable network.service, this ability is lost: eth0 would need a static IP or a DHCP lease
>>>> to provide a route to 10.10.10.0/24.
>>>> 
>>>> Thank you.
>>>> Boris.
>>>> 
>>>> _______________________________________________
>>>> Rdo-list mailing list
>>>> Rdo-list at redhat.com
>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>> 
>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>> 
>>> 
>>> OK, a few things here. First of all, you don't actually need to have an
>>> IP address on the host system to use a VLAN or interface as an external
>>> provider network. The Neutron router will have an IP on the right
>>> network, and within its namespace will be able to reach the 10.10.10.x
>>> network.
>>> 
>>>> It looks to me like NetworkManager is running dhclient for eth0, even
>>>> though you have BOOTPROTO="static". This is causing an IP address to be
>>>> added to eth0, so you are able to ping 10.10.10.x from the host. When
>>>> you turn off NetworkManager, this unexpected behavior goes away, *but
>>>> you should still be able to use provider networks*.
>>> 
>>>     Here I am quoting Lars Kellogg Stedman
>>>           http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/
>>>     The bottom statement in the blog post above states:
>>>     "This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address."
>> 
>> Right, what Lars means is that eth1 is physically connected to a
>> network with the 10.1.0.0/24 subnet, and eth2 is physically connected
>> to a network with the 10.2.0.0/24 subnet.
>> 
>> You might notice that in Lars's instructions, he never puts a host IP
>> on either interface.
>> 
>>>> Try creating a Neutron router with an IP on 10.10.10.x, and then you
>>>> should be able to ping that network from the router namespace.
>>> 
>>>   "When I issue `neutron router-create --ha True --tenant-id xxxxxx RouterHA`, I cannot specify the router's
>>>    IP"
>> 
>> Let me refer you to this page, which explains the basics of creating
>> and managing Neutron networks:
>> 
>> http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html
>> 
>> You will have to create an external network, which you will associate
>> with a physical network via a bridge mapping. The default bridge
>> mapping for br-ex is datacentre:br-ex.
>> 
>> Using the name of the physical network "datacentre", we can create an
> 
> 1.  Javier is using an external network provider (and so did I, following him).
> 
> # .  /root/keystonerc_admin
> # neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external
> # neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150  --disable-dhcp --name public_subnet public 10.10.10.0/24
> 
> The HA Neutron router and the tenant's subnet have been created.
> Then the interface to the tenant's network was activated, as well as the gateway to public.
> Security rules were implemented as usual.
> The cloud VM was launched; it obtained a private IP and completed cloud-init OK.
> Then I assigned a FIP from public to the cloud VM; it should be pingable from the F23 Virtualization
> Host.
> 
> 2.  All traffic to/from the external network flows through br-int when provider external networks are involved. No br-ex is needed.
> When Javier does `ovs-vsctl add-port br-eth0 eth0`, eth0 (which is inside the VM running the Controller node)
> should be on 10.10.10.X/24. That doesn't happen when network.service is active (and NM disabled).
> In this case eth0 doesn't have any IP assigned to provide a route to Libvirt's subnet 10.10.10.X/24 (pre-created by myself).
> 
> In the meantime I am under the impression that the OVS bridge br-eth0 and OVS port eth0
> would work when the IP is assigned to the port eth0, not to the bridge. OVS release >= 2.3.1 seems to allow that.
> Tested here (VM's case): http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html
> If neither br-eth0 nor eth0 has an IP, packets won't be forwarded to the external net.
> 
> 
>> external network:
>> 
>> [If the external network is on VLAN 104]
>> neutron net-create ext-net --router:external \
>> --provider:physical_network datacentre \
>> --provider:network_type vlan \
>> --provider:segmentation_id 104
>> 
>> [If the external net is on the native VLAN (flat)]
>> neutron net-create ext-net --router:external \
>> --provider:physical_network datacentre \
>> --provider:network_type flat
>> 
>> Next, you must create a subnet for the network, including the range of
>> floating IPs (allocation pool):
>> 
>> neutron subnet-create --name ext-subnet \
>> --enable_dhcp=False \
>> --allocation-pool start=10.10.10.50,end=10.10.10.100 \
>> --gateway 10.10.10.1 \
>> ext-net 10.10.10.0/24
>> 
>> Next, you have to create a router:
>> 
>> neutron router-create ext-router
>> 
>> You then add an interface to the router. Since Neutron will assign the
>> first address in the subnet to the router by default (10.10.10.1), you
>> will want to first create a port with a specific IP, then assign that
>> port to the router.
>> 
>> neutron port-create ext-net --fixed-ip ip_address=10.10.10.254
>> 
>> You will need to note the UUID of the newly created port. You can also
>> see this with "neutron port-list". Now, create the router interface
>> with the port you just created:
>> 
>> neutron router-interface-add ext-router port=<UUID>
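Grabbing that UUID can be scripted by parsing the client's tabular output. A sketch (the sample table below is illustrative, not captured output, and the UUID in it is made up):

```shell
#!/bin/sh
# Sketch: pull the 'id' field out of neutron's tabular output.
# In real use, pipe `neutron port-create ext-net --fixed-ip ip_address=10.10.10.254`
# into the awk instead of this illustrative sample.
sample_output='+-----------+--------------------------------------+
| Field     | Value                                |
+-----------+--------------------------------------+
| id        | 2b7e1b5a-0c0d-4f9e-9a1a-123456789abc |
+-----------+--------------------------------------+'

PORT_ID=$(printf '%s\n' "$sample_output" |
    awk -F'|' '$2 ~ /^ *id *$/ {gsub(/ /, "", $3); print $3}')
echo "$PORT_ID"
# then: neutron router-interface-add ext-router port=$PORT_ID
```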
>> 
>>>> If you want to be able to ping 10.10.10.x from the host, then you
>>>> should put either a static IP or DHCP on the bridge, not on eth0. This
>>>> should work whether you are running NetworkManager or network.service.
>>> 
>>>   "I can indeed ping 10.0.0.x from the F23 KVM Server (running the cluster's VMs as Controller nodes);
>>>     it's just a usual non-default libvirt subnet, matching exactly the external network created in Javier's "Howto".
>>>     It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belonging to the
>>>     cloud VM on this subnet."
>> 
>> I think you will have better luck once you create the external network
>> and router. You can then use namespaces to ping the network from the
>> router:
>> 
>> First, obtain the qrouter-<UUID> from the list of namespaces:
>> 
>> sudo ip netns list
>> 
>> Then, find the qrouter-<UUID> and ping from there:
>> 
>> ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1
>> 
> 
> One more quick thing to note:
> 
> In order to use floating IPs, you will also have to attach the external
> router to the tenant networks where floating IPs will be used.
> 
> When you go through the steps to create a tenant network, also attach
> it to the router:
> 
> 1) Create the network:
> 
> neutron net-create tenant-net-1
> 
> 2) Create the subnet:
> 
> neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22
> 
> 3) Attach the external router to the network:
> 
> neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1
> 
> (since no specific port was given in the router-interface-add command,
> Neutron will automatically choose the first address in the given
> subnet, so 172.21.0.1 in this example)
> 
> --
> Dan Sneddon         |  Principal OpenStack Engineer
> dsneddon at redhat.com |  redhat.com/openstack
> 650.254.4025        |  dsneddon:irc   @dxs:twitter
> 


