Hi
cutting some html, responses inline:
Actually, using bonding and an external network provider of VLAN type,
controller nodes may have 3 VLANs: bond0.100 (management network),
bond0.101 (tunnel network) and bond0.102 (external network).
Then tune HAProxy with Keepalived for a virtual IP set up on bond0.100.
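For the virtual IP on bond0.100, a minimal Keepalived sketch could look like the following (the VRID, password and VIP address are placeholders, not values from this thread; adjust to your setup):

```
vrrp_instance VI_MGMT {
    state MASTER            # BACKUP on the other haproxy nodes
    interface bond0.100
    virtual_router_id 51    # must match on all nodes, unique per LAN
    priority 101            # lower on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.0.2.10/24 dev bond0.100   # the HAProxy VIP (placeholder)
    }
}
```

HAProxy then binds its frontends to the VIP, and Keepalived moves the VIP to a surviving node on failure.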
In my case, I even collapse the tunnel network with the general/management/provisioning network (where my Foreman smart proxy lives and provisions bare metal); it’s a routed but secure network. My nodes thus have two interfaces:
- bond0 (untagged native VLAN for provisioning)
- bond0.112 (external traffic; no node has an IP here)
My HAProxy nodes have two interfaces:
- bond0
- bond0.111 (external API access, routed)
This way I isolate and/or allow traffic from instances (and the rest of the organisation) to the OpenStack APIs, secured by terminating SSL at the load balancers (see diagram).
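SSL termination at the load balancer, as described above, could be sketched like this in haproxy.cfg (the VIP, certificate path, service and backend addresses are illustrative placeholders, not taken from this thread):

```
frontend keystone_public
    bind 192.0.2.10:5000 ssl crt /etc/haproxy/certs/api.pem   # VIP on the external VLAN (placeholder)
    default_backend keystone_api

backend keystone_api
    balance roundrobin
    # plain HTTP towards the controllers on the internal network
    server ctrl1 10.0.0.11:5000 check
    server ctrl2 10.0.0.12:5000 check
    server ctrl3 10.0.0.13:5000 check
```

One such frontend/backend pair per API service gives external clients HTTPS while the controllers themselves speak plain HTTP internally.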
This is an excellent approach, but the question that concerns me
is a bit different.
Thanks
Boris
On controller and compute, define the external vlan interface and its bridge:
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-bond0.102
DEVICE=bond0.102
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-bond0.102
ONBOOT=yes
BOOTPROTO=none
VLAN=yes
MTU="1500"
NM_CONTROLLED=no
EOF
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-br-bond0.102
DEVICE=br-bond0.102
DEVICETYPE=ovs
OVSBOOTPROTO=none
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=none
MTU="1500"
NM_CONTROLLED=no
EOF
Then make sure that /etc/neutron/plugins/ml2/openvswitch_agent.ini exists everywhere, with:
[ovs]
enable_tunneling = True
tunnel_id_ranges = 1:1000
tenant_network_type = vxlan
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = <IP on bond0>
bridge_mappings = physnet1:br-bond0.102
network_vlan_ranges = physnet1
[agent]
tunnel_types = vxlan
vxlan_udp_port = 4789
l2_population = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Then restart neutron-openvswitch-agent everywhere to make it work:
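For example, on each node (a sketch; the agent and bridge names come from the config above):

```shell
systemctl restart neutron-openvswitch-agent
# verify that br-int, br-tun and br-bond0.102 are wired up:
ovs-vsctl show
# the Open vSwitch agents should report alive on the controller:
neutron agent-list
```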
<PastedGraphic-1.png>
Hope it helps
Alessandro
I would guess that the external net was created similar to this:
controller# neutron net-create --router:external=True \
--provider:network_type=vlan --provider:segmentation_id=102 ext-network
No, external provider network is a flat network (because it uses an already-tagged interface, bond0.112)
controller# neutron subnet-create --name ext-subnet --disable-dhcp \
--allocation-pool start=10.10.10.100,end=10.10.10.150 \
ext-network 10.10.10.0/24
I do use neutron DHCP: some instances have only one interface in the provider network, thus they won’t be accessible if I don’t provide them an IP+metadata
of VLAN type, not flat.
It would be a VLAN provider network if you added the bond0 interface to the bridge and let OVS tag the packets. In this case it’s “flat”, since OVS is not aware of any VLAN tagging, which happens downstream at the interface. But there are dozens of more skilled network dudes & gals on the list who may correct me :)
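To make the distinction concrete, the two variants would be created roughly like this (a sketch; the network name and physnet label are illustrative):

```shell
# flat: the interface (e.g. bond0.112) is already tagged,
# so OVS only ever sees untagged frames
neutron net-create ext-net --router:external \
  --provider:physical_network physnet1 \
  --provider:network_type flat

# vlan: bond0 itself is in the bridge, and OVS adds the 802.1Q tag
neutron net-create ext-net --router:external \
  --provider:physical_network physnet1 \
  --provider:network_type vlan \
  --provider:segmentation_id 112
```

In both cases `bridge_mappings` ties the physnet label to the right OVS bridge; the only difference is where the tag is applied.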
Could you be so kind as to share your ml2_conf.ini on the controllers in the cluster?
a very simple one:
[ml2]
type_drivers = flat,vxlan,vlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = *
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 10:10000
vxlan_group = 224.0.0.1
[ml2_type_geneve]
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
I also believe that your switch configuration contains something like:
switchport trunk allowed vlan 100,102,104
Indeed, switch ports are trunked.
Thank you
Boris.
On 14 Nov 2015, at 09:35, Boris Derzhavets <bderzhavets@hotmail.com> wrote:
________________________________________
From: Dan Sneddon <dsneddon@redhat.com>
Sent: Friday, November 13, 2015 4:10 PM
To: Boris Derzhavets; rdo-list@redhat.com
Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md
On 11/13/2015 12:56 PM, Dan Sneddon wrote:
Hi Boris,
Let's keep this on-list, there may be others who are having similar
issues who could find this discussion useful.
Answers inline...
On 11/13/2015 12:17 PM, Boris Derzhavets wrote:
________________________________________
From: Dan Sneddon <dsneddon@redhat.com>
Sent: Friday, November 13, 2015 2:46 PM
To: Boris Derzhavets; Javier Pena
Cc: rdo-list@redhat.com
Subject: Re: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md
On 11/13/2015 11:38 AM, Boris Derzhavets wrote:
I understand that in the usual situation, creating ifcfg-br-ex and ifcfg-eth2 (as OVS bridge and OVS port),
`service network restart` should be run to make eth2 (no IP) an OVS port of br-ex (with any IP which belongs to the ext net and is available).
What harm does NetworkManager do when an external network provider is used?
Disabling it, I break routing via the eth0 interfaces of the cluster nodes to 10.10.10.0/24 (ext net),
so nothing is supposed to work:
http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/
http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html
Or I am missing something here.
________________________________________
From: rdo-list-bounces@redhat.com <rdo-list-bounces@redhat.com> on behalf of Boris Derzhavets <bderzhavets@hotmail.com>
Sent: Friday, November 13, 2015 1:09 PM
To: Javier Pena
Cc: rdo-list@redhat.com
Subject: [Rdo-list] Attempt to reproduce https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md
Working on this task I was able to build a 3-node HAProxy/Keepalived controller cluster, create a compute node, and launch a CirrOS VM.
However, I cannot ping the floating IP of the VM running on compute (total 4 CentOS 7.1 VMs, nested KVM enabled).
Looks like provider external networks don't work for me.
But, to have eth0 without an IP (due to `ovs-vsctl add-port br-eth0 eth0`) while still being able to ping 10.10.10.1,
I need NetworkManager active, rather than network.service.
[root@hacontroller1 network-scripts]# systemctl status NetworkManager
NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled)
Active: active (running) since Fri 2015-11-13 20:39:21 MSK; 12min ago
Main PID: 808 (NetworkManager)
CGroup: /system.slice/NetworkManager.service
├─ 808 /usr/sbin/NetworkManager --no-daemon
└─2325 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0...
Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: <info> NetworkManager state is n...L
Nov 13 20:39:22 hacontroller1.example.com dhclient[2325]: bound to 10.10.10.216 -- renewal in 1...s.
Nov 13 20:39:22 hacontroller1.example.com NetworkManager[808]: <info> (eth0): Activation: succe....
Nov 13 20:39:25 hacontroller1.example.com NetworkManager[808]: <info> startup complete
[root@hacontroller1 network-scripts]# systemctl status network.service
network.service - LSB: Bring up/down networking
Loaded: loaded (/etc/rc.d/init.d/network)
Active: inactive (dead)
[root@hacontroller1 network-scripts]# cat ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="static"
NAME="eth0"
DEVICE=eth0
ONBOOT="yes"
[root@hacontroller1 network-scripts]# ping -c 3 10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.087 ms
64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.128 ms
64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.117 ms
--- 10.10.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.087/0.110/0.128/0.021 ms
If I disable NetworkManager and enable network.service, this feature will be lost. eth0 would have to have a static IP or DHCP lease
to provide a route to 10.10.10.0/24.
Thank you.
Boris.
_______________________________________________
Rdo-list mailing list
Rdo-list@redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list
To unsubscribe: rdo-list-unsubscribe@redhat.com
OK, a few things here. First of all, you don't actually need to have an
IP address on the host system to use a VLAN or interface as an external
provider network. The Neutron router will have an IP on the right
network, and within its namespace will be able to reach the 10.10.10.x
network.
It looks to me like NetworkManager is running dhclient for eth0, even
though you have BOOTPROTO="static". This is causing an IP address to be
added to eth0, so you are able to ping 10.10.10.x from the host. When
you turn off NetworkManager, this unexpected behavior goes away, *but
you should still be able to use provider networks*.
Here I am quoting Lars Kellogg-Stedman:
http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/
The bottom statement in the blog post above states:
"This assumes that eth1 is connected to a network using 10.1.0.0/24 and eth2 is connected to a network using 10.2.0.0/24, and that each network has a gateway sitting at the corresponding .1 address."
Right, what Lars means is that eth1 is physically connected to a
network with the 10.1.0.0/24 subnet, and eth2 is physically connected
to a network with the 10.2.0.0/24 subnet.
You might notice that in Lars's instructions, he never puts a host IP
on either interface.
Try creating a Neutron router with an IP on 10.10.10.x, and then you
should be able to ping that network from the router namespace.
" When I issue `neutron router-creater --ha True --tenant-id xxxxxx RouterHA` , i cannot specify router's
IP "
Let me refer you to this page, which explains the basics of creating
and managing Neutron networks:
http://docs.openstack.org/user-guide/cli_create_and_manage_networks.html
You will have to create an external network, which you will associate
with a physical network via a bridge mapping. The default bridge
mapping for br-ex is datacentre:br-ex.
Using the name of the physical network "datacentre", we can create an external network:
1. Javier is using an external network provider (and so did I, following him)
# . /root/keystonerc_admin
# neutron net-create public --provider:network_type flat --provider:physical_network physnet1 --router:external
# neutron subnet-create --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.150 --disable-dhcp --name public_subnet public 10.10.10.0/24
The HA Neutron router and the tenant's subnet have been created.
Then the interface to the tenant's network was activated, as well as the gateway to public.
Security rules were implemented as usual.
A cloud VM was launched; it obtained a private IP and committed cloud-init OK.
Then I assigned a FIP from public to the cloud VM; it should be pingable from the F23 Virtualization Host.
2. All traffic to/from the external network flows through br-int when a provider external network is involved. No br-ex is needed.
When Javier does `ovs-vsctl add-port br-eth0 eth0`, eth0 (which is inside the VM running the Controller node)
should be on 10.10.10.x/24. It doesn't happen when network.service is active (and NM disabled).
In this case eth0 doesn't have any kind of IP assigned to provide a route to Libvirt's subnet 10.10.10.x/24 (pre-created by myself).
In the meantime I am under the impression that the OVS bridge br-eth0 and the OVS port eth0
would work when the IP is assigned to the port eth0, not to the bridge. OVS release >= 2.3.1 seems to allow that.
Tested here (VM's case): http://dbaxps.blogspot.com/2015/10/multiple-external-networks-with-single.html
If neither one of br-eth0 and eth0 has an IP, then packets won't be forwarded to the external net.
Creating the external network:
[If the external network is on VLAN 104]
neutron net-create ext-net --router:external \
--provider:physical_network datacentre \
--provider:network_type vlan \
--provider:segmentation_id 104
[If the external net is on the native VLAN (flat)]
neutron net-create ext-net --router:external \
--provider:physical_network datacentre \
--provider:network_type flat
Next, you must create a subnet for the network, including the range of
floating IPs (allocation pool):
neutron subnet-create --name ext-subnet \
--enable_dhcp=False \
--allocation-pool start=10.10.10.50,end=10.10.10.100 \
--gateway 10.10.10.1 \
ext-net 10.10.10.0/24
Next, you have to create a router:
neutron router-create ext-router
You then add an interface to the router. Since Neutron will assign the
first address in the subnet to the router by default (10.10.10.1), you
will want to first create a port with a specific IP, then assign that
port to the router.
neutron port-create ext-net --fixed-ip ip_address=10.10.10.254
You will need to note the UUID of the newly created port. You can also
see this with "neutron port-list". Now, create the router interface
with the port you just created:
neutron router-interface-add ext-router port=<UUID>
If you want to be able to ping 10.10.10.x from the host, then you
should put either a static IP or DHCP on the bridge, not on eth0. This
should work whether you are running NetworkManager or network.service.
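In the document's own ifcfg style, putting the host IP on the bridge instead of eth0 might look like this (a sketch; the address and netmask are placeholders matching the 10.10.10.0/24 example network):

```shell
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-br-eth0
DEVICE=br-eth0
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
OVSBOOTPROTO=none
BOOTPROTO=static
IPADDR=10.10.10.216
NETMASK=255.255.255.0
NM_CONTROLLED=no
EOF
```

eth0 then stays an OVSPort of br-eth0 with no IP of its own.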
"I do can ping 10.0.0.x from F23 KVM Server (running cluster's VMs as Controller's nodes),
it's just usual non-default libvirt subnet,matching exactly external network creating in Javier's "Howto".
It was created via `virsh net-define openstackvms.xml`, but I cannot ping FIPs belong to
cloud VM on this subnet."
I think you will have better luck once you create the external network
and router. You can then use namespaces to ping the network from the
router:
First, obtain the qrouter-<UUID> from the list of namespaces:
sudo ip netns list
Then, find the qrouter-<UUID> and ping from there:
ip netns exec qrouter-XXXX-XXXX-XXX-XXX ping 10.10.10.1
One more quick thing to note:
In order to use floating IPs, you will also have to attach the external
router to the tenant networks where floating IPs will be used.
When you go through the steps to create a tenant network, also attach
it to the router:
1) Create the network:
neutron net-create tenant-net-1
2) Create the subnet:
neutron subnet-create --name tenant-subnet-1 tenant-net-1 172.21.0.0/22
3) Attach the external router to the network:
neutron router-interface-add tenant-router-1 subnet=tenant-subnet-1
(since no specific port was given in the router-interface-add command,
Neutron will automatically choose the first address in the given
subnet, so 172.21.0.1 in this example)
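To complete the floating IP path, the router also needs a gateway on the external network before FIPs can be associated. A sketch, continuing the names above (the instance name and floating IP are placeholders):

```shell
# attach the router's gateway to the external network
neutron router-gateway-set tenant-router-1 ext-net

# allocate a floating IP from the external network's pool
neutron floatingip-create ext-net

# then associate it with the instance (hypothetical VM name and IP):
nova floating-ip-associate my-vm 10.10.10.60
```

Only after the gateway is set will traffic to the floating IP be NATted through the router into the tenant network.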
--
Dan Sneddon | Principal OpenStack Engineer
dsneddon@redhat.com | redhat.com/openstack
650.254.4025 | dsneddon:irc @dxs:twitter