[Rdo-list] Network Isolation setup check

Marius Cornea marius at remote-lab.net
Sat Oct 31 13:26:55 UTC 2015


Hi Raoul,

A couple of notes on the controller.yaml template. You're adding both
interfaces to the br-ex bridge and assigning the IP addresses on top
of the nics themselves. While this might work in terms of
connectivity, when adding physical nics to an OVS bridge you should
assign the IP addresses to the OVS internal ports instead. Also be
careful when bridging 2 unbonded physical nics, as you might create
loops in the network.

Here's my approach: create 2 bridges. br-ex contains nic1, with the
external network IP set on the untagged port; br-ctlplane contains
nic2, with the ctlplane network IP set on the untagged port and the
other vlans on tagged ports:
http://paste.openstack.org/show/477729/
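
For reference, a minimal sketch of that layout for the os-net-config
section of controller.yaml could look like the following. This is my own
illustration of the idea, not the contents of the paste; the parameter
names are the standard tripleo-heat-templates ones already used in your
template, so adjust as needed for your environment:

network_config:
  -
    type: ovs_bridge
    name: br-ex
    use_dhcp: false
    dns_servers: {get_param: DnsServers}
    addresses:
      -
        # external IP lives on the bridge itself (an OVS internal
        # port), not on the physical nic
        ip_netmask: {get_param: ExternalIpSubnet}
    routes:
      -
        ip_netmask: 0.0.0.0/0
        next_hop: {get_param: ExternalInterfaceDefaultRoute}
    members:
      -
        type: interface
        name: nic1
        # force the MAC address of the bridge to this interface
        primary: true
  -
    type: ovs_bridge
    name: br-ctlplane
    use_dhcp: false
    addresses:
      -
        # ctlplane IP on the untagged port of the second bridge
        ip_netmask:
          list_join:
            - '/'
            - - {get_param: ControlPlaneIp}
              - {get_param: ControlPlaneSubnetCidr}
    routes:
      -
        ip_netmask: 169.254.169.254/32
        next_hop: {get_param: EC2MetadataIp}
    members:
      -
        type: interface
        name: nic2
        primary: true
      -
        type: vlan
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          -
            ip_netmask: {get_param: InternalApiIpSubnet}
      # ...same pattern for the Storage, StorageMgmt and Tenant vlans

This way each physical nic sits in its own bridge (no loop risk) and
every IP address lands on an OVS internal port rather than on the nic.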

On Fri, Oct 30, 2015 at 10:44 AM, Raoul Scarazzini <rasca at redhat.com> wrote:
> Hi everybody,
> I'm trying to deploy a tripleo environment with network isolation using
> a pool of 8 machines: 3 controller, 2 compute and 3 storage.
> Each of those machines has 2 network interfaces: the first one (em1)
> connected to the LAN, the second one (em2) used for the undercloud
> provisioning.
>
> The ultimate goal of the setup is to have the ExternalNet on em1 (so
> as to be able to put instances with floating IPs on the LAN) and all
> the other networks (InternalApi, Storage and StorageMgmt) on em2.
>
> To produce what is described above, I created this
> network-environment.yaml configuration:
>
> resource_registry:
>   OS::TripleO::BlockStorage::Net::SoftwareConfig:
> /home/stack/nic-configs/cinder-storage.yaml
>   OS::TripleO::Compute::Net::SoftwareConfig:
> /home/stack/nic-configs/compute.yaml
>   OS::TripleO::Controller::Net::SoftwareConfig:
> /home/stack/nic-configs/controller.yaml
>   OS::TripleO::ObjectStorage::Net::SoftwareConfig:
> /home/stack/nic-configs/swift-storage.yaml
>   OS::TripleO::CephStorage::Net::SoftwareConfig:
> /home/stack/nic-configs/ceph-storage.yaml
>
> parameter_defaults:
>   # Customize the IP subnets to match the local environment
>   InternalApiNetCidr: 172.17.0.0/24
>   StorageNetCidr: 172.18.0.0/24
>   StorageMgmtNetCidr: 172.19.0.0/24
>   TenantNetCidr: 172.16.0.0/24
>   ExternalNetCidr: 10.1.240.0/24
>   ControlPlaneSubnetCidr: '24'
>   InternalApiAllocationPools: [{'start': '172.17.0.10', 'end':
> '172.17.0.200'}]
>   StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
>   StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end':
> '172.19.0.200'}]
>   TenantAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
>   ExternalAllocationPools: [{'start': '10.1.240.10', 'end': '10.1.240.200'}]
>   # Specify the gateway on the external network.
>   ExternalInterfaceDefaultRoute: 10.1.240.254
>   # Gateway router for the provisioning network (or Undercloud IP)
>   ControlPlaneDefaultRoute: 192.0.2.1
>   # Generally the IP of the Undercloud
>   EC2MetadataIp: 192.0.2.1
>   DnsServers: ["10.1.241.2"]
>   InternalApiNetworkVlanID: 2201
>   StorageNetworkVlanID: 2203
>   StorageMgmtNetworkVlanID: 2204
>   TenantNetworkVlanID: 2202
>   # This won't actually be used since external is on native VLAN, just
> here for reference
>   #ExternalNetworkVlanID: 38
>   # Floating IP networks do not have to use br-ex, they can use any
> bridge as long as the NeutronExternalNetworkBridge is set to "''".
>   NeutronExternalNetworkBridge: "''"
>
> And modified the controller.yaml file in this way (default parts are
> omitted, nic1 == em1 and nic2 == em2):
>
> ...
> ...
> resources:
>   OsNetConfigImpl:
>     type: OS::Heat::StructuredConfig
>     properties:
>       group: os-apply-config
>       config:
>         os_net_config:
>           network_config:
>             -
>               type: ovs_bridge
>               name: {get_input: bridge_name}
>               use_dhcp: false
>               dns_servers: {get_param: DnsServers}
>               addresses:
>                 -
>                   ip_netmask:
>                     list_join:
>                       - '/'
>                       - - {get_param: ControlPlaneIp}
>                         - {get_param: ControlPlaneSubnetCidr}
>               routes:
>                 -
>                   ip_netmask: 169.254.169.254/32
>                   next_hop: {get_param: EC2MetadataIp}
>               members:
>                 -
>                   type: interface
>                   name: nic1
>                   addresses:
>                   -
>                     ip_netmask: {get_param: ExternalIpSubnet}
>                   routes:
>                     -
>                       ip_netmask: 0.0.0.0/0
>                       next_hop: {get_param: ExternalInterfaceDefaultRoute}
>                 -
>                   type: interface
>                   name: nic2
>                   # force the MAC address of the bridge to this interface
>                   primary: true
>                 -
>                   type: vlan
>                   vlan_id: {get_param: InternalApiNetworkVlanID}
>                   addresses:
>                   -
>                     ip_netmask: {get_param: InternalApiIpSubnet}
>                 -
>                   type: vlan
>                   vlan_id: {get_param: StorageNetworkVlanID}
>                   addresses:
>                   -
>                     ip_netmask: {get_param: StorageIpSubnet}
>                 -
>                   type: vlan
>                   vlan_id: {get_param: StorageMgmtNetworkVlanID}
>                   addresses:
>                   -
>                     ip_netmask: {get_param: StorageMgmtIpSubnet}
>                 -
>                   type: vlan
>                   vlan_id: {get_param: TenantNetworkVlanID}
>                   addresses:
>                   -
>                     ip_netmask: {get_param: TenantIpSubnet}
>
> outputs:
>   OS::stack_id:
>     description: The OsNetConfigImpl resource.
>     value: {get_resource: OsNetConfigImpl}
>
> The overcloud deployment was invoked with this command:
>
> openstack overcloud deploy --templates --libvirt-type=kvm --ntp-server
> 10.5.26.10 --control-scale 3 --compute-scale 2 --ceph-storage-scale 3
> --block-storage-scale 0 --swift-storage-scale 0 --control-flavor
> baremetal --compute-flavor baremetal --ceph-storage-flavor baremetal
> --block-storage-flavor baremetal --swift-storage-flavor baremetal
> -e
> /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml
> -e
> /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
> -e /home/stack/network-environment.yaml
>
> Now the point is that I need to know whether my configuration is
> formally correct, since I ran into some network problems once the
> post-deployment steps were done.
> I still don't know (we're investigating) whether those problems are
> related to the switch configuration (so, hardware side), but for some
> reason everything exploded.
> What I saw while the machines were still reachable was what I
> expected: the external address assigned to em1 and the vlans
> correctly assigned to em2, with all the external IP addresses
> pingable from one another. But I was not able to do further tests.
>
> From your point of view, am I missing something?
>
> Many thanks,
>
> --
> Raoul Scarazzini
> rasca at redhat.com
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com



