[rdo-users] [tripleo]network isolation

qinglong.dong at horebdata.cn
Wed Jan 3 08:02:11 UTC 2018


Thanks for the reply. I want to access the OpenStack dashboard from my PC via "192.168.1.0/24", not "192.168.24.0/24". So I think I should put the External network directly on the bridge and remove the corresponding VLAN interface. Maybe I have misunderstood the External network?
By the way, I use Linux bridge because it is easier for me to understand than OVS.
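
Once the deploy succeeds, I expect to reach Horizon from my PC at the
overcloud public VIP, which should come out of the ExternalAllocationPools
range below. The address here is only a guess; the real endpoint is
printed at the end of "openstack overcloud deploy":

    # From a PC on 192.168.1.0/24; 192.168.1.223 is a hypothetical VIP
    # taken from the start of ExternalAllocationPools.
    curl -v http://192.168.1.223/dashboard/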


qinglong.dong at horebdata.cn
 
From: Dan Sneddon
Date: 2018-01-03 05:24
To: qinglong.dong at horebdata.cn; users
Subject: Re: [rdo-users] [tripleo]network isolation
On 12/24/2017 10:55 PM, qinglong.dong at horebdata.cn wrote:
> Hi, all
>         I want to deploy a baremetal environment (Pike) with network
> isolation. I have three controller nodes and one compute node. Each
> node has 3 NICs. If I set the External network as a VLAN, I succeed.
> But if I set the External network on the bridge (using the native VLAN
> on the trunked interface), I fail. Can anyone help? Thanks!
>         Here is some of the config for the controller nodes. The
> compute node does not have the External and Storage Management
> networks.
> 
> 
>     *Controller NICs*
> 
>     Bonded Interface     Bond Slaves
>     bond1                eth1, eth2
> 
>     Network              NIC
>     Provisioning         eth0
>     External             bond1 / br-ex
>     Internal             bond1 / vlan201
>     Tenant               bond1 / vlan204
>     Storage              bond1 / vlan202
>     Storage Management   bond1 / vlan203
> 
> *network-environment.yaml*
> resource_registry:
>   OS::TripleO::Compute::Net::SoftwareConfig:
>     ../network/config/bond-with-vlans/compute.yaml
>   OS::TripleO::Controller::Net::SoftwareConfig:
>     ../network/config/bond-with-vlans/controller.yaml
> parameter_defaults:
>   ControlPlaneSubnetCidr: '24'
>   ControlPlaneDefaultRoute: 192.168.24.1
>   EC2MetadataIp: 192.168.24.1 
>   InternalApiNetCidr: 172.17.0.0/24
>   StorageNetCidr: 172.18.0.0/24
>   StorageMgmtNetCidr: 172.19.0.0/24
>   TenantNetCidr: 172.16.0.0/24
>   ExternalNetCidr: 192.168.1.0/24
>   InternalApiNetworkVlanID: 201
>   StorageNetworkVlanID: 202
>   StorageMgmtNetworkVlanID: 203
>   TenantNetworkVlanID: 204
>   InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
>   StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
>   StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
>   TenantAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
>   ExternalAllocationPools: [{'start': '192.168.1.223', 'end': '192.168.1.235'}]
>   ExternalInterfaceDefaultRoute: 192.168.1.1
>   DnsServers: ["192.168.1.1"]
>   NeutronNetworkType: 'vlan'
>   NeutronTunnelTypes: ''
>   NeutronNetworkVLANRanges: 'datacentre:1:1000'
>   BondInterfaceOvsOptions: "bond_mode=active-backup"
>   NeutronMechanismDrivers: linuxbridge
> 
> *controller.yaml*
> [...]
> resources:
>   OsNetConfigImpl:
>     type: OS::Heat::SoftwareConfig
>     properties:
>       group: script
>       config:
>         str_replace:
>           template:
>             get_file: ../../scripts/run-os-net-config.sh
>           params:
>             $network_config:
>               network_config:
>               - type: interface
>                 name: nic1
>                 use_dhcp: false
>                 addresses:
>                 - ip_netmask:
>                     list_join:
>                     - /
>                     - - get_param: ControlPlaneIp
>                       - get_param: ControlPlaneSubnetCidr
>                 routes:
>                 - ip_netmask: 169.254.169.254/32
>                   next_hop:
>                     get_param: EC2MetadataIp
>               - type: linux_bridge
>                 name: bridge_name
>                 dns_servers:
>                   get_param: DnsServers
>                 use_dhcp: false
>                 addresses:
>                 - ip_netmask:
>                     get_param: ExternalIpSubnet
>                 routes:
>                 - default: true
>                   next_hop:
>                     get_param: ExternalInterfaceDefaultRoute
>                 members:
>                 - type: linux_bond
>                   name: bond1
>                   bonding_options: mode=1
>                   members:
>                   - type: interface
>                     name: nic2
>                     primary: true
>                   - type: interface
>                     name: nic3
>                 - type: vlan
>                   device: bond1
>                   vlan_id:
>                     get_param: InternalApiNetworkVlanID
>                   addresses:
>                   - ip_netmask:
>                       get_param: InternalApiIpSubnet
>                 - type: vlan
>                   device: bond1
>                   vlan_id:
>                     get_param: StorageNetworkVlanID
>                   addresses:
>                   - ip_netmask:
>                       get_param: StorageIpSubnet
>                 - type: vlan
>                   device: bond1
>                   vlan_id:
>                     get_param: StorageMgmtNetworkVlanID
>                   addresses:
>                   - ip_netmask:
>                       get_param: StorageMgmtIpSubnet
>                 - type: vlan
>                   device: bond1
>                   vlan_id:
>                     get_param: TenantNetworkVlanID
>                   addresses:
>                   - ip_netmask:
>                       get_param: TenantIpSubnet
> outputs:
>   OS::stack_id:
>     description: The OsNetConfigImpl resource.
>     value:
>       get_resource: OsNetConfigImpl
> 
> 
 
The NIC config looks correct for putting the External network on the
native VLAN. If I had to guess what the problem is, I would start at the
switch. The switch configuration will be different when hosting the
External network as a native VLAN rather than a trunked (tagged) VLAN.
Are you certain that the External network was being delivered only as a
native VLAN, and that the switch wasn't adding VLAN tags for the
External network?
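
One quick way to verify is to watch what actually arrives on the bond
from the controller itself (a rough check, assuming tcpdump is available
on the overcloud node):

    # Print link-level headers for traffic on bond1; 802.1Q-tagged
    # frames show "vlan <ID>" in the header, native frames do not.
    tcpdump -e -nn -i bond1

    # Or capture only 802.1Q-tagged frames:
    tcpdump -e -nn -i bond1 vlan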
 
What is the reason you would prefer to have the External network on the
native VLAN? The External network hosts the public APIs, so it should
function the same on a tagged VLAN as on a native VLAN; either way
should work, provided the switch is set up correctly. You can always use
a different VLAN/subnet for Neutron external network(s) than you do for
the public APIs, if you have separate IP space. When you create the
Neutron external network, use type 'flat' for a native VLAN, or type
'vlan' with the VLAN ID specified as the 'segmentation_id' for a tagged
network.
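
For example, after sourcing overcloudrc (the network and subnet names
and the VLAN ID below are only placeholders):

    # Native VLAN: create the Neutron external network as a 'flat'
    # provider network on the 'datacentre' physical network.
    openstack network create --external \
        --provider-network-type flat \
        --provider-physical-network datacentre public

    # Tagged VLAN (instead of the above): pass the VLAN ID as the
    # segmentation ID; 205 is just an example.
    openstack network create --external \
        --provider-network-type vlan \
        --provider-physical-network datacentre \
        --provider-segment 205 public

    # Either way, add a subnet whose allocation pool does not overlap
    # the ExternalAllocationPools range used by the overcloud nodes.
    openstack subnet create --network public --no-dhcp \
        --subnet-range 192.168.1.0/24 \
        --gateway 192.168.1.1 \
        --allocation-pool start=192.168.1.240,end=192.168.1.250 \
        public-subnet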
 
I also wonder why you are using a Linux bridge. The OVS driver gets a
lot more testing, and the two should have roughly equivalent performance
these days. The Linux bridge worked fine with the External network on a
native VLAN back in the Icehouse/Juno timeframe, but I have personally
only tested OVS bridges in recent releases.
 
-- 
Dan Sneddon         |  Senior Principal Software Engineer
dsneddon at redhat.com |  redhat.com/openstack
dsneddon:irc        |  @dxs:twitter
 