The OpenStack dashboard will be available on the External interface if
(and only if) the routing works for remote access to that IP. That
typically means that you need to use the ExternalInterfaceDefaultRoute
on your Controllers, but *not* use ControlPlaneDefaultRoute. This
appears to be what you are doing on the Controllers. It won't matter if
the External network (and default route) are on a tagged VLAN or on the
native VLAN, as long as the default route pointing to the gateway on
that network is there.
You would connect to the dashboard on the PublicVirtualIP, whatever that
IP address is. You can see all the virtual IPs in
/etc/puppet/hieradata/vip_data.yaml on the controllers. If you can't
reach the PublicVirtualIP, then work backwards (Can you access the
controller IP remotely? Can you access the controller IP from the same
subnet? Can the controller reach the default gateway on the External
network?).
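As a rough sketch of that first step (the hiera key name and the addresses below are illustrative only; on a real controller you would read the actual /etc/puppet/hieradata/vip_data.yaml instead of the sample file created here):

```shell
# Illustrative sample of vip_data.yaml -- on a controller, skip this
# step and read /etc/puppet/hieradata/vip_data.yaml directly. Key name
# and addresses are assumptions for the sake of the example.
cat > /tmp/vip_data.yaml <<'EOF'
tripleo::haproxy::public_virtual_ip: 192.168.1.223
tripleo::haproxy::controller_virtual_ip: 192.168.24.10
EOF

# Pull out the public VIP; this is the address to point your browser at.
PUBLIC_VIP=$(awk -F': ' '/public_virtual_ip/ {print $2}' /tmp/vip_data.yaml)
echo "Dashboard should answer at http://$PUBLIC_VIP/"
```

From there, ping the VIP from your PC and, if that fails, repeat the checks from hosts progressively closer to the controller as described above.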
I don't think it should matter whether you are using Linux bridge or
OVS. For a while there were features supported in OVS that weren't
implemented in Linux bridge, but since there is now mostly feature
parity it's a matter of personal preference.
--
Dan Sneddon | Senior Principal Software Engineer
dsneddon@redhat.com |
dsneddon:irc | @dxs:twitter
On 01/03/2018 12:02 AM, qinglong.dong@horebdata.cn wrote:
Thanks for the reply. I want to access the OpenStack dashboard from my
PC via "192.168.1.0/24", not "192.168.24.0/24". So I think I should set
the External network on the bridge and remove the corresponding VLAN
interface. Maybe I misunderstand the external network?
By the way, I use Linux bridge because it is easier for me to understand
than OVS.
------------------------------------------------------------------------
qinglong.dong@horebdata.cn
*From:* Dan Sneddon <dsneddon@redhat.com>
*Date:* 2018-01-03 05:24
*To:* qinglong.dong@horebdata.cn; users <users@lists.rdoproject.org>
*Subject:* Re: [rdo-users] [tripleo]network isolation
On 12/24/2017 10:55 PM, qinglong.dong@horebdata.cn wrote:
> Hi, all
>     I want to deploy a baremetal environment (Pike) with network
> isolation. I have three controller nodes and one compute node. Each
> node has 3 NICs. If I set the External network as a VLAN I succeed,
> but if I set the External network on the bridge (using the native
> VLAN on the trunked interface) I fail. Can anyone help? Thanks!
>     Here is some config from the controller nodes. The compute node
> does not have the External network or the Storage Management network.
>
>
> *Controller NICs*
>
> *Bonded Interface*    *Bond Slaves*
> bond1                 eth1, eth2
>
> *Networks*            *NIC*
> Provisioning          eth0
> External              bond1 / br-ex
> Internal              bond1 / vlan201
> Tenant                bond1 / vlan204
> Storage               bond1 / vlan202
> Storage Management    bond1 / vlan203
>
> *network-environment.yaml*
> resource_registry:
>   OS::TripleO::Compute::Net::SoftwareConfig:
>     ../network/config/bond-with-vlans/compute.yaml
>   OS::TripleO::Controller::Net::SoftwareConfig:
>     ../network/config/bond-with-vlans/controller.yaml
> parameter_defaults:
>   ControlPlaneSubnetCidr: '24'
>   ControlPlaneDefaultRoute: 192.168.24.1
>   EC2MetadataIp: 192.168.24.1
>   InternalApiNetCidr: 172.17.0.0/24
>   StorageNetCidr: 172.18.0.0/24
>   StorageMgmtNetCidr: 172.19.0.0/24
>   TenantNetCidr: 172.16.0.0/24
>   ExternalNetCidr: 192.168.1.0/24
>   InternalApiNetworkVlanID: 201
>   StorageNetworkVlanID: 202
>   StorageMgmtNetworkVlanID: 203
>   TenantNetworkVlanID: 204
>   InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
>   StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
>   StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
>   TenantAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
>   ExternalAllocationPools: [{'start': '192.168.1.223', 'end': '192.168.1.235'}]
>   ExternalInterfaceDefaultRoute: 192.168.1.1
>   DnsServers: ["192.168.1.1"]
>   NeutronNetworkType: 'vlan'
>   NeutronTunnelTypes: ''
>   NeutronNetworkVLANRanges: 'datacentre:1:1000'
>   BondInterfaceOvsOptions: "bond_mode=active-backup"
>   NeutronMechanismDrivers: linuxbridge
>
> *controller.yaml*
> [...]
> resources:
>   OsNetConfigImpl:
>     type: OS::Heat::SoftwareConfig
>     properties:
>       group: script
>       config:
>         str_replace:
>           template:
>             get_file: ../../scripts/run-os-net-config.sh
>           params:
>             $network_config:
>               network_config:
>               - type: interface
>                 name: nic1
>                 use_dhcp: false
>                 addresses:
>                 - ip_netmask:
>                     list_join:
>                     - /
>                     - - get_param: ControlPlaneIp
>                       - get_param: ControlPlaneSubnetCidr
>                 routes:
>                 - ip_netmask: 169.254.169.254/32
>                   next_hop:
>                     get_param: EC2MetadataIp
>               - type: linux_bridge
>                 name: bridge_name
>                 dns_servers:
>                   get_param: DnsServers
>                 use_dhcp: false
>                 addresses:
>                 - ip_netmask:
>                     get_param: ExternalIpSubnet
>                 routes:
>                 - default: true
>                   next_hop:
>                     get_param: ExternalInterfaceDefaultRoute
>                 members:
>                 - type: linux_bond
>                   name: bond1
>                   bonding_options: mode=1
>                   members:
>                   - type: interface
>                     name: nic2
>                     primary: true
>                   - type: interface
>                     name: nic3
>               - type: vlan
>                 device: bond1
>                 vlan_id:
>                   get_param: InternalApiNetworkVlanID
>                 addresses:
>                 - ip_netmask:
>                     get_param: InternalApiIpSubnet
>               - type: vlan
>                 device: bond1
>                 vlan_id:
>                   get_param: StorageNetworkVlanID
>                 addresses:
>                 - ip_netmask:
>                     get_param: StorageIpSubnet
>               - type: vlan
>                 device: bond1
>                 vlan_id:
>                   get_param: StorageMgmtNetworkVlanID
>                 addresses:
>                 - ip_netmask:
>                     get_param: StorageMgmtIpSubnet
>               - type: vlan
>                 device: bond1
>                 vlan_id:
>                   get_param: TenantNetworkVlanID
>                 addresses:
>                 - ip_netmask:
>                     get_param: TenantIpSubnet
> outputs:
>   OS::stack_id:
>     description: The OsNetConfigImpl resource.
>     value:
>       get_resource: OsNetConfigImpl
>
>
> _______________________________________________
> users mailing list
> users@lists.rdoproject.org
> http://lists.rdoproject.org/mailman/listinfo/users
>
> To unsubscribe: users-unsubscribe@lists.rdoproject.org
>
The NIC config looks correct for putting the External network on the
native VLAN. If I had to guess what the problem is, I would start at the
switch. The switch configuration will be different when hosting the
External network as a native VLAN rather than a trunked (tagged) VLAN.
Are you certain that the External network was being delivered only as a
native VLAN, and that the switch wasn't adding VLAN tags for the
External network?
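As a purely illustrative example of the difference (Cisco IOS syntax; the port name and VLAN 100 for External are invented for this sketch, 201-204 match the tagged VLANs in your config -- your switch will differ):

```
! Trunk port delivering External untagged while tagging the rest.
interface port-channel1
 switchport mode trunk
 switchport trunk native vlan 100            ! External arrives untagged
 switchport trunk allowed vlan 100,201-204   ! the rest arrive tagged
```

If instead the switch tags the External network on the wire, the host-side bridge (which expects untagged frames) will never see that traffic.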
What is the reason you would prefer to have the External network on the
native VLAN? The External network is used for hosting the public APIs,
so it should function the same on a tagged VLAN as it does on a native
VLAN. In any case, it should work either way, provided the switch is set
up correctly. You can always use a different VLAN/subnet for Neutron
external network(s) than you do for the public API, if you have separate
IP space. Of course, when you create the Neutron external network, you
would use type 'flat' for native VLAN, or type 'vlan' with the VLAN ID
specified as the 'segmentation_id' for tagged networks.
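For example (a sketch only: the network name "public" and VLAN ID 100 are placeholders; "datacentre" matches the NeutronNetworkVLANRanges physnet in your environment file):

```
# External delivered untagged (native VLAN) -> flat provider network:
openstack network create public --external \
    --provider-network-type flat --provider-physical-network datacentre

# External delivered tagged -> vlan provider network, with the VLAN tag
# as the segmentation ID:
openstack network create public --external \
    --provider-network-type vlan --provider-physical-network datacentre \
    --provider-segment 100
```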
I also wonder why you are using a Linux bridge. The OVS driver gets a
lot more testing, and the two should have roughly equivalent performance
these days. I know that Linux bridge worked fine with the External
network on a native VLAN back in the Icehouse/Juno timeframe, but I've
personally only been testing OVS bridges in recent releases.
--
Dan Sneddon | Senior Principal Software Engineer
dsneddon@redhat.com |
redhat.com/openstack
dsneddon:irc | @dxs:twitter