My desired architecture includes a Linux VM which runs the controller
parts of OpenStack (including all of the API endpoints), one (for now)
bare-metal network node (L3, DHCP and maybe metadata) and many
bare-metal compute nodes. I'm trying to install thusly:
CONTROLLER=10.0.1.10
NETNODE=10.0.2.10
COMPUTENODES=10.0.2.20,10.0.2.21,10.0.2.22,...
TENANT_NET_TYPE=vlan
VLAN_RANGES=pubphysnet01,priphysnet01:100:199
BRIDGE_MAPPINGS=priphysnet01:br-bond0
BRIDGE_INTERFACES=br-bond0:bond0
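As I understand it, those settings should land in the ml2/OVS configuration roughly as below (file path and section names are from my own installs, so treat them as assumptions):

```ini
# /etc/neutron/plugin.ini -- assumed mapping of the variables above
[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_flat]
flat_networks = pubphysnet01

[ml2_type_vlan]
network_vlan_ranges = pubphysnet01,priphysnet01:100:199

[ovs]
bridge_mappings = priphysnet01:br-bond0
```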
packstack \
--neutron-l2-plugin=ml2 \
--neutron-ml2-type-drivers=flat,${TENANT_NET_TYPE} \
--neutron-ml2-tenant-network-types=${TENANT_NET_TYPE} \
--neutron-ml2-mechanism-drivers=openvswitch \
--neutron-ml2-flat-networks=pubphysnet01 \
--neutron-ml2-vlan-ranges="${VLAN_RANGES}" \
--neutron-ovs-bridge-mappings="${BRIDGE_MAPPINGS}" \
--neutron-ovs-bridge-interfaces="${BRIDGE_INTERFACES}" \
--neutron-l3-hosts=${NETNODE} \
--neutron-dhcp-hosts=${NETNODE} \
--neutron-metadata-hosts=${NETNODE} \
--install-hosts=${CONTROLLER},${COMPUTENODES}
This presents two problems:
1) (less intrusive, but annoying) neutron-openvswitch-agent is
installed and enabled on the controller VM, which I don't want (I can
disable the service, run 'neutron agent-delete', and unwind the OVS
configuration it created)
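For what it's worth, the unwinding I describe looks roughly like this on my controller (service/bridge names are the RDO defaults on my systems; the agent-delete ID placeholder is illustrative, and everything is guarded so a missing command is a no-op):

```shell
# Hypothetical cleanup for the unwanted OVS agent on the controller VM.
cleanup_controller_ovs() {
    # Stop and disable the agent service:
    service neutron-openvswitch-agent stop 2>/dev/null || true
    chkconfig neutron-openvswitch-agent off 2>/dev/null || true
    # Remove the stale agent record; the ID comes from 'neutron agent-list':
    # neutron agent-delete <agent-id>
    # Unwind the bridge the agent created:
    ovs-vsctl --if-exists del-br br-int 2>/dev/null || true
}

cleanup_controller_ovs
```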
2) neutron-server gets launched on the network node (10.0.2.10) as
well as the controller (10.0.1.10). Curiously, the service is not
enabled on the network node (chkconfig --list shows 'off'), but it is
running when packstack completes. This causes mass confusion for the
agents, as there are two neutron-servers running at the same time. It
results in obscure problems, including apparent messaging failures,
and agents reporting tables missing from the database (neutron.conf on
the network node doesn't have the SQL connection info).
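A quick process-level check I've been using to spot the stray copy, independent of what chkconfig reports (nothing packstack-specific, and the exact process name is an assumption):

```shell
# Report whether a neutron-server process is running on this host,
# regardless of whether the service is enabled.
neutron_server_state() {
    if pgrep -x neutron-server >/dev/null 2>&1; then
        echo running
    else
        echo stopped
    fi
}

neutron_server_state
```

On the network node this reports "running" right after packstack completes, even though 'chkconfig --list neutron-server' shows it off.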
Am I doing something wrong here? Should it work as above? I can file a
bug or two, but thought I'd check here first....
Tnx,
~iain
> Am I doing something wrong here? Should it work as above? I can file a
> bug or two, but thought I'd check here first....
I've noticed that the openvswitch agent gets installed (and
configured) on the controller. This can actually cause the install to
fail if the controller does not have an interface matching
CONFIG_NEUTRON_OVS_TUNNEL_IF.
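A preflight check along these lines would surface that before packstack runs (the "eth1" default here is only an example value for CONFIG_NEUTRON_OVS_TUNNEL_IF):

```shell
# Fail early if the interface named by CONFIG_NEUTRON_OVS_TUNNEL_IF is
# absent on this host, instead of letting packstack fail mid-install.
have_iface() {
    ip link show "$1" >/dev/null 2>&1
}

if have_iface "${CONFIG_NEUTRON_OVS_TUNNEL_IF:-eth1}"; then
    echo "tunnel interface present"
else
    echo "tunnel interface missing; packstack would fail here"
fi
```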
I think you should open bugs on both of these issues.
--
Lars Kellogg-Stedman <lars@redhat.com> | larsks @ irc
Cloud Engineering / OpenStack | " " @ twitter