Thanks for the follow-up. I think I am getting it. When you say multi-host,
you are referring to the multi_host setting in nova.conf, correct?
If I want the traffic to be routed through the controller, I should set
that to false and not install nova-network on the compute hosts.
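For reference, a minimal sketch of the relevant nova.conf options for that single-host setup, based on the Kilo nova-network option names (the interface names here are placeholders and will differ per deployment):

```ini
# /etc/nova/nova.conf on the controller (the single network host)
[DEFAULT]
network_manager = nova.network.manager.FlatDHCPManager
# Route all tenant traffic through this one network host
multi_host = False
# Example interface names only; adjust to your hardware
flat_interface = eth1
flat_network_bridge = br100
public_interface = eth0
```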
--Brian
On Fri, Jul 24, 2015 at 12:26 PM, Brent Eagles <beagles(a)redhat.com> wrote:
On Thu, Jul 23, 2015 at 04:26:08PM -0500, brian lee wrote:
> So I have made headway on this problem. It was related to my networking. In
> order to get nova networking working you have to install the
> openstack-nova-network and openstack-nova-api packages on your compute
> nodes as well. You did not have to do this in Icehouse.
>
> Once that is installed, you then need to configure the nova.conf per the
> doc:
>
http://docs.openstack.org/kilo/install-guide/install/yum/content/nova-net...
Note that if you are using multi-host networks then each compute
node is also a network controller, so nova-network and nova-api will be
required on each compute node.
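To illustrate the multi-host case Brent describes, a hedged sketch of what each compute node's nova.conf might contain, using option names from the Kilo nova-network docs (interface names are assumptions):

```ini
# /etc/nova/nova.conf on EACH compute node (multi-host mode)
[DEFAULT]
network_manager = nova.network.manager.FlatDHCPManager
# Each compute node acts as its own network controller
multi_host = True
send_arp_for_ha = True
# Example interface names only; adjust to your hardware
flat_interface = eth1
flat_network_bridge = br100
public_interface = eth0
```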
> Once I have done that I am now able to get the instances started. On the
> compute node it does create the br100 bridge device. But it does not
> create it on the controller.
You can check whether openstack-nova-network is running on the controller - but
if you are using multi-host networking this is probably irrelevant.
> Now I am stuck where I can get the instance up, but I can not ping it from
> the controller/outside network. Any idea what needs to be done to get the
> controller to start its bridge so they can talk together?
>
> --Brian
IIRC, if you are using multi-host networks, keep in mind that while each
compute node is a network controller, a node is only a network controller
for an instance's tenant network IF that node has an instance for that
tenant running on it. If there isn't an instance for a particular tenant on
a given node, there may be no bridge for that tenant network on it. This
has to do with how the networks are provisioned: the bridges are set up
where a tenant network is required, i.e. where an instance has been booted.
Also, of course, there is no standalone network controller.
To get access to your guest, try going through the multi-host node
instead of the "controller" (which isn't a network controller in this
case).
If you *don't* use multi-host, then the network service should only
be required on one host.
Cheers,
Brent