[rdo-list] Overcloud pacemaker services restart behavior causes downtime

Pedro Sousa pgsousa at gmail.com
Thu Aug 4 13:29:50 UTC 2016


Hi Raoul,

this only happens when the node comes back online after booting. When I
stop the node with "pcs cluster stop", everything works fine, even if the
VIP is active on that node.
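
For anyone following along, the dependency chain Raoul describes below can be
sketched as a toy model. This is a simplification, not the real Pacemaker
scheduler logic, and the constraint list is just my reading of the thread:

```python
# Toy model of the ordering-constraint chain described below:
# VIP -(Optional)-> haproxy -(Mandatory)-> openstack-core -> services.
# Simplified semantics: a Mandatory ordering means restarting the first
# resource forces a restart of the second; an Optional ordering only
# influences ordering when both resources restart in the same transition,
# so it never *causes* a restart on its own.

# (first, then, kind) -- names taken from the thread, list is illustrative
CONSTRAINTS = [
    ("vip", "haproxy", "Optional"),
    ("haproxy", "openstack-core", "Mandatory"),
    ("openstack-core", "nova-api", "Mandatory"),
    ("openstack-core", "neutron-server", "Mandatory"),
]

def restart_set(resource):
    """Return every resource restarted when `resource` restarts."""
    restarted = {resource}
    changed = True
    while changed:
        changed = False
        for first, then, kind in CONSTRAINTS:
            if kind == "Mandatory" and first in restarted and then not in restarted:
                restarted.add(then)
                changed = True
    return restarted

# Moving or restarting a VIP alone should not cascade (Optional constraint):
print(sorted(restart_set("vip")))      # -> ['vip']
# Restarting haproxy cascades through openstack-core to the services:
print(sorted(restart_set("haproxy")))
# -> ['haproxy', 'neutron-server', 'nova-api', 'openstack-core']
```

Which is why, per Raoul's reasoning, a VIP failing over by itself should not
take the services down, and something else must be restarting haproxy here.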

Anyway I will file a bugzilla.

Thanks




On Thu, Aug 4, 2016 at 1:51 PM, Raoul Scarazzini <rasca at redhat.com> wrote:

> Ok, so we are on Mitaka. Here the VIPs are an (Optional) dependency
> for haproxy, which is a (Mandatory) dependency for openstack-core, on
> which all the others (nova, neutron, cinder and so on) depend.
> This means that if you reboot a controller on which a VIP is active,
> you will NOT see a restart of openstack-core, since haproxy will not
> be restarted, because of the Optional constraint.
> So the behavior you're describing is quite strange.
> Maybe other components are in play here. Can you open a bugzilla with
> the exact steps you're using to reproduce the problem and share the
> sosreports of your systems?
>
> Thanks,
>
> --
> Raoul Scarazzini
> rasca at redhat.com
>
> On 04/08/2016 12:34, Pedro Sousa wrote:
> > Hi,
> >
> > I use mitaka from centos sig repos:
> >
> > Centos 7.2
> > centos-release-openstack-mitaka-1-3.el7.noarch
> > pacemaker-cli-1.1.13-10.el7_2.2.x86_64
> > pacemaker-1.1.13-10.el7_2.2.x86_64
> > pacemaker-remote-1.1.13-10.el7_2.2.x86_64
> > pacemaker-cluster-libs-1.1.13-10.el7_2.2.x86_64
> > pacemaker-libs-1.1.13-10.el7_2.2.x86_64
> > corosynclib-2.3.4-7.el7_2.3.x86_64
> > corosync-2.3.4-7.el7_2.3.x86_64
> > resource-agents-3.9.5-54.el7_2.10.x86_64
> >
> > Let me know if you need more info.
> >
> > Thanks
> >
> >
> >
> > On Thu, Aug 4, 2016 at 11:21 AM, Raoul Scarazzini <rasca at redhat.com> wrote:
> >
> >     Hi,
> >     can you please give us more information about the environment you are
> >     using? Release, package versions and so on.
> >
> >     --
> >     Raoul Scarazzini
> >     rasca at redhat.com
> >
> >     On 04/08/2016 11:34, Pedro Sousa wrote:
> >     > Hi all,
> >     >
> >     > I have an overcloud with 3 controller nodes, everything is working
> >     > fine, the problem is when I reboot one of the controllers. When the
> >     > node comes online, all the services (nova-api, neutron-server) on
> >     > the other nodes are also restarted, causing a couple of minutes of
> >     > downtime until everything is recovered.
> >     >
> >     > In the example below I restarted controller2 and I see these
> >     > messages on controller0. My question is if this is the expected
> >     > behavior, because in my opinion it shouldn't happen.
> >     >
> >     > Authorization Failed: Service Unavailable (HTTP 503)
> >     > == Glance images ==
> >     > Service Unavailable (HTTP 503)
> >     > == Nova managed services ==
> >     > No handlers could be found for logger "keystoneauth.identity.generic.base"
> >     > ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)
> >     > == Nova networks ==
> >     > No handlers could be found for logger "keystoneauth.identity.generic.base"
> >     > ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)
> >     > == Nova instance flavors ==
> >     > No handlers could be found for logger "keystoneauth.identity.generic.base"
> >     > ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)
> >     > == Nova instances ==
> >     > No handlers could be found for logger "keystoneauth.identity.generic.base"
> >     > ERROR (ServiceUnavailable): Service Unavailable (HTTP 503)
> >     > [root at overcloud-controller-0 ~]# openstack-status
> >     >
> >     > Broadcast message from systemd-journald at overcloud-controller-0.localdomain
> >     > (Thu 2016-08-04 09:22:31 UTC):
> >     >
> >     > haproxy[2816]: proxy neutron has no server available!
> >     >
> >     > Thanks,
> >     > Pedro Sousa
> >     >
> >     >
> >     >
> >     >
> >     > _______________________________________________
> >     > rdo-list mailing list
> >     > rdo-list at redhat.com
> >     > https://www.redhat.com/mailman/listinfo/rdo-list
> >     >
> >     > To unsubscribe: rdo-list-unsubscribe at redhat.com
> >     >
> >
> >
>

