[rdo-users] Possible regression for podman starting containers after a hard shutdown
Benjamin Zapiec
zapiec at gonicus.de
Thu Jul 8 09:47:24 UTC 2021
Hello again,

I was looking at the update to Victoria, hoping that the patch to the
puppet-tripleo package had landed there. But it doesn't look like the
following package contains the fix:

https://trunk.rdoproject.org/centos8-victoria/component/tripleo/current/puppet-tripleo-13.6.3-0.20210707165349.a29d7cb.el8.noarch.rpm

Is Victoria using another package/mechanism to deploy/install the
Puppet recipes to the overcloud nodes? Or is this just a minor flaw I
need to create a merge request for somewhere?
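
In case someone wants to verify this themselves, a minimal sketch of
how the package can be inspected (assuming the RPM has been downloaded
locally and that rpm, rpm2cpio and cpio are available):

    # Check the package changelog for a reference to the fix
    rpm -qp --changelog puppet-tripleo-13.6.3-0.20210707165349.a29d7cb.el8.noarch.rpm

    # Or unpack the package and inspect the shipped Puppet manifests directly
    mkdir puppet-tripleo-unpacked && cd puppet-tripleo-unpacked
    rpm2cpio ../puppet-tripleo-13.6.3-0.20210707165349.a29d7cb.el8.noarch.rpm | cpio -idm
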
Best regards
Benjamin Zapiec
On 07.07.21 15:31, Alex Schultz wrote:
> It's likely that the puppet-tripleo patch needs to be applied in Ussuri.
> I think we switched back to the puppet version at some point, which would
> explain why this is still a problem.
> https://review.opendev.org/c/openstack/puppet-tripleo/+/799827
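>
> For backporting the change by hand, a minimal sketch using Gerrit's
> standard change refs (the stable/ussuri branch name and patchset 1 are
> assumptions; the current patchset is listed on the change page):
>
>     git clone https://opendev.org/openstack/puppet-tripleo
>     cd puppet-tripleo
>     # assumes a stable/ussuri branch exists in the repo
>     git checkout stable/ussuri
>     # fetch patchset 1 of change 799827 (refs/changes/<last 2 digits>/<change>/<patchset>)
>     git fetch https://review.opendev.org/openstack/puppet-tripleo refs/changes/27/799827/1
>     git cherry-pick FETCH_HEAD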
>
> On Wed, Jul 7, 2021 at 7:01 AM Benjamin Zapiec <zapiec at gonicus.de> wrote:
>
> Hello *,
>
> I have encountered an error on nearly every compute node after a hard
> power loss. The neutron-haproxy-ovnmeta container didn't come up
> after the hard shutdown. I've posted the exact error message below
> under "Error message".
>
> After a quick search I found the following bug report in the Red Hat
> bug tracker. It looks like the exact same issue:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1797892
>
> But it is supposed to be fixed in the "Train" release.
>
> Any ideas on this? Is it the same issue or something similar?
> Would an update to Victoria solve this issue?
>
> It's not a big deal for us, since we try to avoid unexpected
> shutdowns, but they may still happen due to power loss.
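>
> The error output itself suggests a manual workaround: the stale
> container has to be removed so the agent can recreate it. A minimal
> sketch (the name filter and container ID below are taken from the
> error message):
>
>     # list leftover ovnmeta haproxy containers from before the power loss
>     podman ps -a --filter name=neutron-haproxy-ovnmeta
>
>     # remove the stale container so the metadata agent can recreate it
>     podman rm 37b687a1d31adf73275ffdcadc3c5fd5ed1a72fe0c2547f38ec571556480c852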
>
>
> Now some details on the OpenStack setup: we are using the Ussuri
> release with a pretty basic TripleO setup, running 3 controller nodes
> and 8 computes with Ceph storage attached.
>
> I don't think there is anything special in our configuration that
> would favor this type of issue, but if you think otherwise I will
> post the relevant configuration details.
>
>
> Error message:
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event [-] Unexpected exception in notify_loop: neutron_lib.exceptions.ProcessExecutionError: Exit code: 125; Stdin: ; Stdout: Starting a new child container neutron-haproxy-ovnmeta-4c8e69e6-3e1a-4d7e-bde0-33241c0d383e
> ; Stderr: Error: error creating container storage: the container name "neutron-haproxy-ovnmeta-4c8e69e6-3e1a-4d7e-bde0-33241c0d383e" is already in use by "37b687a1d31adf73275ffdcadc3c5fd5ed1a72fe0c2547f38ec571556480c852". You have to remove that container to be able to reuse that name.: that name is already in use
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event Traceback (most recent call last):
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event   File "/usr/lib/python3.6/site-packages/ovsdbapp/event.py", line 143, in notify_loop
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event     match.run(event, row, updates)
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event   File "/usr/lib/python3.6/site-packages/neutron/agent/ovn/metadata/agent.py", line 83, in run
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event     self.agent.update_datapath(str(row.datapath.uuid))
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event   File "/usr/lib/python3.6/site-packages/neutron/agent/ovn/metadata/agent.py", line 342, in update_datapath
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event     self.provision_datapath(datapath)
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event   File "/usr/lib/python3.6/site-packages/neutron/agent/ovn/metadata/agent.py", line 457, in provision_datapath
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event     network_id=datapath)
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event   File "/usr/lib/python3.6/site-packages/neutron/agent/ovn/metadata/driver.py", line 200, in spawn_monitored_metadata_proxy
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event     pm.enable()
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/external_process.py", line 90, in enable
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event     run_as_root=self.run_as_root)
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py", line 724, in execute
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event     run_as_root=run_as_root)
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/utils.py", line 147, in execute
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event     returncode=returncode)
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event neutron_lib.exceptions.ProcessExecutionError: Exit code: 125; Stdin: ; Stdout: Starting a new child container neutron-haproxy-ovnmeta-4c8e69e6-3e1a-4d7e-bde0-33241c0d383e
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event ; Stderr: Error: error creating container storage: the container name "neutron-haproxy-ovnmeta-4c8e69e6-3e1a-4d7e-bde0-33241c0d383e" is already in use by "37b687a1d31adf73275ffdcadc3c5fd5ed1a72fe0c2547f38ec571556480c852". You have to remove that container to be able to reuse that name.: that name is already in use
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event
> 2021-06-23 08:11:28.928 4823 ERROR ovsdbapp.event
> 2021-06-23 08:16:34.975 4823 INFO neutron.agent.ovn.metadata.agent [-] Port 983831dd-8aae-4b01-9953-8cbce86f89a8 in datapath d98d35e0-765e-4128-bf7f-ab71b2701986 bound to our chassis
> 2021-06-23 08:16:35.965 4823 ERROR neutron.agent.linux.utils [-] Exit code: 125; Stdin: ; Stdout: Starting a new child container neutron-haproxy-ovnmeta-d98d35e0-765e-4128-bf7f-ab71b2701986
> ; Stderr: Error: error creating container storage: the container name "neutron-haproxy-ovnmeta-d98d35e0-765e-4128-bf7f-ab71b2701986" is already in use by "bd037c9c5688efd56ccc0521bdebc0bcdc261ffd584e151da6a8e690aefdfb22". You have to remove that container to be able to reuse that name.: that name is already in use
>
--
Benjamin Zapiec <benjamin.zapiec at gonicus.de> (System Engineer)
* GONICUS GmbH * Moehnestrasse 55 (Kaiserhaus) * D-59755 Arnsberg
* Tel.: +49 2932 916-0 * Fax: +49 2932 916-245
* http://www.GONICUS.de
* Registered office: Moehnestrasse 55 * D-59755 Arnsberg
* Managing directors: Rainer Luelsdorf, Alfred Schroeder
* Chairman of the advisory board: Juergen Michels
* Arnsberg local court * HRB 1968

We fulfil our information obligations regarding data protection under
Articles 13 and 14 GDPR by publication on our website at
https://www.gonicus.de/datenschutz or by sending the information upon
your informal request.