[Reminder] No RDO meeting until January 9, 2019
by Haïkel
Hi,
As a reminder, there will be no RDO meetings for the next two weeks, as
those have been cancelled:
Dec 26: cancelled
Jan 2: cancelled
So the next meeting is on January 9, 2019!
Have fun during the end-of-year celebrations!
Regards,
H.
[Meeting] RDO meeting (2018-12-19) minutes
by Haïkel
==============================
#rdo: RDO meeting - 2018-12-19
==============================
Meeting started by number80 at 15:04:15 UTC. The full logs are
available at
http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_12_19/2018/rdo...
.
Meeting summary
---------------
* roll call (number80, 15:04:28)
* Revisit haproxy situation (number80, 15:08:47)
* AGREED: use F28 and/or PaaS SIG build of haproxy 1.8 (number80,
15:22:28)
* bandini submitted patches for tripleo to support haproxy 1.8 without
breaking compat with 1.5 (number80, 15:23:15)
* ML migration (number80, 15:32:04)
* ping leanderthal about ML migration (number80, 15:35:27)
* open floor (number80, 15:35:39)
* baha requested reviewers for ppc64le container build job (number80,
15:37:47)
* LINK: https://review.rdoproject.org/r/#/c/17741/ (number80,
15:37:55)
* ACTION: number80 review it (number80, 15:38:06)
* amoralej will chair Jan, 9 meeting (number80, 15:39:04)
Meeting ended at 15:40:35 UTC.
Action items, by person
-----------------------
* number80
* number80 review it
People present (lines said)
---------------------------
* number80 (47)
* dciabrin_ (23)
* amoralej (22)
* Duck (15)
* ykarel (10)
* openstack (6)
* jpena (5)
* baha (3)
* bandini (2)
* PagliaccisCloud (1)
* rdogerrit (1)
Generated by `MeetBot`_ 0.1.4
[Meeting] RDO meeting (2018-12-12) minutes
by Haïkel
==============================
#rdo: RDO meeting - 2018-12-12
==============================
Meeting started by number80 at 15:02:11 UTC. The full logs are
available at
http://eavesdrop.openstack.org/meetings/rdo_meeting___2018_12_12/2018/rdo...
.
Meeting summary
---------------
* roll call (number80, 15:02:43)
* haproxy 1.8 packaging (number80, 15:04:55)
* LINK: https://github.com/mwhahaha/kolla/tree/fedora (mwhahaha,
15:18:19)
* AGREED: revisit haproxy 1.8 situation next week (number80,
15:26:48)
* What to do with upcoming meetings (number80, 15:28:02)
* AGREED: Cancelling December 26 and January 2 RDO meetings
(number80, 15:30:05)
* ACTION: number80 Notify the list about meetings cancelled
(number80, 15:32:11)
* FOSDEM and DevConf.CZ booths and swag (number80, 15:32:17)
* we need volunteers for openstack and RDO booths at FOSDEM/devconf.cz
(number80, 15:34:23)
* submit ideas for RDO goodies to leanderthal (number80, 15:34:32)
* next meeting chair (number80, 15:35:14)
* ACTION: number80 to chair next week (number80, 15:36:52)
* open floor (number80, 15:36:59)
Meeting ended at 16:00:16 UTC.
Action items, by person
-----------------------
* number80
* number80 Notify the list about meetings cancelled
* number80 to chair next week
People present (lines said)
---------------------------
* number80 (62)
* amoralej (56)
* mwhahaha (29)
* ykarel (23)
* dciabrin (17)
* rdogerrit (8)
* Vorrtex (8)
* openstack (7)
* Duck (6)
* bandini (5)
* quiquell (4)
* jpena (2)
* moguimar (1)
Generated by `MeetBot`_ 0.1.4
OpenStack Queens Magnum swarm error
by Ignazio Cassano
Hello everyone,
I installed Queens on CentOS with Magnum and I am trying to create a Swarm
cluster with one master and one node. The image I used is fedora-atomic 27
update 04.
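For reference, the template and cluster were created roughly along these
lines (the flavor, keypair and network names below are placeholders, not
the exact ones I used):

openstack coe cluster template create swarm-atomic27 \
    --image fedora-atomic-27 \
    --coe swarm \
    --external-network public \
    --keypair mykey \
    --flavor m1.small

openstack coe cluster create swarm-test \
    --cluster-template swarm-atomic27 \
    --master-count 1 \
    --node-count 1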
The generated stack ended with an error, and magnum-conductor reports:
Dec 12 09:51:17 tst2-osctrl01 magnum-conductor: 2018-12-12 09:51:17.304
17964 WARNING magnum.drivers.heat.template_def
[req-bfa19294-5671-47a0-b0ac-9e544f0e5e38 - - - - -] stack does not have
output_key api_address
Dec 12 09:51:17 tst2-osctrl01 magnum-conductor: 2018-12-12 09:51:17.305
17964 WARNING magnum.drivers.heat.template_def
[req-bfa19294-5671-47a0-b0ac-9e544f0e5e38 - - - - -] stack does not have
output_key swarm_masters
Dec 12 09:51:17 tst2-osctrl01 magnum-conductor: 2018-12-12 09:51:17.306
17964 WARNING magnum.drivers.heat.template_def
[req-bfa19294-5671-47a0-b0ac-9e544f0e5e38 - - - - -] stack does not have
output_key swarm_nodes
Dec 12 09:51:17 tst2-osctrl01 magnum-conductor: 2018-12-12 09:51:17.306
17964 WARNING magnum.drivers.heat.template_def
[req-bfa19294-5671-47a0-b0ac-9e544f0e5e38 - - - - -] stack does not have
output_key discovery_url
Dec 12 09:51:17 tst2-osctrl01 magnum-conductor: 2018-12-12 09:51:17.317
17964 ERROR magnum.drivers.heat.driver
[req-bfa19294-5671-47a0-b0ac-9e544f0e5e38 - - - - -] Cluster error, stack
status: CREATE_FAILED, stack_id: 306bd83a-7878-4d94-8ed0-1d297eec9768,
reason: Resource CREATE failed: WaitConditionFailure:
resources.swarm_nodes.resources[0].resources.node_wait_condition:
swarm-agent service failed to start.
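To see which outputs and resources the failed stack actually produced (the
warnings above complain that the output keys are missing), something like
this can be run against the stack id from the error:

# list the outputs and nested resources Heat created for the failed stack
openstack stack output list 306bd83a-7878-4d94-8ed0-1d297eec9768
openstack stack resource list --nested-depth 2 306bd83a-7878-4d94-8ed0-1d297eec9768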
I connected to the master node to verify whether the swarm-agent service
was running. In the cloud-init log I found:
requests.exceptions.ConnectionError:
HTTPConnectionPool(host='10.102.184.190', port=5000): Max retries exceeded
with url: /v3//auth/tokens (Caused by
NewConnectionError('<urllib3.connection.HTTPConnection object at
0x7f0814d4d250>: Failed to establish a new connection: [Errno 110]
Connection timed out',))
Cloud-init v. 0.7.9 running 'modules:final' at Wed, 12 Dec 2018 08:45:31
+0000. Up 55.54 seconds.
2018-12-12 08:47:45,858 - util.py[WARNING]: Failed running
/var/lib/cloud/instance/scripts/part-005 [1]
/var/lib/cloud/instance/scripts/part-006: line 13: /etc/etcd/etcd.conf: No
such file or directory
/var/lib/cloud/instance/scripts/part-006: line 26: /etc/etcd/etcd.conf: No
such file or directory
/var/lib/cloud/instance/scripts/part-006: line 38: /etc/etcd/etcd.conf: No
such file or directory
2018-12-12 08:47:45,870 - util.py[WARNING]: Failed running
/var/lib/cloud/instance/scripts/part-006 [1]
Configuring docker network ...
Configuring docker network service ...
Removed
/etc/systemd/system/multi-user.target.wants/docker-storage-setup.service.
New size given (1280 extents) not larger than existing size (4863 extents)
ERROR: There is not enough free space in volume group atomicos to create
data volume of size MIN_DATA_SIZE=2G.
2018-12-12 08:47:46,206 - util.py[WARNING]: Failed running
/var/lib/cloud/instance/scripts/part-010 [1]
+ systemctl stop docker
+ echo 'starting services'
starting services
+ systemctl daemon-reload
+ for service in etcd docker.socket docker swarm-manager
+ echo 'activating service etcd'
activating service etcd
+ systemctl enable etcd
Failed to enable unit: Unit file etcd.service does not exist.
+ systemctl --no-block start etcd
Failed to start etcd.service: Unit etcd.service not found.
+ for service in etcd docker.socket docker swarm-manager
+ echo 'activating service docker.socket'
activating service docker.socket
+ systemctl enable docker.socket
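A quick way to check the state of the master node itself would probably be
something like:

systemctl list-unit-files | grep -i etcd   # is there any etcd unit at all?
rpm -q etcd                                # is the etcd package in the image?
sudo vgs atomicos                          # free space left in the volume group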
1) It seems the etcd service is not installed.
2) The instance needs to contact the controller on port 5000 (is that
correct?)
Please help me.
Regards
Ignazio
openvswitch -> dpdk -> ... dependency
by iain MacDonnell
I just tried to apply the latest Rocky updates, and found that
python2-openvswitch-2.9.0-3.el7.noarch.rpm has been replaced by
python-openvswitch-2.10.1-1.el7.x86_64.rpm, and that has dependencies
on a bunch of dpdk libraries. After Googling around, I found dpdk in
the Extras repo (I have not had to install anything from Extras until
now), but after installing that, I'm getting spew like this:
PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open
shared object file: No such file or directory
PMD: net_mlx5: cannot initialize PMD due to missing run-time
dependency on rdma-core libraries (libibverbs, libmlx5)
PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open
shared object file: No such file or directory
PMD: net_mlx4: cannot initialize PMD due to missing run-time
dependency on rdma-core libraries (libibverbs, libmlx4)
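Presumably installing the rdma-core bits that the messages refer to would
quiet this, e.g. something like:

yum install -y rdma-core libibverbs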
I don't even use openvswitch! I use linuxbridge... but because
openstack-neutron requires python2-ovsdbapp, I'm getting dragged into
this dependency hell.
I'm a bit miffed about having to deal with this when updating a
supposedly stable release.
Could (all of) the packages required by the new python-openvswitch be
added to the openstack-rocky repo?
Or could the requirement for python2-ovsdbapp be dropped, since it's
not actually required for all deployments?
~iain