From bipul.gogoi at gmail.com Sun Dec 1 03:22:33 2019
From: bipul.gogoi at gmail.com (Bipul)
Date: Sun, 1 Dec 2019 08:52:33 +0530
Subject: [rdo-users] Issue while creating an instance in nova and error log while restarting Nova compute service
Message-ID:

Dear users,

I am having a problem while creating an instance: Nova is not able to determine a valid host. I have noticed that when I restart the nova compute service, it restarts successfully BUT it logs errors in nova-compute.log:

2019-11-30 06:29:30.324 145061 ERROR nova.compute.resource_tracker [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 27a39914-a509-4261-90f5-8135ad471843: : ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 27a39914-a509-4261-90f5-8135ad471843:
2019-11-30 06:29:30.388 145061 ERROR nova.scheduler.client.report [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] [None] *Failed to retrieve resource provider tree from placement API* for UUID 27a39914-a509-4261-90f5-8135ad471843. Got 500:

The UUID is correct:

MariaDB [(none)]> select uuid from nova.compute_nodes where host='openstack.bipul.com';
+--------------------------------------+
| uuid                                 |
+--------------------------------------+
| 27a39914-a509-4261-90f5-8135ad471843 |
+--------------------------------------+
1 row in set (0.000 sec)

MariaDB [(none)]>

1) nova.conf is not changed; it is just the same as the one that comes with the distribution.
2) OpenStack overall health seems OK; all services are in the running state.
3) Problem: the placement URL running on port 8778 (URL: http://:8778/placement) is showing an internal server error (500) when accessed via a web browser or curl.
4) nova-status upgrade check is also showing an error: InternalServerError: Internal Server Error (HTTP 500)
5) I followed the standard method of installation described in https://www.rdoproject.org/install/packstack/
6) Attached is the log output during a nova compute service restart, plus the nova service status.

Appreciate all your help.

Thanks
Bipul
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
<< Nova log >>

2019-11-30 06:29:27.391 145061 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge, noop
2019-11-30 06:29:28.560 145061 INFO nova.virt.driver [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
2019-11-30 06:29:29.228 145061 WARNING os_brick.initiator.connectors.remotefs [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Connection details not present. RemoteFsClient may not initialize properly.
2019-11-30 06:29:29.244 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "dhcpbridge" from group "DEFAULT" is deprecated for removal (nova-network is deprecated, as are any related configuration options.). Its value may be silently ignored in the future.
2019-11-30 06:29:29.244 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "dhcpbridge_flagfile" from group "DEFAULT" is deprecated for removal (nova-network is deprecated, as are any related configuration options.). Its value may be silently ignored in the future.
2019-11-30 06:29:29.246 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "force_dhcp_release" from group "DEFAULT" is deprecated for removal (nova-network is deprecated, as are any related configuration options.). Its value may be silently ignored in the future.
2019-11-30 06:29:29.250 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "force_snat_range" from group "DEFAULT" is deprecated for removal (nova-network is deprecated, as are any related configuration options.). Its value may be silently ignored in the future.
2019-11-30 06:29:29.272 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "url" from group "neutron" is deprecated for removal (Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, "url" will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the endpoint_override option instead.). Its value may be silently ignored in the future.
2019-11-30 06:29:29.274 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "vncserver_listen" from group "vnc" is deprecated. Use option "server_listen" from group "vnc".
2019-11-30 06:29:29.275 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "vncserver_proxyclient_address" from group "vnc" is deprecated. Use option "server_proxyclient_address" from group "vnc".
2019-11-30 06:29:29.279 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (live_migration_uri is deprecated for removal in favor of two other options that allow to change live migration scheme and target URI: ``live_migration_scheme`` and ``live_migration_inbound_addr`` respectively.). Its value may be silently ignored in the future.
2019-11-30 06:29:29.296 145061 INFO nova.service [-] Starting compute node (version 19.0.3-1.el7)
2019-11-30 06:29:29.373 145061 INFO nova.virt.libvirt.driver [-] Connection event '1' reason 'None'
2019-11-30 06:29:29.398 145061 INFO nova.virt.libvirt.host [-] Libvirt host capabilities [XML dump flattened in the original archive; recoverable details: host UUID 98761abc-dd6f-450a-8f2f-13db228bd2ba, x86_64, Westmere-IBRS (Intel), 32-bit and 64-bit hvm guests via /usr/libexec/qemu-kvm, machine types pc-i440fx-rhel7.0.0 through pc-q35-rhel7.6.0]
2019-11-30 06:29:30.324 145061 ERROR nova.compute.resource_tracker [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 27a39914-a509-4261-90f5-8135ad471843: 500 Internal Server Error

Internal Server Error

The server encountered an internal error or misconfiguration and was unable to complete your request.

Please contact the server administrator at [no address given] to inform them of the time this error occurred, and the actions you performed just before this error.

More information about this error may be available in the server error log.

: ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 27a39914-a509-4261-90f5-8135ad471843:
2019-11-30 06:29:30.388 145061 ERROR nova.scheduler.client.report [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] [None] Failed to retrieve resource provider tree from placement API for UUID 27a39914-a509-4261-90f5-8135ad471843. Got 500: 500 Internal Server Error

Internal Server Error

The server encountered an internal error or misconfiguration and was unable to complete your request.

Please contact the server administrator at [no address given] to inform them of the time this error occurred, and the actions you performed just before this error.

More information about this error may be available in the server error log.

.
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] Error updating resources for node openstack.bipul.com.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 27a39914-a509-4261-90f5-8135ad471843
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager Traceback (most recent call last):
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 8148, in _update_available_resource_for_node
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     startup=startup)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 748, in update_available_resource
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     self._update_available_resource(context, resources, startup=startup)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 328, in inner
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     return f(*args, **kwargs)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 829, in _update_available_resource
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     self._update(context, cn, startup=startup)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 1036, in _update
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     self._update_to_placement(context, compute_node, startup)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 68, in wrapped_f
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     return Retrying(*dargs, **dkw).call(f, *args, **kw)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager
  File "/usr/lib/python2.7/site-packages/retrying.py", line 223, in call
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     return attempt.get(self._wrap_exception)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     six.reraise(self.value[0], self.value[1], self.value[2])
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 962, in _update_to_placement
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 873, in get_provider_tree_and_ensure_root
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 655, in _ensure_resource_provider
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 71, in wrapper
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     return f(self, *a, **k)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 522, in _get_providers_in_tree
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     raise
exception.ResourceProviderRetrievalFailed(uuid=uuid)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 27a39914-a509-4261-90f5-8135ad471843
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager

<< Nova service status >>

[root at openstack ~(keystone_admin)]# systemctl status openstack-nova-compute.service
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-12-01 03:05:02 EST; 4h 46min left
 Main PID: 1632 (nova-compute)
    Tasks: 22
   CGroup: /system.slice/openstack-nova-compute.service
           └─1632 /usr/bin/python2 /usr/bin/nova-compute

Dec 01 03:04:23 openstack.bipul.com systemd[1]: Starting OpenStack Nova Compute Server...
Dec 01 03:05:02 openstack.bipul.com systemd[1]: Started OpenStack Nova Compute Server.

[root at openstack ~(keystone_admin)]# systemctl status openstack-nova-conductor.service
● openstack-nova-conductor.service - OpenStack Nova Conductor Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-conductor.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-12-01 03:04:50 EST; 4h 46min left
 Main PID: 1224 (nova-conductor)
    Tasks: 3
   CGroup: /system.slice/openstack-nova-conductor.service
           ├─1224 /usr/bin/python2 /usr/bin/nova-conductor
           ├─2306 /usr/bin/python2 /usr/bin/nova-conductor
           └─2307 /usr/bin/python2 /usr/bin/nova-conductor

Dec 01 03:04:20 openstack.bipul.com systemd[1]: Starting OpenStack Nova Conductor Server...
Dec 01 03:04:50 openstack.bipul.com systemd[1]: Started OpenStack Nova Conductor Server.

[root at openstack ~(keystone_admin)]# systemctl status openstack-nova-consoleauth.service
●
openstack-nova-consoleauth.service - OpenStack Nova VNC console auth Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-consoleauth.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-12-01 03:04:46 EST; 4h 46min left
 Main PID: 1233 (nova-consoleaut)
    Tasks: 1
   CGroup: /system.slice/openstack-nova-consoleauth.service
           └─1233 /usr/bin/python2 /usr/bin/nova-consoleauth

Dec 01 03:04:20 openstack.bipul.com systemd[1]: Starting OpenStack Nova VNC console auth Server...
Dec 01 03:04:46 openstack.bipul.com systemd[1]: Started OpenStack Nova VNC console auth Server.

[root at openstack ~(keystone_admin)]# systemctl status openstack-nova-scheduler.service
● openstack-nova-scheduler.service - OpenStack Nova Scheduler Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-12-01 03:04:52 EST; 4h 45min left
 Main PID: 1215 (nova-scheduler)
    Tasks: 3
   CGroup: /system.slice/openstack-nova-scheduler.service
           ├─1215 /usr/bin/python2 /usr/bin/nova-scheduler
           ├─2321 /usr/bin/python2 /usr/bin/nova-scheduler
           └─2322 /usr/bin/python2 /usr/bin/nova-scheduler

Dec 01 03:04:20 openstack.bipul.com systemd[1]: Starting OpenStack Nova Scheduler Server...
Dec 01 03:04:52 openstack.bipul.com systemd[1]: Started OpenStack Nova Scheduler Server.

[root at openstack ~(keystone_admin)]#

From jpena at redhat.com Mon Dec 2 14:13:27 2019
From: jpena at redhat.com (Javier Pena)
Date: Mon, 2 Dec 2019 09:13:27 -0500 (EST)
Subject: [rdo-users] Issue while creating an instance in nova and error log while restarting Nova compute service
In-Reply-To:
References:
Message-ID: <1705297598.19154426.1575296007223.JavaMail.zimbra@redhat.com>

----- Original Message -----
> Dear users,
> I am getting problem while creating an instance . It not able to determine
> valid host.
> I have noticed when i restart nova compute service , it restarted
> successfully BUT It logs error on nova-compute.log
> 2019-11-30 06:29:30.324 145061 ERROR nova.compute.resource_tracker
> [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] Skipping removal of
> allocations for deleted instances: Failed to retrieve allocations for
> resource provider 27a39914-a509-4261-90f5-8135ad471843:
> : ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations
> for resource provider 27a39914-a509-4261-90f5-8135ad471843:
> 2019-11-30 06:29:30.388 145061 ERROR nova.scheduler.client.report
> [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] [None] Failed to
> retrieve resource provider tree from placement API for UUID
> 27a39914-a509-4261-90f5-8135ad471843. Got 500:
> UUID is correct
> MariaDB [(none)]> select uuid from nova.compute_nodes where host='openstack.bipul.com';
> +--------------------------------------+
> | uuid                                 |
> +--------------------------------------+
> | 27a39914-a509-4261-90f5-8135ad471843 |
> +--------------------------------------+
> 1 row in set (0.000 sec)
> MariaDB [(none)]>
> 1) nova.conf is not changed , It just the same which comes with the
> distribution
> 2) Openstack overall health seems OK, all services are in running state
> 3) Problem : placement url running on port 8778 ( URL : http://:8778/placement )
> is showing internal server error (500) while accessing
> via web browser or curl .
> 4) nova-status upgrade check also showing error InternalServerError: Internal
> Server Error (HTTP 500)

Hi,

It looks like the Placement service is having some trouble. Can you check the contents of /var/log/httpd/placement*.log? They should give you some pointers to continue troubleshooting.
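Alongside inspecting the placement logs, the failing provider UUID in the nova-compute errors can be cross-checked against the compute_nodes row mechanically. A minimal Python sketch, using the log line and UUID quoted earlier in this thread (the failing_provider_uuid helper is illustrative, not part of nova):

```python
import re

# Log line quoted verbatim from the nova-compute.log excerpt in this thread.
LOG_LINE = (
    "2019-11-30 06:29:30.388 145061 ERROR nova.scheduler.client.report "
    "[req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] [None] Failed to "
    "retrieve resource provider tree from placement API for UUID "
    "27a39914-a509-4261-90f5-8135ad471843. Got 500:"
)

# UUID returned by `select uuid from nova.compute_nodes where host=...` above.
DB_UUID = "27a39914-a509-4261-90f5-8135ad471843"

# Match the provider UUID that nova names after "for UUID" in the error line.
UUID_RE = re.compile(
    r"for UUID ([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"
)


def failing_provider_uuid(line):
    """Return the resource-provider UUID named in a placement ERROR line, or None."""
    if "ERROR" not in line:
        return None
    match = UUID_RE.search(line)
    return match.group(1) if match else None


# The provider nova complains about is the compute node's own record, so the
# problem here is the placement service returning HTTP 500, not a UUID mismatch.
assert failing_provider_uuid(LOG_LINE) == DB_UUID
```

Since the UUIDs match, the allocation record itself is consistent and the 500 from the placement WSGI app (port 8778) remains the thing to chase in the httpd logs.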
Regards,
Javier

> 5) followed the standard method of installation described in
> https://www.rdoproject.org/install/packstack/
> 6) Attached o/p of log during a nova compute service restart and nova service
> status
> Appreciate all your help
> Thanks
> Bipul
> _______________________________________________
> users mailing list
> users at lists.rdoproject.org
> http://lists.rdoproject.org/mailman/listinfo/users
> To unsubscribe: users-unsubscribe at lists.rdoproject.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rleander at redhat.com Mon Dec 2 14:31:17 2019
From: rleander at redhat.com (Rain Leander)
Date: Mon, 2 Dec 2019 15:31:17 +0100
Subject: [rdo-users] [Ask OpenStack] 7 updates about "rabbiitmq", "stein", "oslo_messaging", "ask-openstack", "aws" and more
Message-ID:

Hello RDO Stackers!

Ask OpenStack has these updates, please have a look:

- Issue while creating an instance in nova (Error out with port binding failure) (2 rev, 1 ans, 2 ans rev)
- Not able to set max connections in Mariadb for Openstack Stein Version (2 rev)
- OSA with Different Deployment Address (new question)
- vpnaas bandwidth is too low (2 rev)
- ACCESS_REFUSED on oslo service (2 rev)
- How to test StorPerf in Devstack? (new question)
- I'm evaluating Openstack Vs Azure key vault Vs AWS secrets manager AWS KMS. Has anyone done this analysis? (new question)

--
K Rain Leander
OpenStack Community Liaison
Open Source Program Office
https://www.rdoproject.org/
http://community.redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From amoralej at redhat.com Mon Dec 2 15:49:10 2019
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Mon, 2 Dec 2019 16:49:10 +0100
Subject: [rdo-users] [RDO] Weekly status for 2019-11-29
Message-ID:

Promotions

* Latest promotions (TripleO CI) for Stein from 27th November, Train from 29th November and Master from 29th November.
* Promotion to consistent is blocked in all releases due to a patch in openstack-selinux which has caused issues with building the package in CentOS7.
  * https://review.rdoproject.org/r/#/c/23885/
* Master and Stein are facing intermittent issues due to performance and deployments getting timed out; this is currently being worked on. Currently mainly seen in the RDO stein phase1 TripleO job.
  * https://bugs.launchpad.net/tripleo/+bug/1844446

Deps Update

* OVN and OpenVswitch are bumped to 2.12 in Ussuri
  * https://review.rdoproject.org/r/#/c/23355/
* Python-amqp is being bumped to 2.5.2 in Ussuri
* Ansible-runner is updated to 1.4.4 in Ussuri

Packages

* Kuryr-kubernetes and networking-odl are pinned in rdoinfo for ussuri as they have dropped python2 support, which made the package builds fail in CentOS7. These will be unpinned once RDO moves to CentOS8.

Other

* CentOS 8 CBS is still not ready, and there is no ETA yet; we started preparing copr for RDO deps as OpenStack upstream has started dropping py2 support:
  * https://review.rdoproject.org/etherpad/p/rebuild-deps-centos8
  * https://copr.fedorainfracloud.org/coprs/g/openstack-sig/centos8-deps/packages/
* Migration from legacy to native zuulv3 jobs is in progress:
  * https://review.rdoproject.org/etherpad/p/rdo-zuulv3-migration
  * https://review.rdoproject.org/r/#/q/topic:zuulv3-migration

On behalf of RDO team
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rleander at redhat.com Tue Dec 3 14:38:37 2019
From: rleander at redhat.com (Rain Leander)
Date: Tue, 3 Dec 2019 15:38:37 +0100
Subject: [rdo-users] [newsletter] December 2019 RDO Community Newsletter
Message-ID:

Having difficulty with the formatting of this message? See it online at https://www.rdoproject.org/newsletter/2019/december

--

We're solidly setting sail on the Ussuri river for the next six months - soon we'll arrive at our first port of call, the RDO Test Day of Milestone ONE!
In the meantime, we're trying to avoid troubled waters as we shift the RDO Community meeting time EARLIER by one whole hour. And initial navigation has been mapped via last month's Project Teams Gathering - if you didn't get a chance to attend, the reports are arriving on the openstack-discuss list.

Housekeeping Items

Want to Help Us Prep for RDO Ussuri Test Day Milestone ONE?

We are conducting an RDO test day on 19 and 20 December 2019. This will be coordinated through the *#rdo channel on Freenode*, via http://rdoproject.org/testday/ussuri/milestone1/ and the dev at lists.rdoproject.org mailing list. We'll be testing the first Ussuri milestone release. If you can do any testing on your own ahead of time, that will help ensure that everyone isn't encountering the same problems. If you're keen to help set up, mentor, or debrief, please reach out to leanderthal on Freenode IRC #rdo and #tripleo.

RDO Changes

RDO Community Meeting Shifts Times

After two weeks of discussion on the mailing list and within the weekly meeting, the RDO Community has shifted its weekly IRC meeting time one hour *EARLIER*, so it begins at 1400 UTC / 1500 CET / 1000 EDT / 0730 IST as of Wednesday 5th December 2019. Many thanks to everyone who collaborated to make this change!

Community News

Community Meetings

Every Tuesday at 13:30 UTC, we have a weekly *TripleO CI community meeting* on https://meet.google.com/bqx-xwht-wky with the agenda on https://hackmd.io/IhMCTNMBSF6xtqiEd9Z0Kw. The TripleO CI meeting brings together a group of people focused on Continuous Integration tooling and systems who would like to provide a comprehensive testing framework that is easily reproducible for TripleO contributors. This framework should also be consumable by other CI systems (OPNFV, RDO, vendor CI, etc.), so that TripleO can be tested the same way everywhere. This is NOT a place for TripleO usage questions; rather, check out the next meeting listed just below.
Every Tuesday at 14:00 UTC, immediately following the TripleO CI meeting, is the weekly *TripleO Community meeting* on the #TripleO channel on Freenode IRC. The agenda for this meeting is posted each week in a public etherpad. This is for addressing anything to do with TripleO, including usage, feature requests, and bug reports.

Every Wednesday at 14:00 UTC, we have a weekly *RDO community meeting* on the #RDO channel on Freenode IRC. The agenda for this meeting is posted each week in a public etherpad and the minutes from the meeting are posted on the RDO website. If there's something you'd like to see happen in RDO - a package that is missing, a tool that you'd like to see included, or a change in how things are governed - this is the best time and place to help make that happen.

Every Thursday at 15:00 UTC, there is a weekly *CentOS Cloud SIG meeting* on the #centos-devel channel on Freenode IRC. The agenda for this meeting is posted each week in a public etherpad and the minutes from the meeting are posted on the RDO website. This meeting makes sense for people that are involved in packaging OpenStack for CentOS and for people that are packaging OTHER cloud infra things (OpenNebula, CloudStack, Euca, etc.) for CentOS.

"Alone we can do so little; together we can do so much." - Helen Keller

OpenStack News

Project Teams Gathering Reports Are Here

Several OpenStack project teams, SIGs and working groups met during the Project Teams Gathering in Shanghai to prepare the Ussuri development cycle. Reports are starting to be posted to the openstack-discuss mailing-list. Here are the ones that are posted so far:

- TripleO PTG Summary
- Glance PTG Summary
- Neutron PTG Summary
- Oslo PTG Summary on the openstack-discuss mailing list and bnemec's blog post "Oslo in Shanghai"
- Octavia PTG Summary
- Ironic PTG Summary
- Keystone PTG Summary and Colleen's Shanghai Open Infrastructure Forum and PTG

Neutron Needs YOU!
Sławek Kapłoński, the Neutron PTL, recently reported that neutron-fwaas, neutron-vpnaas, neutron-bagpipe and neutron-bgpvpn are lacking interested maintainers. The Neutron team will drop those modules from future official OpenStack releases if nothing changes by the ussuri-2 milestone, February 14. If you are using those features and would like to step up to help, now is your chance!

What's In A Name

We are looking for a name for the "V" release of OpenStack, to follow the Ussuri release. Learn more about it in this post by Sean McGinnis.

Fancy a Cup of Tea?

Why, yes, please - with plenty of milk and two sugars. The next OpenStack Ops meetup is in London, UK on 7-8 January.

Recent and Upcoming Events

Open Infrastructure Summit Shanghai

Attendees from over 45 countries attended the Open Infrastructure Summit last month, hosted in Shanghai and followed by the Project Teams Gathering (PTG). Use cases, tutorials, and demos covering 40+ open source projects including Airship, Ceph, Hadoop, Kata Containers, Kubernetes, OpenStack, StarlingX, and Zuul were featured at the Summit. Summit keynote videos are already available, and breakout videos will be available on the Open Infrastructure videos page in the upcoming weeks.

Call For Papers

There are a handful of relevant conferences with open CFPs. Need help figuring out a good topic or finalizing your abstract? Feel free to reach out in the #RDO channel on IRC.

- *O?e\n conf Athens Greece 20-21 March 2020* is an annual technical conference organized by the Greek SW development and IT ecosystem together with Nokia Hellas. Its vision is to interconnect and foster the tech ecosystem in Greece - in the Software Engineering and IT sector - so that it helps it gain a prominent position, evolving into a big draw for other companies in the area, while contributing to the reduction of brain drain.
  CFP closes 25 November 2019 12:00 UTC
- *Indy Cloud Conf Indianapolis Indiana USA 26-27 March 2020* focuses on cloud solutions for DevOps, Machine Learning and IoT. There will be 3 tracks of talks: DevOps, Machine Learning / AI / Big Data, and Hardware / IoT. Join us for this focused day of cloud architecture, meet fellow techies, and advance your knowledge of cloud computing.
  CFP closes December 21, 2019 05:00 UTC
- *DevOpsDays Prague 18-19 March 2020* will be the first occurrence of this volunteer-driven worldwide series in the Czech Republic. It will take place on March 18-19 (with a possible extension to the 17th for workshops) at the City Conference Center. We are excited to bring this technical conference, covering topics of Cloud Native, Containers and Microservices, and the Lean and DevOps approach, to the heart of Europe.
  CFP closes 31 December 2019 00:12 UTC

Other Events

Other RDO events, including the many OpenStack meetups around the world, are always listed on the RDO events page. If you have an RDO-related event, please feel free to add it by submitting a pull request on Github.

Keep in Touch

There are lots of ways to stay in touch with what's going on in the RDO community. The best ways are:

WWW
- RDO
- OpenStack Q&A

Mailing Lists
- Dev List
- Users List
- This newsletter
- CentOS Cloud SIG List
- OpenShift on OpenStack SIG List

IRC on Freenode.irc.net
- RDO Project #rdo
- TripleO #tripleo
- CentOS Cloud SIG #centos-devel

Social Media
- Twitter
- Facebook
- Youtube

As always, thanks for being part of the RDO community!

--
K Rain Leander
OpenStack Community Liaison
Open Source Program Office
https://www.rdoproject.org/
http://community.redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
_______________________________________________
newsletter mailing list
newsletter at lists.rdoproject.org
http://lists.rdoproject.org/mailman/listinfo/newsletter
To unsubscribe: newsletter-unsubscribe at lists.rdoproject.org

From yatinkarel at gmail.com Mon Dec 9 06:40:39 2019
From: yatinkarel at gmail.com (YATIN KAREL)
Date: Mon, 9 Dec 2019 12:10:39 +0530
Subject: [rdo-users] [rdo-dev] [RDO] Weekly status for 2019-12-06
Message-ID:

Promotions

* Latest promotions (TripleO CI) for Stein from 4th December, Train from 3rd December and Master from 29th November. The Ussuri release is facing issues due to dropping py2 support; a recent issue is caused by https://review.opendev.org/#/c/691874
  * https://bugs.launchpad.net/tripleo/+bug/1855655
* Master and Stein are facing intermittent issues due to performance and deployments getting timed out; this is currently being worked on. Currently mainly seen in the RDO stein phase1 TripleO job.
  * https://bugs.launchpad.net/tripleo/+bug/1844446

Deps Update

* OVN and OpenVswitch are being bumped to 2.12 in Ussuri
  * https://review.rdoproject.org/r/#/c/23355/
* Python-amqp is bumped to 2.5.2 in Ussuri

Packages

* Ironic, ironic-python-agent, ironic-staging-drivers and networking-baremetal are pinned in rdoinfo for ussuri as they have dropped python2 support, which made the package builds fail in CentOS7. These will be unpinned once RDO moves to CentOS8.
* New package sushy-oem-idrac is being added in Ussuri
* New package networking-omnipath is being added in Ussuri

Other

* CentOS 8 CBS is still not ready, and there is no ETA yet; we started preparing copr for RDO deps as OpenStack upstream has started dropping py2 support:
  * https://review.rdoproject.org/etherpad/p/rebuild-deps-centos8
  * https://copr.fedorainfracloud.org/coprs/g/openstack-sig/centos8-deps/packages/
* A new job for requirements sync (rdopkg reqcheck) is being added to distgit projects
  * https://review.rdoproject.org/r/#/q/topic:test_reqcheck
* Migration from legacy to native zuulv3 jobs is in progress:
  * https://review.rdoproject.org/etherpad/p/rdo-zuulv3-migration
  * https://review.rdoproject.org/r/#/q/topic:zuulv3-migration

On behalf of RDO
_______________________________________________
dev mailing list
dev at lists.rdoproject.org
http://lists.rdoproject.org/mailman/listinfo/dev
To unsubscribe: dev-unsubscribe at lists.rdoproject.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rleander at redhat.com Tue Dec 10 11:04:27 2019
From: rleander at redhat.com (Rain Leander)
Date: Tue, 10 Dec 2019 12:04:27 +0100
Subject: [rdo-users] [Ask OpenStack] 7 updates about "neutron", "delete", "administrator", "no", "ubuntu-18.04" and more
Message-ID:

Ask OpenStack has these updates, please have a look:

- create_instance.py Delete Volume on Instance Delete (new question)
- Install all-in-one without internet (new question)
- How to Delete Orphaned Router Port (3 rev)
- Instance of Ubuntu 18 cloud image stuck in booting (2 rev, 1 ans, 1 ans rev)
- neutron.plugins.ml2.drivers.agent._common_agent KeyError: 'gateway' (new question)
- Stein - Kolla - Neutron router ports are down - TooManyExternalNetworks (new question, 3 ans, 4 ans rev)
- octavia distribution Algorithm (new question, 1 ans, 1 ans rev)

And, as always, thanks for being a part of the RDO Community!
-- 
K Rain Leander
OpenStack Community Liaison
Open Source Program Office
https://www.rdoproject.org/
http://community.redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rleander at redhat.com Tue Dec 10 11:13:09 2019
From: rleander at redhat.com (Rain Leander)
Date: Tue, 10 Dec 2019 12:13:09 +0100
Subject: [rdo-users] Upcoming Meetups!
Message-ID: 

The following are the meetups I'm aware of over the next two weeks where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/events

If there's a meetup in your area, it'd be super keen if you attended, took a few pictures and especially wrote up a summary of what was covered. And, as always, if you give me enough notice, I can send swag along with you.

~Rain.

OpenInfra Meetup - Mail.ru
OpenStack & OpenInfra Russia Moscow
Mail.Ru Group office, Leningradsky Prospekt 39, bld. 79 - Moscow
Thu 12 Dec 2019 5:30pm - 9:00pm UTC
https://www.meetup.com/OpenStack-Russia/events/266966369/

Indonesia OpenStack Meetup #7 - Jakarta
Indonesia OpenStack User Group
Satrio Tower Building, Jl. Prof. DR. Satrio No.RT.7, RW.2 - Kota Jakarta Selatan
Thu 12 Dec 2019 12:00pm - 2:00pm UTC
https://www.meetup.com/Indonesia-OpenStack-User-Group/events/266966431/

Meetup #1 - Roadmap 202
OpenInfra Lower Saxony
Hochschule Osnabrück - Campus Lingen
Kaiserstraße 10C - Lingen (Ems)
Tue 17 Dec 2019 6:00pm - 8:00pm UTC
https://www.meetup.com/OpenInfra-LowerSaxony/events/266801333/

-- 
K Rain Leander
OpenStack Community Liaison
Open Source Program Office
https://www.rdoproject.org/
http://community.redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yatinkarel at gmail.com Wed Dec 11 16:51:33 2019
From: yatinkarel at gmail.com (YATIN KAREL)
Date: Wed, 11 Dec 2019 22:21:33 +0530
Subject: [rdo-users] RDO Meeting in Christmas/New Year Week
Message-ID: 

Hi all,

In today's RDO meeting [1] we discussed cancelling the weekly meetings that fall on Christmas and New Year's Day. We agreed not to hold the meeting on 25th December; for 1st January, we are thinking of holding the meeting on 2nd January instead if people are available, otherwise we will cancel that one as well. Please raise your hand if you would like to meet on 2nd January, 2020.

Merry Christmas and Happy New Year in advance.

[1] http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_12_11/2019/rdo_meeting___2019_12_11.2019-12-11-14.03.txt

Thanks and regards
Yatin Karel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yatinkarel at gmail.com Thu Dec 12 05:02:48 2019
From: yatinkarel at gmail.com (YATIN KAREL)
Date: Thu, 12 Dec 2019 10:32:48 +0530
Subject: [rdo-users] [Meeting] RDO meeting (2019-12-11) minutes
Message-ID: 

==============================
#rdo: RDO meeting - 2019-12-11
==============================

Meeting started by ykarel at 14:03:26 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_12_11/2019/rdo_meeting___2019_12_11.2019-12-11-14.03.log.html .
Meeting summary
---------------
* roll call (ykarel, 14:04:25)
* [amoralej/ykarel] CentOS8 Updates (ykarel, 14:07:51)
  * an RDO Trunk bootstrap rehearsal has been done at http://38.145.34.66/centos8-master/report.html (amoralej, 14:09:23)
  * dependencies are done in copr https://copr.fedorainfracloud.org/coprs/g/openstack-sig/centos8-deps/ (amoralej, 14:09:47)
  * testing devstack with CentOS8 in https://review.opendev.org/#/c/688614/ (amoralej, 14:10:30)
  * testing packstack and p-o-i with CentOS8 in https://review.opendev.org/#/q/topic:rdo-centos8 (amoralej, 14:11:01)
  * testing kolla with CentOS8 in https://review.opendev.org/#/c/692368/ (amoralej, 14:11:24)
  * LINK: https://lists.rdoproject.org/pipermail/dev/2019-December/009219.html (jpena, 14:13:01)
  * LINK: https://lists.rdoproject.org/pipermail/dev/2019-December/009219.html (ykarel, 14:13:43)
  * ACTION: package maintainers to review https://review.rdoproject.org/r/#/c/22394/ (amoralej, 14:14:27)
  * ACTION: amoralej to send a mail about centos8 status (amoralej, 14:22:17)
* oslo.messaging bug (ykarel, 14:26:59)
  * oslo.messaging 10.4.0 has introduced a bug in non-RabbitMQ drivers (ykarel, 14:27:54)
  * a revert of the breaking change is about to merge: https://review.opendev.org/#/c/698090/ (amoralej, 14:29:39)
* Meetings during xmas/new year week? (ykarel, 14:35:28)
  * ACTION: ykarel to send mail about xmas/new year RDO meeting (ykarel, 14:40:22)
* Chair for next meeting? (ykarel, 14:41:06)
  * ACTION: amoralej to chair next week (ykarel, 14:41:33)
* open floor (ykarel, 14:41:57)

Meeting ended at 14:47:44 UTC.
Action items, by person
-----------------------
* amoralej
  * amoralej to send a mail about centos8 status
  * amoralej to chair next week
* ykarel
  * ykarel to send mail about xmas/new year RDO meeting

People present (lines said)
---------------------------
* amoralej (68)
* ykarel (50)
* jcapitao (10)
* jpena (9)
* openstack (6)
* weshay (5)
* rdogerrit (1)
* sfbender (1)

Generated by `MeetBot`_ 0.1.4
_______________________________________________
users mailing list
users at lists.rdoproject.org
http://lists.rdoproject.org/mailman/listinfo/users
To unsubscribe: users-unsubscribe at lists.rdoproject.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amoralej at redhat.com Mon Dec 16 11:39:11 2019
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Mon, 16 Dec 2019 12:39:11 +0100
Subject: [rdo-users] [RDO] Weekly status for 2019-12-13
Message-ID: 

Promotions
* Latest promotions (TripleO CI): Stein from 13th December, Train from 12th December and Master from 9th December.
* There are some intermittent issues in some TripleO jobs in master.
* Master and Stein are facing intermittent issues caused by performance problems and deployments timing out; work is ongoing to improve this. Currently seen mainly in the RDO Stein phase1 TripleO job.
  * https://bugs.launchpad.net/tripleo/+bug/1844446

Deps Update
* OVN and Open vSwitch are both being bumped to 2.12 in Train
  * https://review.rdoproject.org/r/#/c/23960/
* Python-amqp is bumped to 2.5.2 in Ussuri

Packages
* Python-tempestconf is updated to 2.4.0 in Ussuri
* New package sushy-oem-idrac is added in Ussuri and Train
* New package networking-omnipath is being added in Ussuri

Other
* CentOS8 Preparation:
  * Dependencies have been built in the copr repo:
    * https://copr.fedorainfracloud.org/coprs/g/openstack-sig/centos8-deps/packages/
    * https://review.rdoproject.org/etherpad/p/rebuild-deps-centos8
  * A new DLRN instance for CentOS8 workers has been created.
  * DLRN worker for CentOS8 on Train has been created and is being bootstrapped:
    * https://trunk.rdoproject.org/centos8-train/report.html
  * Bootstrap of the DLRN worker for CentOS8 following master is pending on the componentization review:
    * https://review.rdoproject.org/r/#/c/22394/
  * CentOS8 is being tested with devstack, packstack and puppet-openstack-integration using dependencies from copr and a temporary DLRN worker.
  * Work is in progress to add CI jobs to test distgit changes using CentOS8.
* A new non-voting job which checks requirements (rdopkg reqcheck) is added in Distgit projects
  * https://review.rdoproject.org/r/#/q/topic:test_reqcheck

On behalf of RDO
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rleander at redhat.com Wed Dec 18 09:42:32 2019
From: rleander at redhat.com (Rain Leander)
Date: Wed, 18 Dec 2019 10:42:32 +0100
Subject: [rdo-users] [Ask OpenStack] 7 updates about "nova", "cinder-manage", "installation", "redhat", "nic" and more
Message-ID: 

Ask OpenStack has these updates, please have a look:
- Failed to call refresh: 'cinder-manage db sync' (new question)
- Tripleo and deployer provided data (vendor_data2) (new question)
- Nova services listing in stein (2 rev, 2 ans, 2 ans rev)
- How do I plan Openstack Queens Shutdown activity ? (3 rev, 1 ans, 1 ans rev)
- openstack installation (new question, 1 ans, 1 ans rev)
- How to attach more PCI bus to instance? (new question)
- docker run --net test_net fails (new question, 1 ans, 2 ans rev)

And, as always, thanks for being a part of the RDO Community!

-- 
K Rain Leander
OpenStack Community Liaison
Open Source Program Office
https://www.rdoproject.org/
http://community.redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amoralej at redhat.com Wed Dec 18 14:42:32 2019
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Wed, 18 Dec 2019 15:42:32 +0100
Subject: [rdo-users] [Meeting] RDO meeting (2019-12-18) minutes
Message-ID: 

==============================
#rdo: RDO meeting - 2019-12-18
==============================

Meeting started by amoralej at 14:00:36 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2019_12_18/2019/rdo_meeting___2019_12_18.2019-12-18-14.00.log.html .

Meeting summary
---------------
* roll call (amoralej, 14:01:04)
* CentOS8 update (amoralej, 14:06:00)
  * centos8-train is bootstrapped: https://trunk.rdoproject.org/centos8-train/report.html (amoralej, 14:06:30)
  * centos8-master bootstrap is ongoing with components: https://trunk.rdoproject.org/centos8-master/report.html (amoralej, 14:06:56)
  * RDO CI is already gating updates in train-rdo branches and the train tag with CentOS8 (amoralej, 14:08:17)
  * Patches to add centos8 to p-o-i and packstack: https://review.opendev.org/#/q/topic:rdo-centos8+(status:open+OR+status:merged) (amoralej, 14:08:35)
  * Working to add support for centos8 in weirdo: https://review.rdoproject.org/r/#/q/topic:weirdo-c8 (amoralej, 14:08:52)
* Ussuri M1 Test Days are 19-20 December; please watch for testers on irc #rdo / mailing lists 19-20 December. THANK YOU! (amoralej, 14:19:41)
  * LINK: http://rdoproject.org/testday/ussuri/milestone1/ (amoralej, 14:20:09)
* Next meeting date? 25 December? 01 January? 08 January? (amoralej, 14:23:31)
  * AGREED: next RDO meeting will be on Jan 8th 2020 (amoralej, 14:28:27)
* chair for next meeting (amoralej, 14:29:49)
  * ACTION: ykarel to chair next meeting (amoralej, 14:30:15)
* open floor (amoralej, 14:30:22)

Meeting ended at 14:38:01 UTC.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yatinkarel at gmail.com Mon Dec 30 12:32:05 2019
From: yatinkarel at gmail.com (YATIN KAREL)
Date: Mon, 30 Dec 2019 18:02:05 +0530
Subject: [rdo-users] [rdo-dev] [RDO] Weekly status for 2019-12-27
Message-ID: 

Promotions:
* Latest promotions (TripleO CI): Stein from 24th December, Train from 26th December and Master from 19th December.
* Master has a couple of blocker issues.

Packages:
* Cinder-tempest-plugin is updated in Train/Ussuri
* Tobiko is pinned in Ussuri as it has dropped python2 support
* Tobiko is pinned in Train to avoid breakage caused by master commits now that Train is released; the maintainer can ask to unpin it if they want to support master commits in Train
* New package tripleo-operator-ansible is being added in Ussuri
* New package networking-omnipath is being added in Ussuri

CentOS8 Preparation:
* DLRN worker for CentOS8 on master is consistent and running: https://trunk.rdoproject.org/centos8-master/report.html
* DLRN worker for CentOS8 on Train is consistent and running: https://trunk.rdoproject.org/centos8-train/report.html
* Dependencies are synchronized to trunk.rdoproject.org in centos8-master.
* Changes in distgits and rdoinfo updates are being gated with DLRN builds on CentOS8.
* Weirdo CentOS8 jobs are being added in RDO for gating packages and dependencies.
* Review to add CentOS8 dependencies is WIP:
  * https://review.rdoproject.org/r/#/c/24274/
* CentOS8 jobs are added in the packstack check pipeline.
* CentOS8 support in p-o-i is being added: https://review.opendev.org/#/c/698142/

On behalf of RDO
_______________________________________________
dev mailing list
dev at lists.rdoproject.org
http://lists.rdoproject.org/mailman/listinfo/dev
To unsubscribe: dev-unsubscribe at lists.rdoproject.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
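
The DLRN workers referenced throughout these status reports (centos8-master, centos8-train) publish yum `.repo` files alongside their report.html pages, and consumers point their package manager at the `baseurl` inside. As a minimal sketch of reading such a file programmatically (the repo contents below are an illustrative sample, not fetched from the actual workers, and the hash in the section name is a placeholder):

```python
# Sketch: extract baseurls from a DLRN-style .repo file.
# SAMPLE_REPO is illustrative only; real files come from the
# trunk.rdoproject.org workers mentioned in the status reports.
from configparser import ConfigParser
from io import StringIO

SAMPLE_REPO = """\
[delorean]
name=delorean-openstack-nova-abc123
baseurl=https://trunk.rdoproject.org/centos8-train/current/
enabled=1
gpgcheck=0
priority=1
"""

def repo_baseurls(repo_text):
    """Return {section_name: baseurl} for every section in a .repo file."""
    parser = ConfigParser()
    # .repo files use INI syntax, so configparser handles them directly.
    parser.read_file(StringIO(repo_text))
    return {section: parser[section]["baseurl"] for section in parser.sections()}

print(repo_baseurls(SAMPLE_REPO))
```

In practice one would download the worker's repo file (e.g. the `delorean.repo` under a promotion link) and drop it into `/etc/yum.repos.d/`; the parsing above is only a sketch of the file format.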