Ussuri RDO Release Announcement
by Amy Marrich
If you're having trouble with the formatting, this release announcement is
available online at https://blogs.rdoproject.org/2020/05/rdo-ussuri-released/
---
*RDO Ussuri Released*
The RDO community is pleased to announce the general availability of the
RDO build for OpenStack Ussuri for RPM-based distributions, CentOS Linux
and Red Hat Enterprise Linux. RDO is suitable for building private, public,
and hybrid clouds. Ussuri is the 21st release from the OpenStack project,
which is the work of more than 1,000 contributors from around the world.
The release is already available on the CentOS mirror network at
http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/.
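For anyone who wants to try the packages straight away, here is a minimal
sketch of enabling the repository on a CentOS 8 machine. It assumes the
centos-release-openstack-ussuri release package (mentioned in the meeting
minutes further down) and the PowerTools repository id used by CentOS 8, so
double-check both against the RDO documentation for your system:

    # Sketch: enable the RDO Ussuri repository on CentOS 8
    sudo dnf install -y centos-release-openstack-ussuri
    # Some dependencies live in PowerTools; the repo id may be "PowerTools"
    # or "powertools" depending on your CentOS 8 minor release.
    sudo dnf config-manager --set-enabled powertools
    sudo dnf update -y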
The RDO community project curates, packages, builds, tests and maintains a
complete OpenStack component set for RHEL and CentOS Linux and is a member
of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG
focuses on delivering a great user experience for CentOS Linux users
looking to build and maintain their own on-premise, public or hybrid clouds.
All work on RDO and on the downstream release, Red Hat OpenStack Platform,
is 100% open source, with all code changes going upstream first.
PLEASE NOTE: At this time, RDO Ussuri provides packages for CentOS 8 only.
Please use the previous release, Train, for CentOS 7 and Python 2.7.
*Interesting things in the Ussuri release include:*
- Within the Ironic project, the bare metal service capable of managing
and provisioning physical machines in a security-aware and fault-tolerant
manner, UEFI and device selection are now available for Software RAID.
- The Kolla project, the containerised deployment of OpenStack used to
provide production-ready containers and deployment tools for operating
OpenStack clouds, streamlined the configuration of external [Ceph](
https://ceph.io/) integration, making it easy to go from a Ceph cluster
deployed with Ceph-Ansible to enabling it in OpenStack.
*Other improvements include:*
- Support for IPv6 is available within the Kuryr project, the bridge
between container framework networking models and OpenStack networking
abstractions.
- Other highlights of the broader upstream OpenStack project may be read
via https://releases.openstack.org/ussuri/highlights.html.
- A new Neutron driver, networking-omnipath, has been included in the RDO
distribution; it enables the Omni-Path switching fabric in OpenStack
clouds.
- The OVN Neutron driver has been merged into the main Neutron repository
from networking-ovn.
*Contributors*
During the Ussuri cycle, we saw the following new RDO contributors:
- Amol Kahat
- Artom Lifshitz
- Bhagyashri Shewale
- Brian Haley
- Dan Pawlik
- Dmitry Tantsur
- Dougal Matthews
- Eyal
- Harald Jensås
- Kevin Carter
- Lance Albertson
- Martin Schuppert
- Mathieu Bultel
- Matthias Runge
- Miguel Garcia
- Riccardo Pittau
- Sagi Shnaidman
- Sandeep Yadav
- SurajP
- Toure Dunnon
Welcome to all of you and Thank You So Much for participating!
But we wouldn’t want to overlook anyone. A super massive Thank You to all 54
contributors who participated in producing this release. This list
includes commits to rdo-packages and rdo-infra repositories:
- Adam Kimball
- Alan Bishop
- Alan Pevec
- Alex Schultz
- Alfredo Moralejo
- Amol Kahat
- Artom Lifshitz
- Arx Cruz
- Bhagyashri Shewale
- Brian Haley
- Cédric Jeanneret
- Chandan Kumar
- Dan Pawlik
- David Moreau Simard
- Dmitry Tantsur
- Dougal Matthews
- Emilien Macchi
- Eric Harney
- Eyal
- Fabien Boucher
- Gabriele Cerami
- Gael Chamoulaud
- Giulio Fidente
- Harald Jensås
- Jakub Libosvar
- Javier Peña
- Joel Capitao
- Jon Schlueter
- Kevin Carter
- Lance Albertson
- Lee Yarwood
- Marc Dequènes (Duck)
- Marios Andreou
- Martin Mágr
- Martin Schuppert
- Mathieu Bultel
- Matthias Runge
- Miguel Garcia
- Mike Turek
- Nicolas Hicher
- Rafael Folco
- Riccardo Pittau
- Ronelle Landy
- Sagi Shnaidman
- Sandeep Yadav
- Soniya Vyas
- Sorin Sbarnea
- SurajP
- Toure Dunnon
- Tristan de Cacqueray
- Victoria Martinez de la Cruz
- Wes Hayutin
- Yatin Karel
- Zoltan Caplovic
*The Next Release Cycle*
At the end of one release, focus shifts immediately to the next, Victoria,
which has an estimated GA the week of 12-16 October 2020. The full schedule
is available at https://releases.openstack.org/victoria/schedule.html.
Twice during each release cycle, RDO hosts official Test Days shortly after
the first and third milestones; therefore, the upcoming test days are 25-26
June 2020 for Milestone One and 17-18 September 2020 for Milestone Three.
*Get Started*
There are three ways to get started with RDO.
To spin up a proof-of-concept cloud quickly and on limited hardware, try
an All-In-One Packstack installation. You can run RDO on a single node to
get a feel for how it works.
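As a rough illustration, and assuming the Ussuri repository from the CentOS
Cloud SIG is already enabled as sketched earlier in this announcement, an
all-in-one run on a fresh CentOS 8 node looks something like the following;
check the Packstack documentation for prerequisites such as network and
firewall configuration before running it:

    # Rough all-in-one Packstack sketch (verify against the RDO quickstart docs)
    sudo dnf update -y
    sudo dnf install -y openstack-packstack
    # Generates an answer file and deploys all services on this single node
    sudo packstack --allinone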
For a production deployment of RDO, use the TripleO Quickstart and you’ll
be running a production cloud in short order.
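As an illustrative sketch only, a typical TripleO Quickstart run against a
virtualization host looks roughly like this; $VIRTHOST is a placeholder for
your own host, and the flags should be checked against the tripleo-quickstart
documentation:

    # Illustrative TripleO Quickstart invocation against a virt host you control
    git clone https://opendev.org/openstack/tripleo-quickstart
    cd tripleo-quickstart
    bash quickstart.sh --install-deps          # install local dependencies
    bash quickstart.sh --release ussuri $VIRTHOST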
Finally, for those that don’t have any hardware or physical resources,
there’s the OpenStack Global Passport Program. This is a collaborative
effort between OpenStack public cloud providers to let you experience the
freedom, performance and interoperability of open source infrastructure.
You can quickly and easily gain access to OpenStack infrastructure via
trial programs from participating OpenStack public cloud providers around
the world.
*Get Help*
The RDO Project participates in a Q&A service at https://ask.openstack.org.
We also have our users@lists.rdoproject.org mailing list for RDO-specific
users and operators. For more developer-oriented content we recommend
joining the dev@lists.rdoproject.org mailing list. Remember to post a brief
introduction about yourself and your RDO story. The mailing list archives
are all available at https://mail.rdoproject.org. You can also find
extensive documentation on RDOproject.org.
The #rdo channel on Freenode IRC is also an excellent place to find and
give help.
We also welcome comments and requests on the CentOS devel mailing list and
the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo
on irc.freenode.net), however we have a more focused audience within the
RDO venues.
*Get Involved*
To get involved in the OpenStack RPM packaging effort, check out the RDO
contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO
packaging documentation.
Join us in #rdo and #tripleo on the Freenode IRC network and follow us on
Twitter @RDOCommunity. You can also find us on Facebook and YouTube.
Amy Marrich (spotz)
https://www.rdoproject.org
http://community.redhat.com
[Meeting] RDO meeting (2020-05-27) minutes
by Alfredo Moralejo Alonso
==============================
#rdo: RDO meeting - 2020-05-27
==============================
Meeting started by amoralej at 14:03:05 UTC. The full logs are
available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_05_27/2020/r...
Meeting summary
---------------
* roll call (amoralej, 14:03:59)
* Ussuri GA status update (amoralej, 14:06:25)
* TripleO GA packages have been pushed to CentOS mirrors
https://review.rdoproject.org/r/#/c/27785/ (amoralej, 14:11:47)
* RDO Ussuri GA announcement will be sent out today or tomorrow at the
latest (amoralej, 14:12:07)
* ACTION: amoralej to update
https://www.rdoproject.org/install/packstack/ (amoralej, 14:15:52)
* LINK:
https://review.rdoproject.org/etherpad/p/tripleo-ussuri-standalone-clouds...
(ykarel, 14:16:00)
* next week's chair (amoralej, 14:27:21)
* ACTION: jcapitao to chair next week (amoralej, 14:29:17)
* AGREED: to cancel next week meeting to avoid conflicts with PTG
(amoralej, 14:37:52)
* next RDO meeting will be on June the 10th (amoralej, 14:38:16)
* open floor (amoralej, 14:40:41)
Meeting ended at 14:43:41 UTC.
Action items, by person
-----------------------
* amoralej
* amoralej to update https://www.rdoproject.org/install/packstack/
* jcapitao
* jcapitao to chair next week
People present (lines said)
---------------------------
* amoralej (64)
* spotz (17)
* ykarel (14)
* jcapitao (7)
* openstack (4)
Generated by `MeetBot`_ 0.1.4
[Meeting] RDO meeting (2020-05-20) minutes
by YATIN KAREL
==============================
#rdo: RDO meeting - 2020-05-20
==============================
Meeting started by amoralej at 14:03:53 UTC. The full logs are
available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_05_20/2020/r...
Meeting summary
---------------
* Ussuri GA status update (ykarel, 14:08:29)
* waiting for new releases for tripleo and ansible roles (amoralej,
14:09:14)
* tripleo expects to do their releases between 21st and 27th may
(amoralej, 14:09:44)
* LINK: https://review.opendev.org/#/q/topic:tripleo-ussuri-release
(ykarel, 14:10:45)
* we need a new release of puppet pacemaker
https://review.opendev.org/#/c/729606/ and
https://review.opendev.org/#/c/729537/ (amoralej, 14:10:55)
* LINK:
https://review.rdoproject.org/etherpad/p/tripleo-ussuri-standalone-clouds...
(ykarel, 14:13:45)
* we also need the centos-release-openstack-ussuri rpm to be built and
published in the CentOS 8 Extras repo (ykarel, 14:15:09)
* LINK:
https://git.centos.org/rpms/centos-release-openstack/tree/c8-sig-cloud-op...
(ykarel, 14:15:12)
* Next Week's Chair? (ykarel, 14:20:42)
* ACTION: amoralej to chair next week (ykarel, 14:21:23)
* open floor (ykarel, 14:21:36)
Meeting ended at 14:56:19 UTC.
Action items, by person
-----------------------
* amoralej
* amoralej to chair next week
People present (lines said)
---------------------------
* amoralej (51)
* ykarel (46)
* spotz (11)
* radez (9)
* openstack (8)
* iurygregory (6)
* jcapitao (4)
* jbrooks (4)
* EmilienM (1)
Generated by `MeetBot`_ 0.1.4
Neutron failing to bind port when launching instances
by Sam Kidder
Hi all! I installed Packstack using "# packstack --answer-file=answers.txt" and it brought up instances OK. Whenever I attempt to launch an instance after a reboot, I get an error in Nova that points me to Neutron.
Nova Log:
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager [req-89d70554-3ed5-47bc-a0af-d62616cae563 38ee4149b19e4c1d8712aa459891c5c3 5f893ea237cd411791ec53d3a1339d19 - default default] Instance failed network setup after 1 attempt(s): PortBindingFailed: Binding failed for port f214b130-489a-479e-a9a9-f59c91227e5f, please check neutron logs for more information.
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager Traceback (most recent call last):
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1618, in _allocate_network_async
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager resource_provider_mapping=resource_provider_mapping)
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1130, in allocate_for_instance
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager bind_host_id, available_macs, requested_ports_dict)
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1263, in _update_ports_for_instance
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager vif.destroy()
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager self.force_reraise()
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager six.reraise(self.type_, self.value, self.tb)
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1233, in _update_ports_for_instance
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager port_client, instance, port_id, port_req_body)
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 582, in _update_port
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager _ensure_no_port_binding_failure(port)
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 252, in _ensure_no_port_binding_failure
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager raise exception.PortBindingFailed(port_id=port['id'])
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager PortBindingFailed: Binding failed for port f214b130-489a-479e-a9a9-f59c91227e5f, please check neutron logs for more information.
2020-05-15 09:58:39.323 2550 ERROR nova.compute.manager
2020-05-15 09:58:43.070 2550 INFO nova.virt.libvirt.driver [req-89d70554-3ed5-47bc-a0af-d62616cae563 38ee4149b19e4c1d8712aa459891c5c3 5f893ea237cd411791ec53d3a1339d19 - default default] [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Creating image
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [req-89d70554-3ed5-47bc-a0af-d62616cae563 38ee4149b19e4c1d8712aa459891c5c3 5f893ea237cd411791ec53d3a1339d19 - default default] [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Instance failed to spawn: PortBindingFailed: Binding failed for port f214b130-489a-479e-a9a9-f59c91227e5f, please check neutron logs for more information.
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Traceback (most recent call last):
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2594, in _build_resources
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] yield resources
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2353, in _build_and_run_instance
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] block_device_info=block_device_info)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3199, in spawn
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] mdevs=mdevs)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5492, in _get_guest_xml
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] network_info_str = str(network_info)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 595, in __str__
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] return self._sync_wrapper(fn, *args, **kwargs)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 578, in _sync_wrapper
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] self.wait()
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 610, in wait
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] self[:] = self._gt.wait()
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 180, in wait
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] return self._exit_event.wait()
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 132, in wait
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] current.throw(*self._exc)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 219, in main
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] result = function(*args, **kwargs)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/utils.py", line 800, in context_wrapper
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] return func(*args, **kwargs)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1635, in _allocate_network_async
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] six.reraise(*exc_info)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1618, in _allocate_network_async
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] resource_provider_mapping=resource_provider_mapping)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1130, in allocate_for_instance
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] bind_host_id, available_macs, requested_ports_dict)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1263, in _update_ports_for_instance
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] vif.destroy()
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] self.force_reraise()
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] six.reraise(self.type_, self.value, self.tb)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1233, in _update_ports_for_instance
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] port_client, instance, port_id, port_req_body)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 582, in _update_port
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] _ensure_no_port_binding_failure(port)
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 252, in _ensure_no_port_binding_failure
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] raise exception.PortBindingFailed(port_id=port['id'])
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] PortBindingFailed: Binding failed for port f214b130-489a-479e-a9a9-f59c91227e5f, please check neutron logs for more information.
2020-05-15 09:58:43.071 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481]
2020-05-15 09:58:43.071 2550 INFO nova.compute.manager [req-89d70554-3ed5-47bc-a0af-d62616cae563 38ee4149b19e4c1d8712aa459891c5c3 5f893ea237cd411791ec53d3a1339d19 - default default] [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Terminating instance
2020-05-15 09:58:43.178 2550 INFO nova.virt.libvirt.driver [-] [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Instance destroyed successfully.
2020-05-15 09:58:43.192 2550 INFO nova.virt.libvirt.driver [req-89d70554-3ed5-47bc-a0af-d62616cae563 38ee4149b19e4c1d8712aa459891c5c3 5f893ea237cd411791ec53d3a1339d19 - default default] [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Deleting instance files /var/lib/nova/instances/c586c05b-f87b-4bc6-893f-25422a7e9481_del
2020-05-15 09:58:43.193 2550 INFO nova.virt.libvirt.driver [req-89d70554-3ed5-47bc-a0af-d62616cae563 38ee4149b19e4c1d8712aa459891c5c3 5f893ea237cd411791ec53d3a1339d19 - default default] [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Deletion of /var/lib/nova/instances/c586c05b-f87b-4bc6-893f-25422a7e9481_del complete
2020-05-15 09:58:43.238 2550 INFO nova.compute.manager [req-89d70554-3ed5-47bc-a0af-d62616cae563 38ee4149b19e4c1d8712aa459891c5c3 5f893ea237cd411791ec53d3a1339d19 - default default] [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Took 0.06 seconds to destroy the instance on the hypervisor.
2020-05-15 09:58:43.273 2550 INFO nova.compute.manager [-] [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Took 0.03 seconds to deallocate network for instance.
2020-05-15 09:58:44.921 2550 INFO nova.compute.manager [req-89d70554-3ed5-47bc-a0af-d62616cae563 38ee4149b19e4c1d8712aa459891c5c3 5f893ea237cd411791ec53d3a1339d19 - default default] [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Took 1.65 seconds to detach 1 volumes for instance.
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [req-89d70554-3ed5-47bc-a0af-d62616cae563 38ee4149b19e4c1d8712aa459891c5c3 5f893ea237cd411791ec53d3a1339d19 - default default] [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Failed to build and run instance: PortBindingFailed: Binding failed for port f214b130-489a-479e-a9a9-f59c91227e5f, please check neutron logs for more information.
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Traceback (most recent call last):
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2353, in _build_and_run_instance
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] block_device_info=block_device_info)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3199, in spawn
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] mdevs=mdevs)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5492, in _get_guest_xml
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] network_info_str = str(network_info)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 595, in __str__
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] return self._sync_wrapper(fn, *args, **kwargs)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 578, in _sync_wrapper
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] self.wait()
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/model.py", line 610, in wait
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] self[:] = self._gt.wait()
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 180, in wait
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] return self._exit_event.wait()
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 132, in wait
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] current.throw(*self._exc)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 219, in main
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] result = function(*args, **kwargs)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/utils.py", line 800, in context_wrapper
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] return func(*args, **kwargs)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1635, in _allocate_network_async
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] six.reraise(*exc_info)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1618, in _allocate_network_async
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] resource_provider_mapping=resource_provider_mapping)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1130, in allocate_for_instance
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] bind_host_id, available_macs, requested_ports_dict)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1263, in _update_ports_for_instance
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] vif.destroy()
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] self.force_reraise()
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] six.reraise(self.type_, self.value, self.tb)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1233, in _update_ports_for_instance
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] port_client, instance, port_id, port_req_body)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 582, in _update_port
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] _ensure_no_port_binding_failure(port)
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 252, in _ensure_no_port_binding_failure
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] raise exception.PortBindingFailed(port_id=port['id'])
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] PortBindingFailed: Binding failed for port f214b130-489a-479e-a9a9-f59c91227e5f, please check neutron logs for more information.
2020-05-15 09:58:44.998 2550 ERROR nova.compute.manager [instance: c586c05b-f87b-4bc6-893f-25422a7e9481]
2020-05-15 09:58:45.036 2550 INFO nova.compute.manager [req-89d70554-3ed5-47bc-a0af-d62616cae563 38ee4149b19e4c1d8712aa459891c5c3 5f893ea237cd411791ec53d3a1339d19 - default default] [instance: c586c05b-f87b-4bc6-893f-25422a7e9481] Took 0.03 seconds to deallocate network for instance.
Neutron Log:
2020-05-15 09:58:38.626 3266 ERROR neutron.plugins.ml2.managers [req-db7b226d-86ae-44ce-82ec-1fdc2fd8f55c 02e832b6d9f843fc85483e168a9bad69 d8c22bfdc3d94d3e805ea5099799764c - default default] Failed to bind port f214b130-489a-479e-a9a9-f59c91227e5f on host localhost.localdomain for vnic_type normal using segments [{'network_id': 'f2c06235-f72f-433d-9e64-189219bbc2c4', 'segmentation_id': None, 'physical_network': u'extnet', 'id': '5945ae55-7f4b-4d0e-ba98-0bd350e5ca09', 'network_type': u'flat'}]
2020-05-15 09:58:38.626 3266 INFO neutron.plugins.ml2.plugin [req-db7b226d-86ae-44ce-82ec-1fdc2fd8f55c 02e832b6d9f843fc85483e168a9bad69 d8c22bfdc3d94d3e805ea5099799764c - default default] Attempt 2 to bind port f214b130-489a-479e-a9a9-f59c91227e5f
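Since the failing segment is the flat 'extnet' network, my assumption is that
the Open vSwitch agent or its bridge mapping did not come back cleanly after
the reboot. Here is a sketch of the checks I am planning to run; the paths
and service names assume the stock Packstack ML2/Open vSwitch layout, so
please correct me if I should be looking somewhere else:

    # Post-reboot sanity checks (assuming ML2 with the Open vSwitch driver)
    openstack network agent list                        # is the OVS agent alive on this host?
    systemctl status neutron-openvswitch-agent          # did the agent start after reboot?
    grep -r bridge_mappings /etc/neutron/plugins/ml2/   # is extnet mapped, e.g. extnet:br-ex?
    ovs-vsctl show                                      # does the mapped bridge actually exist?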
[newsletter] May 2020 RDO Community Newsletter
by Rain Leander
Having difficulty with the format of this message? See it online at
http://rdoproject.org/newsletter/2020/may/
--
Is it time for an upgrade? The definition of DONE! OpenStack Foundation
announces the 21st version of the most widely deployed open source cloud
infrastructure software! And we send off our OpenStack Community Liaison
with love. Welcome to RDO Project's May 2020 Newsletter.
Housekeeping Items
All Your Repos Are Belong To Us
Ocata and Pike have been in the Extended Maintenance Phase
<https://releases.openstack.org/> for more than a year now, and the
promotion jobs <https://review.rdoproject.org/r/#/q/topic:remove_pike> used
to test these repos were dropped long ago
<https://review.rdoproject.org/r/#/c/16485/>. Now we are planning to drop
the Ocata <http://trunk.rdoproject.org/centos7-ocata/> and Pike
<http://trunk.rdoproject.org/centos7-pike/> trunk repos by the first week
of June 2020. We have already stopped building new commits
<https://softwarefactory-project.io/r/#/c/18347/> for both repos.
If anyone is still using these repos, please consider an upgrade to Queens
or any later release. If something is blocking you from upgrading or
migrating away from Ocata or Pike, please respond to this email thread
<https://lists.rdoproject.org/pipermail/dev/2020-May/009380.html> so we may
consider it before June.
RDO Changes
What’s Done is Done
For the past few releases, we’ve been waiting until the trailing projects
TripleO <https://wiki.openstack.org/wiki/TripleO> and Kolla
<https://wiki.openstack.org/wiki/Kolla> were completed and published before
sending out the official Release Announcement. Once upon a time, we
officially posted the technical definition of done
<https://blogs.rdoproject.org/2016/05/technical-definition-of-done/> for
the Mitaka release, and in yesterday’s RDO Community meeting we decided to
officially incorporate a ‘definition of done’ for each future release
cycle, starting with Ussuri <https://docs.openstack.org/ussuri/>.
To that end, the RDO Project agrees that before announcing a new release to
the community formally, the following specific criteria must be confirmed
within the CloudSIG builds:
- The three packstack all-in-one upstream scenarios can be executed
successfully.
- The four puppet-openstack-integration scenarios can be executed
successfully.
- TripleO container images can be built.
- TripleO standalone scenario001 can be deployed with the containers
from CloudSIG builds.
This also needs to be completed before the next major event following the
OpenStack release. In the case of Ussuri, since announcements cannot happen
on a Friday, Saturday, or Sunday, the announcement will happen no later
than Thursday, 28 May because the virtual PTG is 01-05 June. These criteria
have been agreed for the Ussuri GA and may be updated for future releases.
If you would like to contribute, comment or commend this change, please
feel free to join us on Wednesdays at 14:00 UTC for the weekly *RDO
community meeting* on the Freenode IRC channel #RDO.
Community News
We Wish You All The Best
RDO Project’s OpenStack Community Liaison, Rain Leander, is leaving Red Hat
and the RDO Project as a technical community manager. In their own words,
“this is not actually a good bye. ASIDE: I’m terrible at goodbyes. This is
“I’ll See You Around”. Because the next chapter is within open source.
Within cloud computing. Within edge. And while I absolutely cannot wait to
tell you about it, this is not the time for looking forward, but for
looking back. This is a time for celebrating the past. For thanking my
beautiful collaborators. Maybe for shedding a tear or two. Definitely for
expressing those difficult emotions. I love you, RDO Project. Thank you.
And I'll see you around.”
Community Meetings
Every Tuesday at 13:30 UTC, we have a weekly *TripleO CI community meeting*
on https://meet.google.com/bqx-xwht-wky with the agenda on
https://hackmd.io/IhMCTNMBSF6xtqiEd9Z0Kw. The TripleO CI meeting brings
together a group of people focused on Continuous Integration tooling and
systems who would like to provide a comprehensive testing framework that is
easily reproducible for TripleO contributors. This framework should also be
consumable by other CI systems (OPNFV, RDO, vendor CI, etc.), so that
TripleO can be tested the same way everywhere. This is NOT a place for
TripleO usage questions, rather, check out the next meeting listed just
below.
Every Tuesday at 14:00 UTC, immediately following the TripleO CI meeting is
the weekly *TripleO Community meeting* on the #TripleO channel on Freenode
IRC. The agenda for this meeting is posted each week in a public etherpad
<https://etherpad.openstack.org/p/tripleo-meeting-items>. This is for
addressing anything to do with TripleO, including usage, feature requests,
and bug reports.
Every Wednesday at 14:00 UTC, we have a weekly *RDO community meeting* on
the #RDO channel on Freenode IRC. The agenda for this meeting is posted
each week in a public etherpad
<https://etherpad.openstack.org/p/RDO-Meeting> and the minutes from the
meeting are posted on the RDO website
<https://www.rdoproject.org/community/community-meeting/>. If there's
something you'd like to see happen in RDO - a package that is missing, a
tool that you'd like to see included, or a change in how things are
governed - this is the best time and place to help make that happen.
Every Thursday at 15:00 UTC, there is a weekly *CentOS Cloud SIG meeting* on
the #centos-devel channel on Freenode IRC. The agenda for this meeting is
posted each week in a public etherpad
<https://etherpad.openstack.org/p/centos-cloud-sig> and the minutes from
the meeting are posted on the RDO website
<https://www.rdoproject.org/contribute/cloud-sig-meeting/>. This meeting
makes sense for people that are involved in packaging OpenStack for CentOS
and for people that are packaging OTHER cloud infra things (OpenNebula,
CloudStack, Euca, etc) for CentOS. “Alone we can do so little; together we
can do so much.” - Helen Keller
OpenStack News
OpenStack Ussuri Release Delivers Automation for Intelligent Open
Infrastructure
*AUSTIN, Texas - May 13, 2020* The OpenStack community today released Ussuri
<https://www.openstack.org/software/ussuri/>, the 21st version of the most
widely deployed open source cloud infrastructure software. The release
delivers advancements in three core areas:
- Ongoing improvements to the reliability of the core infrastructure
layer
- Enhancements to security and encryption capabilities
- Extended versatility to deliver support for new and emerging use cases
These improvements were designed and delivered by a global community of
upstream developers and operators. OpenStack software now powers more than
75 public cloud data centers and thousands of private clouds at a scale of
more than 10 million compute cores. OpenStack is the one infrastructure
platform uniquely suited to deployments of diverse architectures—bare
metal, virtual machines (VMs), graphics processing units (GPUs) and
containers.
For the Ussuri release, OpenStack received over 24,000 code changes by
1,003 developers from 188 different organizations and over 50 countries.
OpenStack is supported by a large, global open source community and is one
of the top three open source projects in the world in terms of active
contributions, along with the Linux kernel and Chromium.
Learn more about the 21st release of this open source cloud software
platform at
https://www.openstack.org/news/view/453/openstack-ussuri-release-lands-to...
Recent and Upcoming Events
Virtual Project Teams Gathering
The June Project Teams Gathering is going virtual since it is critical to
producing the next release. The virtual event will be held from Monday,
June 1 to Friday, June 5.
The event is open to all OSF projects, and teams are currently signing up
for their time slots. Find participating teams below, and the schedule will
be posted in the upcoming weeks.
Registration is now open! <https://virtualptgjune2020.eventbrite.com/>
Participating Teams include Airship, Automation SIG, Cinder, Edge Computing
Group, First Contact SIG, Interop WG, Ironic, Glance, Heat, Horizon, Kata
Containers, Kolla, Manila, Monasca, Multi-Arch SIG, Neutron, Nova, Octavia,
OpenDev, OpenStackAnsible, OpenStackAnsibleModules, OpenStack-Helm, Oslo,
QA, Scientific SIG, Security SIG, Tacker, and TripleO!
Continue to check https://www.openstack.org/ptg/ for event updates and if
you have any questions, please email ptg@openstack.org.
OpenDev: Three Part Virtual Event Series
OpenDev events bring together the developers and users of the open source
software powering today's infrastructure, to share best practices, identify
gaps, and advance the state of the art in open infrastructure. OpenDev
events focused on Edge Computing
<https://superuser.openstack.org/articles/report-cloud-edge-computing/> in
2017 and CI/CD in 2018. In 2020, OpenDev is a virtual series of three
separate events, each covering a different open infrastructure topic.
Participants can expect discussion oriented, collaborative sessions
exploring challenges, sharing common architectures, and collaborating
around potential solutions.
*Event #1: Large Scale Usage of Open Infrastructure* *June 29 - July 1,
2020*
Operating open infrastructure at scale presents common challenges and
constraints. During this event, users will share case studies and
architectures, discuss problem areas impacting their environments, and
collaborate around open source requirements directly with upstream
developers.
Topics include:
- Scaling user stories with the goal of pushing back cluster scaling
limits
- Upgrades
- Centralized compute vs distributed compute for NFV and edge computing
use case
- User Stories - challenges based on size of the deployment
Register at
https://www.eventbrite.com/e/opendev-large-scale-usage-of-open-infrastruc...
*Event #2: Hardware Automation topics* *July 20 - 22, 2020*
From hardware acceleration to running applications directly on bare metal,
hardware automation enables organizations to save resources and increase
productivity. During this OpenDev event, operators will discuss hardware
limitations for cloud provisioning, share networking challenges, and
collaborate on open source requirements directly with upstream developers.
Topics include:
- End-to-end hardware provisioning lifecycle for bare metal / cradle to
grave for hypervisors
- Networking
- Consuming bare metal infrastructure to provision cloud based workloads
Register at
https://www.eventbrite.com/e/opendev-hardware-automation-registration-104...
*Event #3: Containers in Production topics* *August 10 - 12, 2020*
Whether you want to run containerized applications on bare metal or VMs,
organizations are developing architectures for a variety of workloads.
During this event, users will discuss the infrastructure requirements to
support containers, share challenges from their production environments,
and collaborate on open source requirements directly with upstream
developers.
Topics include:
- Using OpenStack and containers together
- Security and Isolation
- Telco and Network Functions
- Bare metal and containers
- Acceleration and optimization
Register at
https://www.eventbrite.com/e/opendev-containers-in-production-registratio...
Other Events
Other RDO events, including the many OpenStack meetups around the world,
are always listed on the RDO events page <http://rdoproject.org/events>. If
you have an RDO-related event, please feel free to add it by submitting a
pull request on Github
<https://github.com/OSAS/rh-events/blob/master/2018/RDO-Meetups.yml>.
Keep in Touch
There are lots of ways to stay in touch with what's going on in the RDO
community. The best ways are …
WWW
- RDO <http://rdoproject.org/>
- OpenStack Q&A <http://ask.openstack.org/>
Mailing Lists
- Dev List <https://lists.rdoproject.org/mailman/listinfo/dev>
- Users List <https://lists.rdoproject.org/mailman/listinfo/users>
- This newsletter
<https://lists.rdoproject.org/mailman/listinfo/newsletter>
- CentOS Cloud SIG List
<https://lists.centos.org/mailman/listinfo/centos-devel>
- OpenShift on OpenStack SIG List
<https://commons.openshift.org/sig/OpenshiftOpenstack.html>
IRC on irc.freenode.net
- RDO Project #rdo
- TripleO #tripleo
- CentOS Cloud SIG #centos-devel
Social Media
- Twitter <http://twitter.com/rdocommunity>
- Facebook <http://facebook.com/rdocommunity>
- Youtube <https://www.youtube.com/RDOcommunity>
As always, thanks for being part of the RDO community!
--
K Rain Leander
OpenStack Community Liaison
Open Source Program Office
https://www.rdoproject.org/
http://community.redhat.com