[rdo-list] Python-shade in RDO
by Graeme Gillies
Hi,
A while ago there was a discussion around the python-shade library and
getting it into RDO [1].
It's been a few months since then, and shade is now successfully
packaged and shipped as part of Fedora [2], which is great, but now I
wanted to restart the conversation about how to make it available to
users of CentOS/RDO.
While it was suggested to get it into EPEL, I don't feel that is the
best course of action, simply because EPEL's restrictive update policies
would not allow us to update it as frequently as needed, and also
because python-shade depends on the Python OpenStack clients, which are
not part of EPEL (as far as I understand).
The best place for us to make this package available is in RDO itself,
as shade is an official OpenStack big tent project, and RDO aims to be
a distribution providing packages for OpenStack projects.
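For anyone who hasn't used it, shade wraps the individual OpenStack
clients behind one simple API. A minimal sketch (assuming a cloud named
"mycloud" is defined in your clouds.yaml):

    import shade

    # shade reads the credentials for "mycloud" from clouds.yaml
    cloud = shade.openstack_cloud(cloud='mycloud')
    for server in cloud.list_servers():
        print(server.name)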
So I just wanted to check with everyone and get some feedback; unless
there are any major objections, I am going to start looking at the
process for getting a new package into RDO, which I assume means putting
a review request in to the project https://github.com/rdo-packages
(though I assume a new repo needs to be created for it first).
Regards,
Graeme
[1] https://www.redhat.com/archives/rdo-list/2015-November/thread.html
[2] http://koji.fedoraproject.org/koji/packageinfo?packageID=21707
--
Graeme Gillies
Principal Systems Administrator
Openstack Infrastructure
Red Hat Australia
[rdo-list] Multiple tools for deploying and testing TripleO
by Arie Bregman
Hi,
I would like to start a discussion on the overlap between tools we
have for deploying and testing TripleO (RDO & RHOSP) in CI.
Several months ago, we worked on one common framework for deploying
and testing OpenStack (RDO & RHOSP) in CI. I think it's fair to say it
didn't work out well, which eventually led each group to focus on
developing its own existing or new tools.
What we have right now for deploying and testing
--------------------------------------------------------
=== Component CI, Gating ===
I'll start with the projects we created - I think that's only fair :)
* Ansible-OVB[1] - Provisions a TripleO Heat stack, using the OVB project.
* Ansible-RHOSP[2] - Product installation (RHOSP); one branch per release.
* Octario[3] - Testing using RPMs (pep8, unit, functional, tempest,
csit) + patching RPMs with submitted code.
=== Automation, QE ===
* InfraRed[4] - provision, install and test. Pluggable and modular;
allows you to create your own provisioner, installer and tester.
As far as I know, the group is now working on a different structure of
one main project and three sub-projects (provision, install and test).
=== RDO ===
I haven't used the RDO tools myself, so I apologize if I get something wrong:
* ~25 small, independent Ansible roles[5]. You can use one of them on
its own or several together. They are used for provisioning, installing
and testing TripleO.
* Tripleo-quickstart[6] - uses the micro roles to deploy TripleO and
test it; a sketch of the basic invocation follows below.
As I said, I haven't used these tools, so feel free to add any
information you think is relevant.
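From what I can tell from the README, the basic usage looks roughly like
this ($VIRTHOST being a CentOS virt host reachable as root over ssh;
please correct me if this is out of date):

    git clone https://github.com/openstack/tripleo-quickstart
    cd tripleo-quickstart
    # deploys a virtual undercloud (and optionally an overcloud) on $VIRTHOST
    bash quickstart.sh $VIRTHOST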
=== More? ===
I hope not. Let us know if you are familiar with more tools.
Conclusion
--------------
So as you can see, there are several projects that overlap in many
areas. Each group is basically implementing the same tasks (provision
resources, build/import overcloud images, run tempest, collect logs,
etc.)
Personally, I think it's a waste of resources. For each task there are
at least two people from different groups working on exactly the same
thing. The most recent example I can give is OVB: as far as I know,
both groups are working on implementing it in their sets of tools right
now.
On the other hand, you can always claim: "we already tried to work on
the same framework, and we failed to do it successfully" - right, but
maybe with better ground rules we can manage it. We would definitely
benefit a lot from doing that.
What's next?
----------------
So first of all, I would like to hear from you whether you think we can
collaborate once again, or whether it is actually better to keep things
as they are now.
If you agree that collaboration here makes sense, maybe you have ideas
on how we can do it better this time.
I think that setting up a meeting to discuss the right architecture
for the project(s) and decide on a good review/gating process would be
a good start.
Please let me know what you think, and keep in mind that this is not
about which tool is better! As you can see, I didn't mention the time
it takes for each tool to deploy and test, nor the full feature list
each one supports.
If possible, we should keep it about collaborating and not about
choosing the best tool. Our solution could eventually be a combination
of two or more tools (tripleo-red, infra-quickstart? :D )
"You may say I'm a dreamer, but I'm not the only one. I hope some day
you'll join us and the infra will be as one" :)
[1] https://github.com/redhat-openstack/ansible-ovb
[2] https://github.com/redhat-openstack/ansible-rhosp
[3] https://github.com/redhat-openstack/octario
[4] https://github.com/rhosqeauto/InfraRed
[5] https://github.com/redhat-openstack?utf8=%E2%9C%93&query=ansible-role
[6] https://github.com/openstack/tripleo-quickstart
[rdo-list] Maintaining os-*-config in Fedora
by James Slagle
os-*-config (apply, cloud, collect, refresh, net) are still maintained in
Fedora (even though the maintainers, myself included, have not been doing
regular builds).
I've been asked by a couple of people from Fedora to update these packages
to build Python 3 packages, and to do some other things like removing
outdated Requires on Python libraries that are now in the stdlib (argparse,
etc.).
This raised the question for me, though, of whether we still want to
maintain these packages in Fedora at all. AIUI, they were not retired when
the rest of the core OpenStack packages were retired because os-*-config is
useful when using Fedora as an instance orchestrated via Heat on OpenStack.
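For context, that use case works roughly like this: Heat pushes metadata
to the instance, os-collect-config fetches and caches it, and
os-apply-config renders configuration files from templates using that
data. A sketch from memory (the paths are the defaults as I remember
them, so treat them as illustrative):

    # fetch the metadata from Heat once, rather than polling
    os-collect-config --one-time
    # render the templates using the cached metadata
    os-apply-config \
        --metadata /var/lib/os-collect-config/ec2.json \
        --templates /usr/libexec/os-apply-config/templates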
However, we haven't been properly maintaining these packages in Fedora with
updated builds to pick up new releases, not to mention that these packages
and this use case are entirely untested on Fedora to the best of my
knowledge.
os-cloud-config, I think, can be retired from Fedora without any concern,
since it is specific to TripleO.
For the others, is there anyone who wants to take them over and continue
working on the "Fedora as an instance orchestrated via Heat" use case? If
not, I propose to retire them as well.
Note that diskimage-builder and dib-utils are also still in Fedora, but it
appears that pabelanger has been keeping those updated, so he is likely
using them.
--
-- James Slagle
Re: [rdo-list] Reminder: test day this week (Sept 29, 30)
by Luca 'remix_tj' Lorenzetto
On Tue, Sep 27, 2016 at 10:29 PM, Rich Bowen <rbowen(a)redhat.com> wrote:
> On 09/27/2016 04:21 PM, Luca 'remix_tj' Lorenzetto wrote:
>>
>> Hi,
>>
>> This could be interesting for our team. Is there any existing test plan?
>> We have a cluster where we are now experimenting with Mitaka TripleO
>> deployment; it is redeployed daily (more or less) for testing.
>>
>> We could also try building images based on RHEL 7 and testing them.
>
>
> There's a test plan, of sorts, at the URL above -
> https://www.rdoproject.org/testday/newton/testedsetups_rc/ - However, we
> really need help making that test plan both more comprehensive, and more
> accessible - that is, make it so that beginners can come help test, as
> well as experts.
The link given for the howto returns a 404
(https://www.rdoproject.org/testday/newton/milestone_rc#how-to-test).
What is the correct one?
>
> We would love to hear from you on rdo-list about scenarios that you'd
> like to see tested, as well as scenarios you are already testing.
We're mainly working on deployments with TripleO. Could those be
interesting as tests? I see only packstack-related tests...
Luca
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
[rdo-list] [TripleO] TripleO behind a proxy with a PyPI mirror
by John Marks
I am having issues installing TripleO behind a proxy/mirror environment. I
am using the USB key for the undercloud install, and it always breaks while
installing the "pbr" package into the virtualenv. It simply will not pull
the files from my mirror. Since I can't modify the USB key (read only), I
can't pass parameters to "python setup.py" to make it use the mirror. I
have tried defining pip.conf in the user's home directory, but it is not
used. (Run by hand, pip uses it just fine.) Any hints would be appreciated.
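For reference, this is the kind of pip.conf I have been trying (hostnames
are examples standing in for my real mirror and proxy):

    [global]
    index-url = https://mirror.example.com/pypi/simple/
    trusted-host = mirror.example.com
    proxy = http://proxy.example.com:3128

I also wonder whether exporting PIP_INDEX_URL in the environment would
survive into the virtualenv where ~/.pip/pip.conf apparently does not -
has anyone tried that?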
[rdo-list] [TripleO] undercloud dashboard installation
by Samuel Monderer
Hi,
Is the undercloud dashboard installed together with the undercloud, i.e.
will the dashboard be installed after running "openstack undercloud
install"? If that is the case, how can I access it from the external
network?
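For example, would an SSH tunnel through the undercloud host be the
expected way in? Something like the following (I am guessing the
dashboard listens on port 3000; adjust as needed):

    # forward local port 8080 to the dashboard on the undercloud
    ssh -N -L 8080:localhost:3000 stack@<undercloud-host>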
Regards,
Samuel
[rdo-list] Nova scheduler tries to assign an already-in-use SRIOV QAT VF to a new instance (openstack-mitaka)
by Chinmaya Dwibedy
Hi,
Upon trying to create a VM instance (say A) with one QAT VF, it fails with
the following error: "Requested operation is not valid: PCI device
0000:88:04.7 is in use by driver QEMU, domain instance-00000081". Please
note that PCI device 0000:88:04.7 is already assigned to another VM
(say B). We have installed the OpenStack Mitaka release on a CentOS 7
system. It has two Intel QAT devices, with 32 VF devices available per
QAT device (DH895xCC). Out of the 64 VFs, only 8 are allocated (to VM
instances); the rest should be available.
But the nova scheduler tries to assign an already-in-use SRIOV VF to a new
instance, and the instance fails. It appears that the nova database is not
tracking which VFs have already been taken. If I shut down instance B,
then instance A boots up, and vice versa; the two instances cannot run
simultaneously because of this issue. We should always be able to create
as many instances with the requested PCI devices as there are available
VFs.
Can anyone suggest why nova tries to assign a PCI device that has already
been assigned? Is there any way to resolve this issue? Please feel free to
let me know if additional information is needed. Thank you in advance for
your support and help.
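For reference, our PCI passthrough configuration looks roughly like this
(vendor/product IDs taken from the lspci output below; the "qat" alias
name is our own):

    # nova.conf on the compute node: expose the QAT VFs
    pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "0443"}
    # nova.conf on the controller: define an alias for the VFs
    pci_alias = {"vendor_id": "8086", "product_id": "0443", "name": "qat"}

and the flavor requests one VF via the alias:

    nova flavor-key <flavor> set "pci_passthrough:alias"="qat:1"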
[root@localhost ~(keystone_admin)]# lspci -d:435
83:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
88:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# lspci -d:443 | grep "QAT Virtual
Function" | wc -l
64
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# mysql -u root nova -e "SELECT
hypervisor_hostname, address, instance_uuid, status FROM pci_devices JOIN
compute_nodes ON compute_nodes.id=compute_node_id" | grep 0000:88:04.7
localhost 0000:88:04.7 e10a76f3-e58e-4071-a4dd-7a545e8000de allocated
localhost 0000:88:04.7 c3dbac90-198d-4150-ba0f-a80b912d8021 allocated
localhost 0000:88:04.7 c7f6adad-83f0-4881-b68f-6d154d565ce3 allocated
localhost.nfv.benunets.com 0000:88:04.7 0c3c11a5-f9a4-4f0d-b120-40e4dde843d4 allocated
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# grep -r e10a76f3-e58e-4071-a4dd-7a545e8000de /etc/libvirt/qemu
/etc/libvirt/qemu/instance-00000081.xml: <uuid>e10a76f3-e58e-4071-a4dd-7a545e8000de</uuid>
/etc/libvirt/qemu/instance-00000081.xml: <entry name='uuid'>e10a76f3-e58e-4071-a4dd-7a545e8000de</entry>
/etc/libvirt/qemu/instance-00000081.xml: <source file='/var/lib/nova/instances/e10a76f3-e58e-4071-a4dd-7a545e8000de/disk'/>
/etc/libvirt/qemu/instance-00000081.xml: <source path='/var/lib/nova/instances/e10a76f3-e58e-4071-a4dd-7a545e8000de/console.log'/>
/etc/libvirt/qemu/instance-00000081.xml: <source path='/var/lib/nova/instances/e10a76f3-e58e-4071-a4dd-7a545e8000de/console.log'/>
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# grep -r 0c3c11a5-f9a4-4f0d-b120-40e4dde843d4 /etc/libvirt/qemu
/etc/libvirt/qemu/instance-000000ab.xml: <uuid>0c3c11a5-f9a4-4f0d-b120-40e4dde843d4</uuid>
/etc/libvirt/qemu/instance-000000ab.xml: <entry name='uuid'>0c3c11a5-f9a4-4f0d-b120-40e4dde843d4</entry>
/etc/libvirt/qemu/instance-000000ab.xml: <source file='/var/lib/nova/instances/0c3c11a5-f9a4-4f0d-b120-40e4dde843d4/disk'/>
/etc/libvirt/qemu/instance-000000ab.xml: <source path='/var/lib/nova/instances/0c3c11a5-f9a4-4f0d-b120-40e4dde843d4/console.log'/>
/etc/libvirt/qemu/instance-000000ab.xml: <source path='/var/lib/nova/instances/0c3c11a5-f9a4-4f0d-b120-40e4dde843d4/console.log'/>
[root@localhost ~(keystone_admin)]#
On the controller, it appears there are duplicate PCI device entries in
the database:
MariaDB [nova]> select hypervisor_hostname,address,count(*) from
pci_devices JOIN compute_nodes on compute_nodes.id=compute_node_id group by
hypervisor_hostname,address having count(*) > 1;
+---------------------+--------------+----------+
| hypervisor_hostname | address | count(*) |
+---------------------+--------------+----------+
| localhost | 0000:05:00.0 | 3 |
| localhost | 0000:05:00.1 | 3 |
| localhost | 0000:83:01.0 | 3 |
| localhost | 0000:83:01.1 | 3 |
| localhost | 0000:83:01.2 | 3 |
| localhost | 0000:83:01.3 | 3 |
| localhost | 0000:83:01.4 | 3 |
| localhost | 0000:83:01.5 | 3 |
| localhost | 0000:83:01.6 | 3 |
| localhost | 0000:83:01.7 | 3 |
| localhost | 0000:83:02.0 | 3 |
| localhost | 0000:83:02.1 | 3 |
| localhost | 0000:83:02.2 | 3 |
| localhost | 0000:83:02.3 | 3 |
| localhost | 0000:83:02.4 | 3 |
| localhost | 0000:83:02.5 | 3 |
| localhost | 0000:83:02.6 | 3 |
| localhost | 0000:83:02.7 | 3 |
| localhost | 0000:83:03.0 | 3 |
| localhost | 0000:83:03.1 | 3 |
| localhost | 0000:83:03.2 | 3 |
| localhost | 0000:83:03.3 | 3 |
| localhost | 0000:83:03.4 | 3 |
| localhost | 0000:83:03.5 | 3 |
| localhost | 0000:83:03.6 | 3 |
| localhost | 0000:83:03.7 | 3 |
| localhost | 0000:83:04.0 | 3 |
| localhost | 0000:83:04.1 | 3 |
| localhost | 0000:83:04.2 | 3 |
| localhost | 0000:83:04.3 | 3 |
| localhost | 0000:83:04.4 | 3 |
| localhost | 0000:83:04.5 | 3 |
| localhost | 0000:83:04.6 | 3 |
| localhost | 0000:83:04.7 | 3 |
| localhost | 0000:88:01.0 | 3 |
| localhost | 0000:88:01.1 | 3 |
| localhost | 0000:88:01.2 | 3 |
| localhost | 0000:88:01.3 | 3 |
| localhost | 0000:88:01.4 | 3 |
| localhost | 0000:88:01.5 | 3 |
| localhost | 0000:88:01.6 | 3 |
| localhost | 0000:88:01.7 | 3 |
| localhost | 0000:88:02.0 | 3 |
| localhost | 0000:88:02.1 | 3 |
| localhost | 0000:88:02.2 | 3 |
| localhost | 0000:88:02.3 | 3 |
| localhost | 0000:88:02.4 | 3 |
| localhost | 0000:88:02.5 | 3 |
| localhost | 0000:88:02.6 | 3 |
| localhost | 0000:88:02.7 | 3 |
| localhost | 0000:88:03.0 | 3 |
| localhost | 0000:88:03.1 | 3 |
| localhost | 0000:88:03.2 | 3 |
| localhost | 0000:88:03.3 | 3 |
| localhost | 0000:88:03.4 | 3 |
| localhost | 0000:88:03.5 | 3 |
| localhost | 0000:88:03.6 | 3 |
| localhost | 0000:88:03.7 | 3 |
| localhost | 0000:88:04.0 | 3 |
| localhost | 0000:88:04.1 | 3 |
| localhost | 0000:88:04.2 | 3 |
| localhost | 0000:88:04.3 | 3 |
| localhost | 0000:88:04.4 | 3 |
| localhost | 0000:88:04.5 | 3 |
| localhost | 0000:88:04.6 | 3 |
| localhost | 0000:88:04.7 | 3 |
+---------------------+--------------+----------+
66 rows in set (0.00 sec)
MariaDB [nova]>
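My next step is to check whether the duplicate rows are just soft-deleted
leftovers, since nova marks rows with a "deleted" value rather than
removing them. Something like:

    SELECT address, status, pci_devices.deleted
    FROM pci_devices
    JOIN compute_nodes ON compute_nodes.id = compute_node_id
    WHERE address = '0000:88:04.7';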
Regards,
Chinmaya
[rdo-list] [Meeting] RDO meeting (2016-09-28)
by Alfredo Moralejo Alonso
==============================
#rdo: RDO meeting - 2016-09-28
==============================
Meeting started by amoralej at 15:02:13 UTC. The full logs are available
at
https://meetbot.fedoraproject.org/rdo/2016-09-28/rdo_meeting_-_2016-09-28...
.
Meeting summary
---------------
* DLRN Ocata branch preparation (amoralej, 15:04:04)
* ocata will be open and u-c pins will be maintained in newton tag
until newton GA (amoralej, 15:14:43)
* ACTION: number80 will check with centos team about why
openstack-ocata doesn't exist yet in buildlogs (amoralej, 15:19:45)
* Announcements (amoralej, 15:19:56)
* Test day, tomorrow and Friday -
https://www.rdoproject.org/testday/newton/rc/ (amoralej, 15:20:15)
* LINK:
https://raw.githubusercontent.com/redhat-openstack/tempest/master/tools/i...
(chandankumar, 15:22:43)
* Important dates (amoralej, 15:26:50)
* CentOS outage October 10th:
https://lists.centos.org/pipermail/ci-users/2016-September/000392.html
(amoralej, 15:27:22)
* rcip-dev (review.rdoproject.org) outage >= October 15th (not
announced yet) (amoralej, 15:27:58)
* review.rdoproject.org maintenance 2016-10-10 13:00 UTC:
https://www.redhat.com/archives/rdo-list/2016-September/msg00095.html
(amoralej, 15:28:15)
* Newton release: October 6th:
https://releases.openstack.org/newton/index.html (amoralej,
15:28:29)
* LINK: https://etherpad.openstack.org/p/rdo-barcelona-meetup-schedule
(rbowen, 15:29:03)
* open floor (amoralej, 15:34:08)
Meeting ended at 15:43:33 UTC.
Action Items
------------
* number80 will check with centos team about why openstack-ocata doesn't
exist yet in buildlogs
Action Items, by person
-----------------------
* number80
* number80 will check with centos team about why openstack-ocata
doesn't exist yet in buildlogs
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* amoralej (60)
* jpena (17)
* rbowen (12)
* dmsimard (11)
* chandankumar (11)
* zodbot (9)
* number80 (8)
* EmilienM (5)
* bandini (3)
* trown (2)
* panda (2)
* myoung (1)
* Duck (1)
* jschlueter (1)
* imcsk8 (1)
* gchamoul (1)
* rdogerrit (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot