A while ago there was a discussion about the python-shade library and
getting it into RDO.
It's been a few months since then, and shade is now successfully
packaged and shipped as part of Fedora, which is great, but now I
wanted to restart the conversation about how to make it available to
users of CentOS/RDO.
While it was suggested to get it into EPEL, I don't feel that is the
best course of action, simply because EPEL's restrictive update
policies would not allow us to update it as frequently as needed, and
also because python-shade depends on the Python OpenStack clients,
which are not part of EPEL (as far as I understand).
The best place for us to make this package available is in RDO itself,
as shade is an official OpenStack big tent project, and RDO aims to be
a distribution providing packages for OpenStack projects.
So I just wanted to confirm with everyone and get some feedback, but
unless there are any major objections, I was going to start looking at
the process for getting a new package into RDO, which I assume means
submitting a review request to the project at https://github.com/rdo-packages
(though I assume a new repo needs to be created for it first).
Principal Systems Administrator
Red Hat Australia
I would like to start a discussion on the overlap between tools we
have for deploying and testing TripleO (RDO & RHOSP) in CI.
Several months ago, we worked on one common framework for deploying
and testing OpenStack (RDO & RHOSP) in CI. I think you can say it
didn't work out well, which eventually led each group to focus on
developing other existing/new tools.
What we have right now for deploying and testing
=== Component CI, Gating ===
I'll start with the projects we created, I think that's only fair :)
* Ansible-OVB - Provisions a TripleO Heat stack, using the OVB project.
* Ansible-RHOSP - Product installation (RHOSP). Branch per release.
* Octario - Testing using RPMs (pep8, unit, functional, tempest,
csit) + patching RPMs with submitted code.
=== Automation, QE ===
* InfraRed - Provision, install and test. Pluggable and modular;
allows you to create your own provisioner, installer and tester.
As far as I know, the group is now working on a different structure:
one main project and three sub-projects (provision, install and test).
=== RDO ===
I didn't use RDO tools, so I apologize if I got something wrong:
* Around 25 small, independent Ansible roles. You can choose to use
one of them or several together. They are used for provisioning,
installing and testing TripleO.
* Tripleo-quickstart - uses the small roles for deploying TripleO
and testing it.
As I said, I didn't use the tools, so feel free to add more
information you think is relevant.
=== More? ===
I hope not. Let us know if you are familiar with more tools.
So as you can see, there are several projects that eventually overlap
in many areas. Each group is basically performing the same tasks
(provision resources, build/import overcloud images, run tempest,
collect logs, etc.).
Personally, I think it's a waste of resources. For each task there are
at least two people from different groups working on exactly the same
thing. The most recent example I can give is OVB. As far as I know,
both groups are working on implementing it in their sets of tools right
now.
On the other hand, you can always claim: "we already tried to work on
the same framework, and we failed to do it successfully" - right, but
maybe with better ground rules we can manage it. We would definitely
benefit a lot from doing that.
So first of all, I would like to hear from you whether you think we
can collaborate once again, or whether it is actually better to keep
things as they are.
If you agree that collaboration here makes sense, maybe you have ideas
on how we can do it better this time.
I think that setting up a meeting to discuss the right architecture
for the project(s) and decide on good review/gating process, would be
a good start.
Please let me know what you think, and keep in mind that this is not
about which tool is better! As you can see, I didn't mention the time
it takes for each tool to deploy and test, nor the full feature list
each one supports.
If possible, we should keep it about collaborating and not choosing
the best tool. Our solution could be the combination of two or more
tools eventually (tripleo-red, infra-quickstart? :D )
"You may say I'm a dreamer, but I'm not the only one. I hope some day
you'll join us and the infra will be as one" :)
os-*-config (apply, cloud, collect, refresh, net) are still maintained in
Fedora (even though the maintainers, myself included, have not been doing
much with them lately).
I've been asked by a couple people from Fedora to update these packages to
build python 3 packages, and some other things like removing outdated Requires
on python libraries that are now in the stdlib (argparse, etc).
This raised the question for me, though, of whether we still want to maintain
these packages in Fedora at all. As I understand it, they were not retired
when the rest of the core OpenStack packages were retired because os-*-config
is useful when using Fedora as an instance orchestrated via Heat on OpenStack.
However, we haven't been properly maintaining these packages in Fedora by doing
updated builds to pick up new releases, not to mention that these packages and
use case are entirely untested on Fedora to the best of my knowledge.
os-cloud-config I think we can retire from Fedora without any concern since it
is specific to TripleO.
For the others, is there anyone who wants to take them over to continue to work
on the "Fedora as an instance orchestrated via Heat" use case? If not, I
propose to retire the others as well.
Note that diskimage-builder and dib-utils are also still in Fedora, but it
appears that pabelanger has been keeping those updated, so he is likely
using them.
-- James Slagle
On Tue, Sep 27, 2016 at 10:29 PM, Rich Bowen <rbowen(a)redhat.com> wrote:
> On 09/27/2016 04:21 PM, Luca 'remix_tj' Lorenzetto wrote:
>> This could be interesting for our team. Is there any existing test plan?
>> We have a cluster where we are now experimenting with a Mitaka TripleO
>> deployment, and it is redeployed daily (more or less) for testing.
>> We could also test building images based on RHEL 7.
> There's a test plan, of sorts, at the URL above -
> https://www.rdoproject.org/testday/newton/testedsetups_rc/ - However, we
> really need help making that test plan both more comprehensive, and more
> accessible - that is, make it so that beginners can come help test, as
> well as experts.
The link reported for the howto is returning a 404.
Which is the right one?
> We would love to hear from you on rdo-list about scenarios that you'd
> like to see tested, as well as scenarios you are already testing.
We're mainly working on deployment with TripleO. Could that be interesting for tests?
I see only packstack-related tests...
"It is absurd to employ men of excellent intelligence to perform
calculations that could be entrusted to anyone if machines were used."
Gottfried Wilhelm von Leibnitz, philosopher and mathematician (1646-1716)
"The Internet is the biggest library in the world.
The problem is that the books are all scattered on the floor."
John Allen Paulos, mathematician (1945-living)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
I am interested in understanding what comprises your release build system, as described on https://www.rdoproject.org/documentation/rdo-packaging/. I understand that DLRN is used for per-commit builds. But, I cannot find any description of your “CentOS Build System” which builds your release RPMs. Any information would be very helpful.
I am having issues installing TripleO behind a proxy/mirror environment. I
am using the USB key for the undercloud install, and it always breaks
installing the "pbr" package into the virtualenv. It simply will not pull
the files from my mirror. Since I can't modify the USB key (read-only), I
can't pass parameters to "python setup.py" to use the mirror. I have tried
defining pip.conf in the user's home directory, but it is not used. (By hand,
pip will use it just fine.) Any hints would be appreciated.
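For what it's worth, one thing that may be worth trying: pip also reads its index settings from environment variables, which can sometimes reach an install inside a virtualenv even when a pip.conf in the home directory is not picked up. A rough sketch, assuming a hypothetical mirror at http://mirror.example.com/pypi/simple (substitute your own mirror URL):

```shell
# Hypothetical mirror URL - replace with your local mirror.
export PIP_INDEX_URL=http://mirror.example.com/pypi/simple
export PIP_TRUSTED_HOST=mirror.example.com

# Equivalent pip.conf; newer pip versions look in ~/.config/pip/pip.conf,
# older ones in ~/.pip/pip.conf, so writing both may help.
mkdir -p ~/.config/pip ~/.pip
cat > ~/.config/pip/pip.conf <<'EOF'
[global]
index-url = http://mirror.example.com/pypi/simple
trusted-host = mirror.example.com
EOF
cp ~/.config/pip/pip.conf ~/.pip/pip.conf
```

Whether the environment variables survive into the undercloud install depends on how the installer invokes pip (sudo, for example, may strip them), so this is only a guess.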
Is the undercloud dashboard installed together with the undercloud, i.e. will
the dashboard be installed after running "openstack undercloud install"?
If that is the case, how can I access it from the external network?