Attempt to reproduce Carlos Camacho's simplest deployment (controller+compute) now fails as well.
by Boris Derzhavets
Attempted just via a remote connection to the VIRTHOST (32 GB) from an F27 workstation. The overcloud deployment fails with this error:
2018-02-16 19:02:51 | 2018-02-16 19:02:39Z [overcloud.Compute.0]: CREATE_COMPLETE Stack CREATE completed successfully
2018-02-16 19:02:51 | 2018-02-16 19:02:40Z [overcloud.Compute.0]: CREATE_COMPLETE state changed
2018-02-16 19:02:51 | 2018-02-16 19:02:40Z [overcloud.Compute]: UPDATE_COMPLETE Stack UPDATE completed successfully
2018-02-16 19:02:51 | 2018-02-16 19:02:40Z [overcloud.Controller.0.ControllerExtraConfigPre]: CREATE_COMPLETE state changed
2018-02-16 19:02:51 | 2018-02-16 19:02:40Z [overcloud.Controller.0.NodeExtraConfig]: CREATE_IN_PROGRESS state changed
2018-02-16 19:02:51 | 2018-02-16 19:02:41Z [overcloud.Compute]: CREATE_COMPLETE state changed
2018-02-16 19:02:51 | 2018-02-16 19:02:42Z [overcloud.ComputeServers]: CREATE_IN_PROGRESS state changed
2018-02-16 19:02:51 | 2018-02-16 19:02:42Z [overcloud.ComputeServers]: CREATE_COMPLETE state changed
2018-02-16 19:02:51 | 2018-02-16 19:02:42Z [overcloud]: CREATE_FAILED list index out of range
2018-02-16 19:02:51 | 2018-02-16 19:02:42Z [overcloud.Controller.0.NodeExtraConfig]: CREATE_COMPLETE state changed
2018-02-16 19:02:51 |
2018-02-16 19:02:51 | Stack overcloud CREATE_FAILED
2018-02-16 19:02:51 |
2018-02-16 19:02:51 | + status_code=1
2018-02-16 19:02:51 | + openstack stack list
2018-02-16 19:02:51 | + grep -q overcloud
2018-02-16 19:02:55 | + openstack stack list
2018-02-16 19:02:55 | + grep -Eq '(CREATE|UPDATE)_COMPLETE'
2018-02-16 19:02:57 | + openstack stack failures list overcloud --long
2018-02-16 19:03:01 | ++ openstack stack resource list --nested-depth 5 overcloud
2018-02-16 19:03:01 | ++ grep FAILED
2018-02-16 19:03:01 | ++ grep 'StructuredDeployment '
2018-02-16 19:03:01 | ++ cut -d '|' -f3
2018-02-16 19:03:16 | + exit 1
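For reference, the filter pipeline at the end of the log (`grep FAILED | grep 'StructuredDeployment ' | cut ...`) is how the quickstart script extracts the physical resource ids of failed deployments. A minimal sketch of what it does, run against a hypothetical two-row sample of `openstack stack resource list` output (the row contents are made up for illustration):

```shell
# Hypothetical rows mimicking `openstack stack resource list --nested-depth 5 overcloud`
sample='| ControllerDeployment | 1234 | OS::Heat::StructuredDeployment | CREATE_FAILED | 2018-02-16T19:02:42Z |
| ComputeDeployment | 5678 | OS::Heat::StructuredDeployment | CREATE_COMPLETE | 2018-02-16T19:02:40Z |'

# Same filter chain as in the log: keep FAILED StructuredDeployment rows,
# then cut out the physical resource id (third |-separated field)
failed_ids=$(printf '%s\n' "$sample" \
  | grep FAILED \
  | grep 'StructuredDeployment ' \
  | cut -d '|' -f3)
echo "$failed_ids"
```

The resulting ids can then be fed to `openstack software deployment show` to see the actual failure output.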
Strategy on packaging external dependencies in RDO + include Ansible in RDO
by Emilien Macchi
Before getting into my specific context, I would like to raise the point that we have more and more dependencies outside of OpenStack (Ceph, MariaDB, Ansible, OVS, etc.), and we know that all of them can easily "break" projects like TripleO and therefore cause problems in the production chain.
I would like to understand whether we have a common path for these projects to include them in delorean and in our machinery, so we can easily test them in RDO CI (and stop the manual testing we are doing now).
We all know that we can't and don't want to consume EPEL, but for some projects this is going to be problematic; I'll discuss one of them a bit later in this email.
My proposal is the following:
- Identify which projects have caused us trouble in the past months and that we are not automatically testing when new versions bump up (OVS? Ceph? etc.)
- For these projects, decide how we could either 1) import them in delorean and automatically bump them at each new tag, with a proper CI job in place, or 2) have a repo that is nightly built from imports of the latest tags available from their upstream repos, and have periodic jobs that test these bits.
What I like about 1) is that we can easily test them independently (e.g. a new version of OVS), versus all together in 2) (a new version of OVS + a new version of MariaDB, etc.).
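For proposal 1), the "automatically bump at each new tag" step boils down to comparing the latest upstream tag against what is currently packaged. A minimal sketch of that comparison (all version values here are hypothetical; a real periodic job would fetch the tag list with something like `git ls-remote --tags`):

```shell
# Pick the newest tag from a list, using GNU version sort
latest_tag() {
    printf '%s\n' "$@" | sort -V | tail -n1
}

packaged="2.9.0"                         # hypothetical version currently in the repo
latest=$(latest_tag 2.8.1 2.9.0 2.10.2)  # hypothetical upstream tags

if [ "$latest" != "$packaged" ]; then
    # In a real periodic job, this branch would trigger an import and a CI run
    echo "new upstream tag $latest available (packaged: $packaged)"
fi
```

The point of `sort -V` is that it orders versions numerically per component, so 2.10.2 correctly sorts after 2.9.0.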
Which leads to my second question... how are we going to ship Ansible?
I talked with Sam Doran (in cc) today, and it seems like in the near future Ansible will be shipped via releases.ansible.com or EPEL (and not from CentOS Extras anymore).
So here are the questions:
1) How are we going to test new versions of Ansible in the RDO CI context?
2) Where should we ship it? Going back to my proposal, would it make sense to import Ansible into RDO and gate each bump (proposal #1), or to take the upstream bits, put them in a deps repo (as we currently have, I believe), and test them with periodic jobs alongside other new deps?
Discussion is open, thanks for participating,
We Need You! For the RDO Booth at OpenStack Summit Vancouver
by K Rain Leander
Hello RDO Community!
We are gearing up for OpenStack Summit Vancouver, where we'll have our regular presence within the Red Hat booth. This spring we'll continue the ManageIQ / RDO demo we started last year, and we'd love to see your demos, too! If you've got something you'd like to demo via video OR LIVE because you're not afraid of ANYTHING, sign up.
We're also looking for people to spend their precious free time answering questions at the RDO booth. If you're attending, please consider spending one or more of your free moments with us. Questions run the gamut from "What's RDO?" to "I found a bug in Neutron and need help troubleshooting it." Of course, you're not expected to know everything, and you can always keep IRC open to reach the RDO Community for help! Plus, you get cool stuff! If you sign up for three or more shifts, we're going to hook you up with an RDO hoodie.
If you have any thoughts / questions / concerns about space / schematics / details, catch me on IRC or email me directly.
Thanks so much!
Fwd: [openstack-community] OpenStack Day Sao Paulo (Brazil) - Call4papers
by Rich Bowen
-------- Forwarded Message --------
Subject: [openstack-community] OpenStack Day Sao Paulo (Brazil) - Call4papers
Date: Mon, 19 Mar 2018 23:16:33 -0300
From: Marcelo Dieder <marcelodieder(a)gmail.com>
The Call for Papers for OpenStack Day Sao Paulo 2018 is open. The event will be held on 27-28 July!
Call for papers: https://goo.gl/V8zSCA
Call for sponsors: https://goo.gl/5DcRQh
OpenStack Day Sao Paulo site: http://openstackbr.com.br/events/2018/
RDO infra/TripleO-CI teams integration level on infrastructure
by Gabriele Cerami
The TripleO CI team had a design session yesterday on the tasks for the current sprint, which addresses infra.
There are some open questions left, because some important roles in the team were unavailable, and because we had to create and reprioritize some of the cards yesterday due to design decisions.
I think the questions involve the RDO infra people, especially the second one. But in general, I'd like to understand what level of integration we want between the two groups regarding everything related to infrastructure servers.
1) We spoke about a bastion host. We forgot to create a card for it during planning, and yesterday we failed to agree on the scope of the task.
A bastion host is a specific security element, with certain rules that have to be respected for it to be useful. The bastion host is the single access point to all the other servers on the infrastructure; it's the only one with a public IP, and that means *EVERYTHING* needs to go through it: log exposure, web page access (sova included), SSH access. Implementing it will take more time, also because we have to understand, for example, whether sova will work behind a reverse proxy.
The card tracking the bastion is here
So the question is: is the bastion host in scope, or did we have something different in mind when we talked about it?
We also more or less agreed that the bastion host will be present only on the infra tenant, not on the nodepool tenant, where it would only serve to bastion the te-broker. So the te-broker will be completely exposed, and it will have a public IP.
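To make the "single access point" rule above concrete, here is what client-side access could look like with OpenSSH's ProxyJump directive. This is only a sketch; the hostnames, IP, and user are hypothetical, not taken from the actual infra:

```
# ~/.ssh/config -- all infra hosts are reached through the bastion
Host bastion
    HostName 203.0.113.10      # hypothetical: the only public IP in the tenant
    User centos

Host logserver sova
    ProxyJump bastion          # everything else tunnels through the bastion
    User centos
```

With this in place, `ssh logserver` transparently tunnels through the bastion, so none of the internal servers need a public IP of their own.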
2) We understood that we need a common set of roles that will set up common aspects of every server in the infra: credentials, networks, continuous-deployment setup. The role will store a file with keys, for example, and this file will be used to set up all the other servers (or only the bastion host).
This task is tracked here, but we had to reprioritize the card, as it is the common ground for all the other "server setup" cards.
While beginning work on this card yesterday, I asked dmsimard a question, and he showed me the common roles that are already in place to set up RDO infra servers. These common roles cover 70-80% of this card.
The question is: what level of integration do we want to achieve between our two groups? Can we modify these roles to be a bit more generic, so they can be used to set up our infrastructure too? Can we, for example, modify the log server to accept logs from remote journald on our servers, so we can already use the log server to expose our promotion logs?
I would certainly hope so. We already agreed that the RDO infra team needs to have access to our servers in case of need; it only makes sense that we use something they are already familiar with.
My implementation will probably be very similar to what I see in the rdo-infra repository anyway.
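The "logs from remote journald" idea above can be sketched with systemd's own tooling, assuming the systemd-journal-remote package is available on both ends (the log server URL below is hypothetical):

```
# On each infra server: /etc/systemd/journal-upload.conf
[Upload]
URL=https://logs.example.com:19532

# then enable the uploader:
#   systemctl enable --now systemd-journal-upload

# On the log server, accept the uploads:
#   systemctl enable --now systemd-journal-remote.socket
```

systemd-journal-upload pushes journal entries to the URL given in `[Upload]`, and systemd-journal-remote on the receiving side writes them into local journal files, which the existing log server could then expose like any other logs.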
Thanks for any feedback