On Tue, Sep 8, 2015 at 10:42 AM, James Slagle <jslagle(a)redhat.com> wrote:
On Mon, Sep 07, 2015 at 02:07:56PM +0100, Steven Hardy wrote:
> Hi Tim,
>
> On Sun, Sep 06, 2015 at 07:35:30AM +0000, Tim Bell wrote:
> > Reading the RDO September newsletter, I noticed a mail thread
> > (https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html) on
> > the future of packstack vs rdo-manager.
> >
> > We use packstack to spin up small OpenStack instances for development
> > and testing. Typical cases are to have a look at the features of the
> > latest releases or do some prototyping of an option we've not tried yet.
> >
> > It was not clear to me based on the mailing list thread how this could
> > be done using rdo-manager unless you already have the undercloud
> > configured by RDO.
>
> > Have there been any further discussions around packstack's future?
>
> Thanks for raising this - I am aware that a number of folks have been
> thinking about this topic (myself included), but I don't think we've
> reached a definitive consensus on the path forward yet.
>
> Here's my view on the subject:
>
> 1. Packstack is clearly successful, useful to a lot of folks, and does
> satisfy a use-case currently not well served via rdo-manager, so IMO we
> absolutely should maintain it until that is no longer the case.
>
> 2. Many people are interested in easier ways to stand up PoC environments
> via rdo-manager, so we do need to work on ways to make that easier (or
> even possible at all in the single-node case).
>
> 3. It would be really great if we could figure out (2) in such a way as
> to enable a simple migration path from packstack to whatever the PoC mode
> of rdo-manager ends up being; for example, perhaps we could have an
> rdo-manager interface which is capable of consuming a packstack answer
> file?
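To make that idea concrete: a packstack answer file is plain INI (CONFIG_*
keys under a [general] section), so a hypothetical converter could start
from stock tooling. A rough sketch (no such rdo-manager interface exists
today; the sample keys are illustrative):

```python
# Rough sketch only: read a packstack-style answer file with configparser.
# Any mapping from CONFIG_* keys to rdo-manager settings would still need
# to be designed -- this just shows the parsing side.
import configparser

SAMPLE_ANSWERS = """\
[general]
CONFIG_CONTROLLER_HOST=192.0.2.10
CONFIG_NTP_SERVERS=0.pool.ntp.org
CONFIG_CINDER_INSTALL=y
"""

def load_answers(text):
    parser = configparser.ConfigParser()
    parser.optionxform = str  # keep CONFIG_* keys uppercase
    parser.read_string(text)
    return dict(parser["general"])

answers = load_answers(SAMPLE_ANSWERS)
print(answers["CONFIG_CONTROLLER_HOST"])  # -> 192.0.2.10
```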
>
> Re the thread you reference, it raises a number of interesting questions,
> particularly the similarities/differences between an all-in-one packstack
> install and an all-in-one undercloud install:
>
> From an abstract perspective, installing an all-in-one undercloud looks a
> lot like installing an all-in-one packstack environment: both sets of
> tools take a config file and create a puppet-configured all-in-one
> OpenStack.
>
> But there's a lot of potential complexity related to providing a
> flexible/configurable deployment (like packstack) vs an opinionated
> bootstrap environment (e.g. the current instack undercloud environment).
Besides there being some TripleO related history (which I won't bore
everyone with), the above is a big reason why we didn't just use packstack
originally to install the all-in-one undercloud.

As you point out, the undercloud installer is very opinionated by design.
It's not meant to be a flexible all-in-one *OpenStack* installer, nor do I
think we want to turn it into one. That would just end up reimplementing
packstack.
>
> There are a few possible approaches:
>
> - Do the work to enable a more flexibly configured undercloud, and just
> have that as the "all in one" solution
-1 :).
> - Have some sort of transient undercloud (I'm thinking a container) which
>   exists only for the duration of deploying the all-in-one overcloud, on
>   the local (pre-provisioned, e.g. not via Ironic) host. Some prototyping
>   of this approach has already happened [1], which I think James Slagle
>   has used to successfully deploy TripleO templates on pre-provisioned
>   nodes.
Right, so my thinking was to leverage the work (or some part of it) that
Jeff Peeler has done on the standalone Heat container as a bootstrap
mechanism. Once that container is up, you can use Heat to deploy to
pre-provisioned nodes that already have an OS installed. Not only would
this be nice for PoCs, there are also real use cases where dedicated
provisioning networks are not available, or there's no access to
ipmi/drac/whatever.
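To give a feel for what "deploy to pre-provisioned nodes" means in Heat
terms, here is a minimal illustrative template: a SoftwareDeployment applied
to a server identified only by an ID parameter, with no Nova/Ironic
provisioning involved. Resource names and the server-id plumbing are my
assumptions, not the actual WIP templates:

```yaml
heat_template_version: 2014-10-16

parameters:
  server_id:
    type: string
    description: ID of the pre-provisioned node as known to Heat

resources:
  hello_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        echo "configured by heat" > /tmp/heat-was-here

  hello_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: hello_config}
      server: {get_param: server_id}
```

The deployment only takes effect once os-collect-config on the node is
polling the right stack, which is exactly the manual step discussed below.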
Perhaps off topic, but are people still interested in the Heat standalone
container work? I never received any replies on the openstack-dev list when
I asked for direction on how best to integrate with TripleO. I need to
bring it up to date to work with recent changes in Kolla (and will do so if
people are interested, after getting the Ironic containers completed).
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071613.html
It would also provide a solution for how to orchestrate an HA undercloud.
Note that the node running the bootstrap Heat container itself could
potentially be reused, providing for a true all-in-one.
I do have some hacked-on templates I was working with, and had made enough
progress to get the pre-provisioned nodes to start applying the
SoftwareDeployments from Heat after I manually configured os-collect-config
on each node.

I'll get those in order and push up a WIP patch.
There are still a lot of wrinkles here: things like how to orchestrate the
manual config you still have to do on each node (you have to configure
os-collect-config with a stack id), assumptions about network setup, etc.
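For anyone following along, that manual per-node step amounts to pointing
os-collect-config's heat collector at the Heat API and the specific stack,
roughly like the following in /etc/os-collect-config.conf. Option names are
from memory and the values are placeholders, so treat this as a sketch
rather than a reference:

```ini
# Illustrative only: per-node os-collect-config pointing at a Heat stack.
[DEFAULT]
collectors = heat
command = os-refresh-config

[heat]
auth_url = http://192.0.2.1:5000/v3
user_id = <node user id>
password = <password>
project_id = <project id>
stack_id = <overcloud stack id>
resource_name = <this node's resource name in the stack>
```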
>
> The latter approach is quite interesting, because it potentially maintains
> a greater degree of symmetry between the minimal PoC install and real
> production deployments (e.g. you'd use the same heat templates etc); it
> could also potentially provide easier access to features as they are
> added to overcloud templates (container integration, as an example), vs
> integrating new features in two places.
>
> Overall at this point I think there are still many unanswered questions
> around enabling the PoC use-case for rdo-manager (and, more generally
> making TripleO upstream more easily consumable for these kinds of
> use-cases). I hope/expect we'll have a TripleO session on this at the
> forthcoming summit, where we refine the various ideas people have been
> investigating, and define the path forward wrt PoC deployments.
So I did just send out the etherpad link for our session planning for Tokyo
this morning to openstack-dev :)
https://etherpad.openstack.org/p/tripleo-mitaka-proposed-sessions
I'll add a bullet item about this point.
>
> Hopefully that is somewhat helpful, and thanks again for re-starting this
> discussion! :)
>
> Steve
>
> [1] https://etherpad.openstack.org/p/noop-softwareconfig
>
--
-- James Slagle