<div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 8, 2015 at 10:42 AM, James Slagle <span dir="ltr"><<a href="mailto:jslagle@redhat.com" target="_blank">jslagle@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class=""><div class="h5">On Mon, Sep 07, 2015 at 02:07:56PM +0100, Steven Hardy wrote:<br>
> Hi Tim,<br>
><br>
> On Sun, Sep 06, 2015 at 07:35:30AM +0000, Tim Bell wrote:<br>
> > Reading the RDO September newsletter, I noticed a mail thread<br>
> > (<a href="https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html" rel="noreferrer" target="_blank">https://www.redhat.com/archives/rdo-list/2015-August/msg00032.html</a>) on<br>
> > the future of packstack vs rdo-manager.<br>
> ><br>
> > We use packstack to spin up small OpenStack instances for development and<br>
> > testing. Typical cases are to have a look at the features of the latest<br>
> > releases or do some prototyping of an option we've not tried yet.<br>
> ><br>
> It was not clear to me from the mailing list thread how this<br>
> could be done using rdo-manager unless you already have the undercloud<br>
> configured by RDO.<br>
><br>
> > Has there been any further discussions around packstack future ?<br>
><br>
> Thanks for raising this - I am aware that a number of folks have been<br>
> thinking about this topic (myself included), but I don't think we've<br>
> reached a definitive consensus re the path forward yet.<br>
><br>
> Here's my view on the subject:<br>
><br>
> 1. Packstack is clearly successful, useful to a lot of folks, and does<br>
> satisfy a use-case currently not well served via rdo-manager, so IMO we<br>
> absolutely should maintain it until that is no longer the case.<br>
><br>
> 2. Many people are interested in easier ways to stand up PoC environments<br>
> via rdo-manager, so we do need to work on ways to make that easier (or even<br>
> possible at all in the single-node case).<br>
><br>
> 3. It would be really great if we could figure out (2) in such a way as to<br>
> enable a simple migration path from packstack to whatever the PoC mode of<br>
> rdo-manager ends up being; for example, perhaps we could have an rdo-manager<br>
> interface capable of consuming a packstack answer file?<br>
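Purely as a hypothetical sketch to make that idea concrete (no such interface exists in rdo-manager today, and the parameter naming below is invented): packstack answer files are INI-style with a [general] section, so a consuming interface could start by remapping the CONFIG_* keys into deployment parameters.

```python
# Hypothetical sketch only: rdo-manager has no such interface today.
# Packstack answer files are INI-style with a [general] section, so a
# first step could be remapping CONFIG_* keys into deployment parameters.
import configparser

ANSWERS = """\
[general]
CONFIG_CONTROLLER_HOST=192.0.2.1
CONFIG_COMPUTE_HOSTS=192.0.2.2,192.0.2.3
"""

def answers_to_params(text):
    cp = configparser.ConfigParser()
    cp.optionxform = str  # preserve the upper-case CONFIG_* keys
    cp.read_string(text)
    params = {}
    for key, value in cp["general"].items():
        # CONFIG_CONTROLLER_HOST -> ControllerHost (invented naming scheme)
        name = "".join(part.capitalize() for part in key.split("_")[1:])
        params[name] = value.split(",") if "," in value else value
    return params

print(answers_to_params(ANSWERS))
# -> {'ControllerHost': '192.0.2.1', 'ComputeHosts': ['192.0.2.2', '192.0.2.3']}
```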
><br>
> Re the thread you reference, it raises a number of interesting questions,<br>
> particularly the similarities/differences between an all-in-one packstack<br>
> install and an all-in-one undercloud install;<br>
><br>
> From an abstract perspective, installing an all-in-one undercloud looks a<br>
> lot like installing an all-in-one packstack environment, both sets of tools<br>
> take a config file, and create a puppet-configured all-in-one OpenStack.<br>
><br>
> But there's a lot of potential complexity related to providing a<br>
> flexible/configurable deployment (like packstack) vs an opinionated<br>
> bootstrap environment (e.g. the current instack undercloud environment).<br>
<br>
</div></div>Besides some TripleO-related history (which I won't bore everyone<br>
with), the above is a big reason why we didn't originally just use packstack to<br>
install the all-in-one undercloud.<br>
<br>
As you point out, the undercloud installer is very opinionated by design. It's<br>
not meant to be a flexible all-in-one *OpenStack* installer, nor do I think we<br>
want to turn it into one. That would just end up reimplementing packstack.<br>
<span class=""><br>
><br>
> There are a few possible approaches:<br>
><br>
> - Do the work to enable a more flexibly configured undercloud, and just<br>
> have that as the "all in one" solution<br>
<br>
</span>-1 :).<br>
<span class=""><br>
> - Have some sort of transient undercloud (I'm thinking a container) which<br>
> exists only for the duration of deploying the all-in-one overcloud, on<br>
> the local (pre-provisioned, e.g not via Ironic) host. Some prototyping<br>
> of this approach has already happened [1] which I think James Slagle has<br>
> used to successfully deploy TripleO templates on pre-provisioned nodes.<br>
<br>
</span>Right, so my thinking was to leverage the work (or some part of it) that Jeff<br>
Peeler has done on the standalone Heat container as a bootstrap mechanism. Once<br>
that container is up, you can use Heat to deploy to preprovisioned nodes that<br>
already have an OS installed. Not only would this be nice for PoCs, there are<br>
also real use cases where dedicated provisioning networks are not available, or<br>
there's no access to IPMI/DRAC/whatever.<br></blockquote><div><br></div><div>Perhaps off topic, but are people still interested in the Heat standalone container work? I never received any replies on the openstack-dev list when I asked for direction on how best to integrate with TripleO. I need to bring it up to date to work with recent changes in Kolla (and will do so, if people are interested, after getting the Ironic containers completed).</div><div><br></div><div><a href="http://lists.openstack.org/pipermail/openstack-dev/2015-August/071613.html">http://lists.openstack.org/pipermail/openstack-dev/2015-August/071613.html</a><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
It would also provide a way to orchestrate an HA undercloud.<br>
<br>
Note that the node running the bootstrap Heat container itself could<br>
potentially be reused, providing for the true all-in-one.<br>
<br>
I do have some hacked-on templates I was working with, and had made enough<br>
progress to get the preprovisioned nodes to start applying the<br>
SoftwareDeployments from Heat after I manually configured os-collect-config on<br>
each node.<br>
<br>
I'll get those in order and push up a WIP patch.<br>
<br>
There are still a lot of wrinkles here, like how to orchestrate the manual<br>
config you have to do on each node (os-collect-config has to be configured<br>
with a stack id), assumptions about network setup, etc.<br><span class=""><br>
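For context, that manual per-node step amounts to pointing os-collect-config's heat collector at the stack. A rough sketch of what each node's /etc/os-collect-config.conf needs follows; the values are placeholders, and the option names should be checked against the installed os-collect-config version:

```ini
[DEFAULT]
collectors = heat
command = os-refresh-config

[heat]
auth_url = http://192.0.2.1:5000/v3
user_id = <deployment user uuid>
password = <deployment password>
project_id = <project uuid>
stack_id = <stack id>
resource_name = <server resource name>
```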
><br>
> The latter approach is quite interesting, because it potentially maintains<br>
> a greater degree of symmetry between the minimal PoC install and real<br>
> production deployments (e.g. you'd use the same heat templates etc), it<br>
> could also potentially provide easier access to features as they are added<br>
> to overcloud templates (container integration, as an example), vs<br>
> integrating new features in two places.<br>
><br>
> Overall at this point I think there are still many unanswered questions<br>
> around enabling the PoC use-case for rdo-manager (and, more generally<br>
> making TripleO upstream more easily consumable for these kinds of<br>
> use-cases). I hope/expect we'll have a TripleO session on this at the<br>
> forthcoming summit, where we refine the various ideas people have been<br>
> investigating, and define the path forward wrt PoC deployments.<br>
<br>
</span>So I did just send the etherpad link for our Tokyo session planning to<br>
openstack-dev this morning :)<br>
<br>
<a href="https://etherpad.openstack.org/p/tripleo-mitaka-proposed-sessions" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/tripleo-mitaka-proposed-sessions</a><br>
<br>
I'll add a bullet item about this point.<br>
<span class=""><br>
><br>
> Hopefully that is somewhat helpful, and thanks again for re-starting this<br>
> discussion! :)<br>
><br>
> Steve<br>
><br>
> [1] <a href="https://etherpad.openstack.org/p/noop-softwareconfig" rel="noreferrer" target="_blank">https://etherpad.openstack.org/p/noop-softwareconfig</a><br>
><br>
</span>--<br>
-- James Slagle<br>
--</blockquote></div><br></div></div>