On Thu, Jun 9, 2016 at 10:23 AM, Javier Pena <javier.pena(a)redhat.com> wrote:
----- Original Message -----
> On Wed, Jun 8, 2016 at 6:33 PM, Ivan Chavero <ichavero(a)redhat.com> wrote:
> > I could be wrong, but Packstack can already do most of this. It may
> > need a few more command-line options or small tweaks to the code, but
> > this is not far from the current Packstack options.
>
> Right now, Packstack has a lot of code and logic to connect to
> additional nodes and run tasks on them.
To be honest, the amount of code is not that big (at least to me).
On a quick check over the refactored version, I see
https://github.com/javierpena/packstack/blob/feature/manifest_refactor/pa...
could be simplified (maybe removed), and
https://github.com/javierpena/packstack/blob/feature/manifest_refactor/pa...
would need to be rewritten to support a single node. Everything else is
small simplifications in the plugins to assume all hosts are the same.
> Packstack itself connects to compute hosts to install Nova, and it does
> the same for the other kinds of hosts.
>
> What I am saying is that Packstack should only ever be able to install
> (efficiently) services on "localhost".
>
> Hence, me, as a user (with Ansible or manually), could do something
> like I mentioned before:
> - Login to Server 1 and run "packstack --install-rabbitmq=y
> --install-mariadb=y"
> - Login to Server 2 and run "packstack --install-keystone=y
> --rabbitmq-server=server1 --database-server=server1"
> - Login to Server 3 and run "packstack --install-glance=y
> --keystone-server=server2 --database-server=server1
> --rabbitmq-server=server1"
> - Login to Server 4 and run "packstack --install-nova=y
> --keystone-server=server2 --database-server=server1
> --rabbitmq-server=server1"
> (etc)
>
> This would work, allow multi node without having all the multi node
> logic embedded and handled by Packstack itself.
Doing this would require adding a similar layer of complexity, but in the
Puppet code instead of the Python code. Right now, we assume that every
API service is running on config['CONTROLLER_HOST']; with your proposal,
we would need the current host plus separate variables (and Hiera
processing in Python) so that each service gets its own host variable.
I still think it's a good idea, but it wouldn't reduce complexity or the
associated CI coverage concerns.
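To make the per-service idea concrete, here is a minimal sketch of what the Python side could do: map each service to the host providing it, defaulting to the local host when no explicit server is given. The option and variable names below are hypothetical, not real Packstack options.

```python
# Hypothetical sketch: replace the single CONTROLLER_HOST assumption with
# one Hiera host variable per service. Names are illustrative only.

def build_service_hosts(config, local_host='localhost'):
    """Map each service's Hiera variable to the host that provides it.

    Falls back to the local host when no CONFIG_<SERVICE>_SERVER option
    is set, matching the 'everything on localhost' default.
    """
    services = ('keystone', 'database', 'rabbitmq')
    return {
        'packstack::%s_host' % svc:
            config.get('CONFIG_%s_SERVER' % svc.upper(), local_host)
        for svc in services
    }
```

The resulting dict would then be written into the hieradata the manifests read, instead of a single controller host entry.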
We could take an easier way and assume we only have 3 roles, as in the current refactored
code: controller, network, compute. The logic would then be:
- By default we install everything, so all in one
- If our host is not CONFIG_CONTROLLER_HOST but is part of CONFIG_NETWORK_HOSTS, we
apply the network manifest
- Same as above if our host is part of CONFIG_COMPUTE_HOSTS
Of course, the last two options would assume a first server is installed as controller.
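The three-role logic above could be sketched in a few lines of Python. The CONFIG_* keys mirror the answer-file options mentioned above; the function itself is illustrative, not existing Packstack code.

```python
# Sketch of the three-role selection logic: controller, network, compute.

def select_role(local_host, config):
    """Decide which manifest set to apply on this host."""
    if local_host == config['CONFIG_CONTROLLER_HOST']:
        # Default case: install everything, i.e. the all-in-one manifests.
        return 'controller'
    if local_host in config.get('CONFIG_NETWORK_HOSTS', ()):
        return 'network'
    if local_host in config.get('CONFIG_COMPUTE_HOSTS', ()):
        return 'compute'
    raise ValueError('%s is not listed in the answer file' % local_host)
```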
This would allow us to reuse the same answer file on all runs (one run per
host, as you proposed), eliminate the ssh code since we would always run
locally, and make some assumptions in the Python code, like expecting OPM
to already be deployed. A contributed Ansible wrapper to automate the runs
would be straightforward to create.
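To show how thin such a wrapper could be, here is a sketch that runs Packstack locally on each host over ssh, controller first, reusing one shared answer file. Only --answer-file is a real Packstack flag; the per-host execution model is the proposal under discussion, not current behavior.

```python
import subprocess

# Illustrative wrapper: each run is purely local on the target host, so
# the wrapper only needs ssh and the shared answer file.

def run_everywhere(hosts, answer_file, runner=subprocess.check_call):
    """Invoke packstack on each host in order; returns the commands run."""
    commands = []
    for host in hosts:  # the controller must be listed first
        cmd = ['ssh', host, 'packstack', '--answer-file', answer_file]
        runner(cmd)
        commands.append(cmd)
    return commands
```

An Ansible playbook would do the same thing more robustly; this just shows how little orchestration is left once each run is local.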
What do you think? Would it be worth the effort?
IMO, the modular deployment model proposed by David is the best approach
for OpenStack installers, and the one I always dreamt about when deploying
OpenStack in real production environments. However, I think moving to it
will be hard and would exceed the PoC-oriented nature of Packstack, where
a controller + compute + network split should suffice.
In fact, I'd say that the best way to go for this modular approach is
with containers and an orchestration tool designed for them (Kubernetes,
right?), but that is a different story. :)
Regards,
Javier
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
>
_______________________________________________
rdo-list mailing list
rdo-list(a)redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list
To unsubscribe: rdo-list-unsubscribe(a)redhat.com