On Mon, Oct 22, 2018 at 8:51 PM Cody <codeology.lab(a)gmail.com> wrote:
Thank you John for the reply.
I am unsure what I should include in an environment file when it
comes to scaling a Ceph cluster. Should I include every
customization made to the cluster since the previous deployment? In my
case, I have altered the CRUSH hierarchy, changed failure domains, and
created an EC pool with a custom EC rule. Do I need to account for all
of those?
In short: yes.
In long: in general with TripleO, if you deploy and include (via a -e)
N environment files and you re-run 'openstack overcloud deploy ...'
you must include the same N files or you'd be asking TripleO to change
something about your deployment. The ceph-ansible integration assumes
the same. ceph-ansible will re-run the site.yml playbook and
idempotence will keep things the same unless you change the input
variables. So if you defined the CRUSH hierarchy in an environment
file, then please include the same environment file. Similarly, if you
defined a pool with the CephPools parameter, then please keep that
list of pools unchanged. How exactly things will behave if you don't
is undefined and may depend on implementation details of the tasks.
E.g. ceph-ansible isn't going to remove a pool just because it's absent
from the pools list, but you'll be on the safe side if you reassert the
same inputs consistently with each update, as this is how both tools
are tested.
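For illustration, re-asserting the same inputs on a scale-out run might
look like the sketch below. All file names, pool names, and the EC rule
name are made up for the example, and the exact keys accepted under
CephPools depend on your TripleO/ceph-ansible versions, so treat this as
a shape, not a recipe:

```shell
# Sketch only: ceph-custom-pools.yaml, my_ec_pool, my_ec_rule, and
# crush-hierarchy.yaml are hypothetical placeholders for this example.

# An environment file that re-asserts the custom pool list via the
# TripleO CephPools parameter, so a re-run of the deploy keeps the
# pool definition identical to the previous deployment.
cat > ceph-custom-pools.yaml <<'EOF'
parameter_defaults:
  CephPools:
    - name: my_ec_pool          # hypothetical EC pool created earlier
      rule_name: my_ec_rule     # the custom EC CRUSH rule
      application: rbd
EOF

# The scale-out run must pass the SAME -e files as the original deploy,
# plus whatever raises CephStorageCount. Shown as a comment because it
# only makes sense against a real undercloud:
#
#   openstack overcloud deploy --templates \
#     -e ceph-custom-pools.yaml \
#     -e crush-hierarchy.yaml
```

The point is that every `-e` file from the original deployment reappears
unchanged; only the node count changes.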
John
Thank you very much.
Best regards,
Cody
On Mon, Oct 22, 2018 at 7:03 AM John Fulton <johfulto(a)redhat.com> wrote:
>
> No, I don't see why it would hurt the existing settings, provided you
> continue to pass the CRUSH data environment files.
>
> John
>
> On Sun, Oct 21, 2018, 10:08 PM Cody <codeology.lab(a)gmail.com> wrote:
>>
>> Hello folks,
>>
>> I have made some changes to a Ceph cluster initially deployed with
>> OpenStack using TripleO. Specifically, I have changed the CRUSH map
>> and failure domain for the pools used by the overcloud. Now, if I
>> attempt to add new storage nodes (with identical specs) to the cluster
>> simply by increasing the CephStorageCount, would that mess up the
>> existing settings?
>>
>> Thank you very much.
>>
>> Best regards,
>> Cody
>> _______________________________________________
>> users mailing list
>> users(a)lists.rdoproject.org
>> http://lists.rdoproject.org/mailman/listinfo/users
>>
>> To unsubscribe: users-unsubscribe(a)lists.rdoproject.org