[rdo-users] Scaling integrated Ceph cluster with post deployment customization

Cody codeology.lab at gmail.com
Tue Oct 23 00:50:43 UTC 2018


Thank you, John, for the reply.

I am unsure how much I should include in an environment file when it
comes to scaling a Ceph cluster. Should I include every customization
made to the cluster since the previous deployment? In my case, I have
altered the CRUSH hierarchy, changed the failure domains, and created
an EC pool with a custom EC rule. Do I need to account for all of
those?
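
For illustration, here is a rough sketch of the kind of environment
file I have in mind. I am assuming the ceph-ansible driven TripleO
parameters (CephPools, CephAnsibleExtraConfig) are the right place to
express such customizations; the pool and rule names below are just
placeholders from my lab, and I am not sure the keys are exactly right:

    # ceph-customizations.yaml -- sketch only; I am guessing at the
    # exact keys for the EC rule and the CRUSH-related variables.
    parameter_defaults:
      CephPools:
        - name: mypool_ec
          pg_num: 128
          rule_name: my_ec_rule      # the custom EC rule I created
          application: rbd
      # Extra variables passed through to ceph-ansible, e.g. for the
      # changed CRUSH hierarchy and failure domains.
      CephAnsibleExtraConfig:
        create_crush_tree: true

My worry is whether leaving any of this out of the files passed at
scale-out time would cause the stack update to revert those changes.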

Thank you very much.

Best regards,
Cody

On Mon, Oct 22, 2018 at 7:03 AM John Fulton <johfulto at redhat.com> wrote:
>
> No, I don't see why it would hurt the existing settings, provided you continue to pass the CRUSH data environment files.
>
>   John
>
> On Sun, Oct 21, 2018, 10:08 PM Cody <codeology.lab at gmail.com> wrote:
>>
>> Hello folks,
>>
>> I have made some changes to a Ceph cluster initially deployed with
>> OpenStack using TripleO. Specifically, I have changed the CRUSH map
>> and failure domain for the pools used by the overcloud. Now, if I
>> attempt to add new storage nodes (with identical specs) to the cluster
>> simply by increasing the CephStorageCount, would that mess up the
>> existing settings?
>>
>> Thank you very much.
>>
>> Best regards,
>> Cody
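
To make sure I understand the suggestion above correctly: the scale-out
would just be a re-run of the original deploy command with the same
environment files, plus the higher node count, roughly like this sketch
(the file names are placeholders from my own setup):

    # Sketch only: ceph-customizations.yaml carries the CRUSH/pool
    # settings from the initial deployment, and node-counts.yaml simply
    # raises CephStorageCount.
    openstack overcloud deploy \
      --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
      -e ~/templates/ceph-customizations.yaml \
      -e ~/templates/node-counts.yaml

Is that the right way to think about it?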

