[rdo-users] Creating CRUSH placement rules with TripleO
Cody
codeology.lab at gmail.com
Sat Oct 13 20:04:24 UTC 2018
Hello John,
Thank you for pointing me in the right direction. I wish I were more
knowledgeable about using ceph-ansible for making customizations at
deployment time.
After going through the docs, I came up with the following environment
file in the hope of assigning SSDs to the "vms" pool and HDDs to the
"volumes" pool using TripleO.
parameter_defaults:
  # Create CRUSH rules using device_class {hdd|ssd} supported by the
  # Luminous release
  CephAnsibleExtraConfig:
    crush_rule_config: true
    crush_rule_hdd:
      name: replicated_hdd
      root: default
      type: host
      device_class: hdd
      default: false
    crush_rule_ssd:
      name: replicated_ssd
      root: default
      type: host
      device_class: ssd
      default: false
    crush_rules:
      - "{{ crush_rule_hdd }}"
      - "{{ crush_rule_ssd }}"
    create_crush_tree: true
  # Create overcloud pools with custom CRUSH rules {replicated_hdd |
  # replicated_ssd}
  CephPools:
    - name: vms
      rule_name: replicated_ssd
    - name: volumes
      rule_name: replicated_hdd
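For comparison, the rules and pool assignments this file asks ceph-ansible to create would correspond roughly to the following manual commands on a Luminous cluster (a sketch for sanity-checking the intent, not what ceph-ansible literally executes):

```
# Create replicated rules constrained to a device class:
#   ceph osd crush rule create-replicated <name> <root> <failure-domain> <class>
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Point each pool at its rule:
ceph osd pool set vms crush_rule replicated_ssd
ceph osd pool set volumes crush_rule replicated_hdd
```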
Does it look right?
Thank you very much and have a good weekend!
Best regards,
Cody
On Sat, Oct 13, 2018 at 3:33 AM John Fulton <johfulto at redhat.com> wrote:
>
> On Saturday, October 13, 2018, Cody <codeology.lab at gmail.com> wrote:
>>
>> Hi everyone,
>>
>> Is it possible to define CRUSH placement rules and apply to different
>> pools while using TripleO to deploy an overcloud with Ceph
>> integration?
>>
>> I wish to set the "vms" pool to use SSDs and "volumes" pool to use
>> HDDs. On a pre-existing Ceph cluster, I can define CRUSH placement
>> rules using device-class and apply the corresponding rules when
>> creating pools. But I don't know how to do so with TripleO.
>>
>> Could someone shed light on this?
>
>
> CRUSH rules may be passed to specific nodes. You may identify specific nodes in TripleO by using node-specific overrides as per:
>
> https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/node_specific_hieradata.html
>
> In the documentation above, a specific devices list is passed to a specific node. However, you may pass other properties to the specific node, including the osd_crush_location. For example:
>
> {"32C2BC31-F6BB-49AA-971A-377EFDFDB111": {"osd_crush_location": {"root": "standard_root", "rack": "rack1_std", "host": "lab-ceph01"}},
>
> TripleO will then map the node's UUID to the IP address used in the ceph-ansible inventory and pass the node-specific variable overrides.
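Expressed as a TripleO environment file, the per-node fragment above would look roughly like the following (a sketch that only wraps the data John shows; check the linked node-specific hieradata doc for the exact NodeDataLookup syntax in your release, and note the UUID is the system UUID reported by introspection):

```
parameter_defaults:
  NodeDataLookup: >
    {"32C2BC31-F6BB-49AA-971A-377EFDFDB111":
      {"osd_crush_location":
        {"root": "standard_root", "rack": "rack1_std", "host": "lab-ceph01"}}}
```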
>
> You'll also want to use CephAnsibleExtraConfig to override specific ceph-ansible variables for all nodes, e.g.
>
> CephAnsibleExtraConfig:
> create_crush_tree: true
>
> More info at https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/ceph_config.html
>
> Overall, if you know how to make ceph-ansible do what you need it to do, then TripleO can pass the variables to ceph-ansible to achieve it.
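Once the overcloud is deployed, the result can be checked from any node with the Ceph admin keyring, for example (names assume the rules defined earlier in the thread):

```
ceph osd crush rule ls                 # should include replicated_hdd and replicated_ssd
ceph osd crush tree --show-shadow      # shows the per-device-class shadow trees
ceph osd pool get vms crush_rule       # expected: crush_rule: replicated_ssd
ceph osd pool get volumes crush_rule   # expected: crush_rule: replicated_hdd
```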
>
> John
>
>>
>>
>> Best regards,
>> Cody
>> _______________________________________________
>> users mailing list
>> users at lists.rdoproject.org
>> http://lists.rdoproject.org/mailman/listinfo/users
>>
>> To unsubscribe: users-unsubscribe at lists.rdoproject.org