[rdo-users] Creating CRUSH placement rules with TripleO
John Fulton
johfulto at redhat.com
Mon Oct 15 18:00:58 UTC 2018
On Sat, Oct 13, 2018 at 4:04 PM Cody <codeology.lab at gmail.com> wrote:
>
> Hello John,
>
> Thank you for pointing me in the right direction. I wish I were
> more knowledgeable about using ceph-ansible to make customizations
> at deployment time.
>
> After going through the docs, I came up with the following environment
> file in the hope of assigning SSDs to the "vms" pool and HDDs to the
> "volumes" pool using TripleO.
>
> parameter_defaults:
>   CephAnsibleExtraConfig:
>     crush_rule_config: true
So far so good, but I don't know about the following:
>     crush_rule_hdd:
>       name: replicated_hdd
>       root: default
>       type: host
>       device_class: hdd
>       default: false
>     crush_rule_ssd:
>       name: replicated_ssd
>       root: default
>       type: host
>       device_class: ssd
>       default: false
Try something more like this. Also, I assume you want one of them to be
the default so that if you create a pool it ends up using one of the two
crush rules.
crush_rules:
  - name: replicated_hdd
    root: standard_root
    type: host
    default: true
  - name: replicated_ssd
    root: fast_root
    type: host
    default: false
>     crush_rules:
>       - "{{ crush_rule_hdd }}"
>       - "{{ crush_rule_ssd }}"
The environment file is THT, not Ansible, so those Jinja variable
references won't be resolved. I don't think this will work, and you
should just be able to omit the three lines above.
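Putting that together with crush_rule_config and create_crush_tree, the
CephAnsibleExtraConfig section would look roughly like this (just a
sketch; the standard_root/fast_root names assume you build a CRUSH tree
with those roots, e.g. via the node-specific osd_crush_location
overrides discussed further down):

parameter_defaults:
  CephAnsibleExtraConfig:
    crush_rule_config: true
    create_crush_tree: true
    crush_rules:
      - name: replicated_hdd
        root: standard_root
        type: host
        default: true
      - name: replicated_ssd
        root: fast_root
        type: host
        default: false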
>     create_crush_tree: true
Good.
>   CephPools:
>     - name: vms
>       rule_name: replicated_ssd
>     - name: volumes
>       rule_name: replicated_hdd
Yes. You could also add a pg_num per pool to the above.
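For instance, something like this (the pg_num values are placeholders;
size them for your OSD count):

CephPools:
  - name: vms
    rule_name: replicated_ssd
    pg_num: 64
  - name: volumes
    rule_name: replicated_hdd
    pg_num: 128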
Also, watch out for https://bugzilla.redhat.com/show_bug.cgi?id=1638092
John
>
> Thank you very much and have a good weekend!
>
> Best regards,
> Cody
> On Sat, Oct 13, 2018 at 3:33 AM John Fulton <johfulto at redhat.com> wrote:
> >
> > On Saturday, October 13, 2018, Cody <codeology.lab at gmail.com> wrote:
> >>
> >> Hi everyone,
> >>
> >> Is it possible to define CRUSH placement rules and apply them to
> >> different pools while using TripleO to deploy an overcloud with Ceph
> >> integration?
> >>
> >> I wish to set the "vms" pool to use SSDs and "volumes" pool to use
> >> HDDs. On a pre-existing Ceph cluster, I can define CRUSH placement
> >> rules using device classes and apply the corresponding rules when
> >> creating pools. But I don't know how to do so with TripleO.
> >>
> >> Could someone shed light on this?
> >
> >
> > CRUSH rules may be passed to specific nodes. You may identify specific nodes in TripleO by using node-specific overrides as per:
> >
> > https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/node_specific_hieradata.html
> >
> > In the documentation above, a specific devices list is passed to a specific node. However, you may pass other properties to a specific node, including osd_crush_location. For example:
> >
> > {"32C2BC31-F6BB-49AA-971A-377EFDFDB111": {"osd_crush_location": {"root": "standard_root", "rack": "rack1_std", "host": "lab-ceph01"}},
> >
> > TripleO will then map the node's UUID to the IP used in the ceph-ansible inventory and pass the node-specific variable overrides.
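> > In an environment file, that override sits under the NodeDataLookup parameter described in the node-specific hieradata doc above; roughly (the UUID and the root/rack/host names are placeholders):
> >
> > parameter_defaults:
> >   NodeDataLookup: {"32C2BC31-F6BB-49AA-971A-377EFDFDB111": {"osd_crush_location": {"root": "standard_root", "rack": "rack1_std", "host": "lab-ceph01"}}}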
> >
> > You'll also want to use CephAnsibleExtraConfig to override specific ceph-ansible variables for all nodes, e.g.
> >
> > CephAnsibleExtraConfig:
> >   create_crush_tree: true
> >
> > More info at https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/ceph_config.html
> >
> > Overall, if you know how to make ceph-ansible do what you need it to do, then TripleO can pass the variables to ceph-ansible to achieve it.
> >
> > John
> >
> >>
> >>
> >> Best regards,
> >> Cody