On Saturday, October 13, 2018, Cody <codeology.lab(a)gmail.com> wrote:
Hi everyone,
Is it possible to define CRUSH placement rules and apply to different
pools while using TripleO to deploy an overcloud with Ceph
integration?
I wish to set the "vms" pool to use SSDs and "volumes" pool to use
HDDs. On a pre-existing Ceph cluster, I can define CRUSH placement
rules using device-class and apply the corresponding rules when creating
pools. But I don't know how to do so with TripleO.
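For reference, the manual approach I have in mind looks roughly like the
following (rule names, pool names, and PG counts are just examples):

    # replicated rules restricted to a device class, host failure domain
    ceph osd crush rule create-replicated ssd_rule default host ssd
    ceph osd crush rule create-replicated hdd_rule default host hdd

    # pools created against those rules
    ceph osd pool create vms 128 128 replicated ssd_rule
    ceph osd pool create volumes 128 128 replicated hdd_rule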
Could someone shed light on this?
CRUSH placement information may be passed to specific nodes. You can identify
specific nodes in TripleO by using node-specific overrides, as described here:
https://docs.openstack.org/tripleo-docs/latest/install/advanced_deploymen...
In the documentation above, a specific devices list is passed to a specific
node. However, you may pass other properties to that node as well, including
osd_crush_location. For example:
{"32C2BC31-F6BB-49AA-971A-377EFDFDB111": {"osd_crush_location":
{"root":
"standard_root", "rack": "rack1_std", "host":
"lab-ceph01"}},
TripleO will then map the node's UUID to the IP used in the ceph-ansible
inventory and pass the node-specific variable overrides.
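If you need to find the UUID for a given node, the same guide shows pulling
it from the node's introspection data, roughly along these lines (verify the
jq path against the doc for your release):

    openstack baremetal introspection data save <node> | jq .extra.system.product.uuid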
You'll also want to use CephAnsibleExtraConfig to override specific
ceph-ansible variables for all nodes, e.g.
CephAnsibleExtraConfig:
  create_crush_tree: true
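As a rough, illustrative sketch of how the pieces could fit together
(crush_rule_config, crush_rules, and rule_name are ceph-ansible/TripleO
variables, but whether the class key is honored depends on your ceph-ansible
version, so treat the values below as a starting point, not a recipe):

    parameter_defaults:
      CephAnsibleExtraConfig:
        create_crush_tree: true
        crush_rule_config: true
        crush_rules:
          - name: ssd_rule
            root: default
            type: host
            class: ssd
            default: false
          - name: hdd_rule
            root: default
            type: host
            class: hdd
            default: true
      CephPools:
        - name: vms
          pg_num: 128
          rule_name: ssd_rule
        - name: volumes
          pg_num: 128
          rule_name: hdd_rule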
More info at
https://docs.openstack.org/tripleo-docs/latest/install/advanced_deploymen...
Overall, if you know how to make ceph-ansible do what you need it to do,
then TripleO can pass the variables to ceph-ansible to achieve it.
John
Best regards,
Cody