The current AHC workflow[1] requires us to send already-introspected
nodes back to ironic-discoverd if we change the matching rules after
the initial introspection step.
This is problematic because, if we want to match on the benchmark data,
the benchmarks have to be re-run. Currently the edeploy plugin[2] for
ironic-discoverd does the matching, and it only deals with data posted
by the discovery ramdisk. Running the benchmarks can be very
time-consuming on a typical production server, and we already store the
results in the ironic db. Since the benchmark results should not vary
much between runs, re-running them for every matching pass is wasted time.
One solution would be to add a feature to the benchmark analysis tool,
ironic-cardiff,[3] to do the subsequent rounds of matching. This should
be straightforward, as the tool already gets an ironic client and
already requires the hardware library, which contains the matching
logic. A rough sketch of what that could look like is below.
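To make the idea concrete, here is a minimal sketch of such a
re-matching pass. Treat it as illustrative only: where the stored
benchmark facts live (node.extra['edeploy_facts'] here) and the exact
hardware.matcher entry point and signature are my assumptions, not
settled interfaces.

    # Illustrative sketch only: the 'edeploy_facts' field and the
    # matcher.match_all() signature are assumptions, not settled APIs.
    from hardware import matcher
    from ironicclient import client as ironic_client


    def rematch_nodes(profiles, **auth_kwargs):
        """Re-run profile matching against benchmark data already
        stored in the ironic db, without re-running the ramdisk.

        profiles is an ordered list of (profile_name, spec_rules)
        pairs, checked first-match-wins.
        """
        ironic = ironic_client.get_client(1, **auth_kwargs)
        for node in ironic.node.list(detail=True):
            facts = node.extra.get('edeploy_facts')
            if not facts:
                continue  # node was never introspected; skip it
            for profile, spec in profiles:
                # matcher.match_all() may consume matched lines, so
                # hand it a copy of the facts for each profile tried.
                if matcher.match_all(list(facts), spec, {}, {}):
                    ironic.node.update(node.uuid, [{
                        'op': 'add',
                        'path': '/properties/profile',
                        'value': profile,
                    }])
                    break

The point is just that everything this needs (an authenticated ironic
client and the hardware matching logic) is already in cardiff's
dependency set, so no new round trip through the discovery ramdisk
would be required.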
I would like to gather feedback on whether this approach seems
reasonable, or if there are any better suggestions to solve this problem.
[1] https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/...
[2] https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discove...
[3] https://github.com/rdo-management/rdo-ramdisk-tools/blob/master/rdo_ramdi...