We migrated an OpenStack installation for one of our customers to TripleO RDO when Rocky was released. Back then, deployment times were fine, as the whole cluster didn't have many nodes yet.
In the meantime we upgraded to Ussuri and now run 3 controllers and 44 compute nodes. Our deployment times increased sharply with each set of compute nodes we added; with our current setup, a complete deployment run takes about 15 to 16 hours. Right now our clusters contain the following resources:
VMs: ~2100
This sounds like it's related to https://bugs.launchpad.net/tripleo/+bug/1915761. It should be fixed in the latest versions, and we did backport the fix to Train, so it would be interesting to see if you're missing some of those patches. We've had reports of updates taking about 4.5 hours on newer versions of Train, so your numbers suggest you may be missing patches related to that bug, or that there is an execution configuration problem.
We have implemented ARA for now so we can get exact measurements of each ansible-playbook run and see what is taking the most time. My question is: how big are your production OpenStack environments, and how long does a deployment take you?
Are you running ansible-playbook by hand? And do you have an ansible.cfg? We added an `openstack tripleo config generate ansible` command that generates a starting ansible.cfg similar to what we use in the Mistral execution.
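For reference, the kinds of ansible.cfg settings that tend to matter most at this node count are fork count, SSH pipelining, and fact caching. The exact file that `openstack tripleo config generate ansible` produces varies by version; the fragment below is only an illustrative sketch with assumed values, not the generated output:

```ini
[defaults]
# Number of hosts Ansible configures in parallel; too low a value
# serializes a 44-node run. Value here is an example, not a recommendation.
forks = 25
# Cache gathered facts between plays/runs instead of re-gathering.
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
# Print per-task timing so slow tasks stand out.
callback_whitelist = profile_tasks

[ssh_connection]
# Pipelining cuts the number of SSH operations per task significantly.
pipelining = True
retries = 8
```

Comparing your effective configuration (`ansible-config dump --only-changed`) against the generated one should show quickly whether something like forks or pipelining is off.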
Which methods do you use to scale up compute nodes? (Spoiler: --skip-deploy-identifier doesn't seem to work properly.)
Is blacklisting all the other compute nodes the right move? Do you blacklist the controllers as well?
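For context, blacklisting in TripleO is done with the DeploymentServerBlacklist parameter in an environment file passed to the deploy command. A minimal sketch (the file name and server names below are examples for an environment like ours, not anything generated):

```yaml
# blacklist.yaml (hypothetical file name), passed via -e to
# `openstack overcloud deploy`. Servers listed here are skipped
# for software configuration during the run.
parameter_defaults:
  DeploymentServerBlacklist:
    - overcloud-controller-0
    - overcloud-controller-1
    - overcloud-controller-2
    - overcloud-novacompute-0
```

My understanding is that blacklisted nodes are skipped entirely, including for cluster-wide tasks, so blacklisting controllers in particular comes with caveats worth checking against the TripleO documentation for your release.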