> a) Undercloud on one VM, single overcloud controller on another VM,
> single compute node on another VM (using nested virt, or just plain
> emulation)
I try to stay away from nested KVM. Every now and then I (or someone else)
will try it and report that it works for a bit, but it ends up in various
kernel crashes/panics if the environment stays up for too long.
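For anyone wanting to check whether nested KVM is even enabled on their hypervisor host before going down that road, a quick sysfs check works (standard Linux paths; the module name depends on Intel vs. AMD):

```shell
# Read the nested-virt module parameter; "Y" or "1" means nested KVM is on.
# kvm_intel on Intel hosts, kvm_amd on AMD; neither file exists if the KVM
# modules aren't loaded, in which case we report "unavailable".
nested=$(cat /sys/module/kvm_intel/parameters/nested 2>/dev/null \
      || cat /sys/module/kvm_amd/parameters/nested 2>/dev/null \
      || echo "unavailable")
echo "nested KVM: $nested"
```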
For some (again need to look at specific personas), _emulated_ Instances
might be good enough. (i.e. no nested KVM and instead using qemu
emulation on the virtual Compute Node)
It's not fast, but it is enough to show end to end usage of the system,
in a fairly minimal hardware footprint.
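For reference, falling back to plain emulation is just a nova.conf setting on the virtual Compute Node (a sketch; the exact file location can vary by deployment):

```ini
# /etc/nova/nova.conf on the (virtual) Compute Node:
# use plain qemu emulation instead of KVM acceleration.
[libvirt]
virt_type = qemu
```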
As for the stability of nested KVM... Kashyap, any thoughts on this?
We did try with it enabled in an OpenStack cloud itself during a hackfest
event, and had planned on giving each participant a 32 GB VM (spawned by
Nova) that had KVM support. They would then use that to do an rdo-manager
HA deployment in virt. It hummed along quite nicely initially, but started
experiencing hard lockups before long.
>
> b) A second variation on the above would be to run the 3-node controller
> HA setup, which means 1 undercloud, 3 overcloud controllers + 1 compute
>
> The question is... what is the minimum amount of RAM that you can run an
> overcloud controller with? 4GB? Or can that be squeezed to 2 or 3GB
> just for playing around purposes?
>
> What is the minimum amount of RAM you need for the undercloud node?
4GB, and even that now results in OOM'ing after a couple of deployments
unless swap is enabled on the undercloud.
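For anyone hitting that, a swap file on the undercloud is a quick workaround (a sketch; the 4G size and /swapfile path are just examples, and these commands need root):

```shell
# Create and enable a 4 GB swap file (size/path are examples; run as root).
fallocate -l 4G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile         # swap files must not be world-readable
mkswap /swapfile            # write swap metadata
swapon /swapfile            # enable it immediately
swapon --show               # confirm it is active
```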
Ack
>
> If 4GB per VM, then a) maybe can be done on a 16GB system, while b)
> needs 32GB
You can do a) with 12GB at most (I do on my laptop), since not all of the
memory is actually in use, and you can give the compute node even less than
4GB. KSM also helps.
Ah, good point about KSM. The last time I ran it (admittedly more than a
year ago) all it did was suck massive amounts of CPU cycles from me, but
maybe it's gotten a bit more efficient since then? :)
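For the curious, KSM's activity is easy to inspect from sysfs before deciding whether it's earning its CPU cost (standard Linux kernel paths; the reads fall back to "unavailable" if KSM isn't present):

```shell
# run: 1 means KSM is actively scanning/merging pages; 0 means it's off.
ksm_run=$(cat /sys/kernel/mm/ksm/run 2>/dev/null || echo "unavailable")
# pages_sharing: how many pages are currently deduplicated (higher = more savings).
ksm_shared=$(cat /sys/kernel/mm/ksm/pages_sharing 2>/dev/null || echo "unavailable")
echo "KSM run flag:      $ksm_run"
echo "KSM pages sharing: $ksm_shared"
```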
>
> If we could squeeze controller and undercloud nodes into 3GB each, then
Honestly, we haven't spent a lot of (any) time tuning for memory usage, so
it might be possible, but I'm a little doubtful.
Ack
> it might be possible to run b) on a 16GB machine, opening up
> experimentation with RDO Manager in a real HA configuration to lots more
> people
With swap enabled on the host, I've done b) on a 16GB machine. I don't
recall how much swap ended up getting used.
Thx