On 02/19/2016 09:57 AM, Lars Kellogg-Stedman wrote:
> I'm working on a slightly more in-depth article on this topic, but in
> order for some people (pmyers I'm looking at you) to get started I
/me looks around sheepishly... :)
> wanted to write up some very quick instructions. Forgive me any typos
> in this email because I'd like to send it out before actually running
> through everything locally: while the process is automated, an HA
> deploy can still take quite a while to complete. Also, the fetch of
> the undercloud image *also* takes a chunk of time; there are
> instructions in the tripleo-quickstart README for caching a local copy
> of the image to speed up subsequent installs.
https://github.com/redhat-openstack/tripleo-quickstart/blob/master/README...
Definitely do this.
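For reference, the caching basically boils down to something like this (the
actual image URL is whatever the README points at; the local path matches the
url: setting I added to nodes.yml further down):

sudo mkdir -p /usr/share/quickstart_images/mitaka
# <image-url> is a placeholder -- take the real URL from the README above
sudo curl -L -o /usr/share/quickstart_images/mitaka/undercloud.qcow2 <image-url>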
> You will need a physical host with at least 32GB of RAM. More is
> better, less *may* be possible but you will probably regret it.
Wheee....
MemTotal: 65764284 kB
MemFree: 58529856 kB
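Those numbers are just the first two lines of /proc/meminfo:
grep -E 'MemTotal|MemFree' /proc/meminfo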
> You will also need Ansible 2.0.x, which is what you will get if you
> 'pip install ansible', or install Ansible from updates-testing
> (Fedora) or epel-testing (RHEL/CentOS/...).
Ok, since I'm starting with a pretty vanilla CentOS 7 server with
libvirt, qemu, etc. installed...
# yum install ansible --enablerepo epel-testing
ansible.noarch 0:2.0.0.2-1.el7
> Do *not* run Ansible HEAD from the git repository! This will lead to
> sadness and disappointment.
>
> 1. Prepare your target host.
>
> You need a user on your target host to which you can (a) log in via
> ssh without a password and then (b) sudo to root without a password.
> We'll refer to this user as "targetuser", below. That is, the
> following should work:
>
> ssh -tt targetuser@targethost sudo uptime
Using the 'admin' account, I verified that passwordless ssh as admin works
and that passwordless sudo from admin to root works.
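For anyone who needs to set that up first, something along these lines should
do it (this assumes the 'admin' user already exists on the target host):

ssh-copy-id admin@localhost
# on the target host: allow passwordless sudo for admin
echo 'admin ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/admin
sudo chmod 0440 /etc/sudoers.d/admin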
> 2. Clone the tripleo-quickstart repository:
>
>     git clone https://github.com/redhat-openstack/tripleo-quickstart
> cd tripleo-quickstart
>
> (Everything below is run from inside the tripleo-quickstart
> directory)
>
> 3. Create an ansible inventory file.
>
> Create an inventory file that lists your target host in the 'virthost'
> and that provides ansible with the necessary connection information:
>
> cat > inventory <<EOF
> [virthost]
> my.target.host ansible_user=targetuser
> EOF
duh, for those of us that are noobs... replace my.target.host with localhost.
So for me it's:
cat > inventory <<EOF
[virthost]
localhost ansible_user=admin
EOF
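A quick sanity check that ansible can reach the host and become root (just a
generic ansible ping, not part of the quickstart itself):

ansible -i inventory virthost -b -m ping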
> 4. Create an ansible playbook.
>
> cat > playbooks/ha.yml <<EOF
> - hosts: virthost
>   roles:
>     - role: libvirt/teardown
>     - role: libvirt/setup
>
> - hosts: localhost
>   roles:
>     - rebuild-inventory
>
> - hosts: undercloud
>   roles:
>     - overcloud
> EOF
>
> 5. Create a variables file that describes your architecture:
>
> cat > nodes.yml <<EOF
> extra_args: >-
>   --control-scale 3
>   -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
>   --ntp-server pool.ntp.org
> baremetal_vm_xml: |
>   <cpu mode='host-passthrough'/>
> libvirt_args: --libvirt-type kvm
>
> # Set these to values appropriate for your target system. You
> # by default get three controllers, one compute node, and
> # one ceph node (so this example has a memory footprint of
> # 32GB, which is probably too much for a host with only
> # 32GB of RAM).
> control_memory: 8192
> compute_memory: 4096
> ceph_memory: 4096
url: file:///usr/share/quickstart_images/mitaka/undercloud.qcow2
(add that url: line to nodes.yml if you pre-downloaded the undercloud image
per the instructions in the README linked above)
> EOF
>
> The above configuration will enable nested KVM on the target host.
>
> It is possible to change the number of nodes of each type that are
> created, but that's for another time.
>
> 6. Run it!
>
> ansible-playbook playbooks/ha.yml -i inventory -e @nodes.yml
Ran into a small issue... since I was running as admin and needed sudo
for root escalation, this patch from larsks had to be applied:
http://chunk.io/f/574614d4738c460db656714931591694
Kicked off at 11am EST sharp. Currently running overcloud deploy.
Lars, let us know when this is merged :)
> This will:
>
> - First attempt to clean up the virtual environment from any
> previous run of tripleo-quickstart
> - Deploy a new virtual undercloud and virtual overcloud
> - Install the undercloud
> - Deploy the overcloud
> - Validate the overcloud
>
> If you don't trust your copying-and-pasting, the example files
> referenced in this email are also available from:
>
> https://gist.github.com/larsks/e02ca28982d1daacfa5d
>
> E.g.:
>
>     git clone https://gist.github.com/e02ca28982d1daacfa5d.git
To log in to the undercloud:
ssh -F ~/.quickstart/ssh.config.ansible undercloud
hewbrocca also mentions that in the future, heat.conf on the undercloud
will set max_resources_per_stack to -1 by default, which should make
things go much faster.
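For reference, the setting lives in heat.conf on the undercloud, so if you want
to try it by hand on an already-installed undercloud, something like this
should work (assumes openstack-utils is installed for openstack-config, and
that the heat engine service is named openstack-heat-engine):

ssh -F ~/.quickstart/ssh.config.ansible undercloud
sudo openstack-config --set /etc/heat/heat.conf DEFAULT max_resources_per_stack -1
sudo systemctl restart openstack-heat-engine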
trown notes that the undercloud node will be slow to deploy with only 1
vCPU. Since my box has 16 real cores (32 with HT), this seems like a waste
of computing power :)
Adding:
undercloud_vcpu: 4
control_vcpu: 2
to nodes.yml may make sense, at least on a machine like mine with 64GB of
RAM and 32 cores.
The validate step finished around 70 minutes after the initial run of the
ansible playbook, but it failed. larsks suggested that this might be
expected on an HA deployment. This will need looking into.
I was able to ssh into the undercloud and, from there, ssh into the
overcloud nodes after getting their control plane IP addresses via nova
list on the undercloud.
I was also able to source overcloudrc and run nova list, and that worked.
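For the record, on the undercloud that looks roughly like this (heat-admin is
the usual default login user on TripleO overcloud nodes; the IP is a
placeholder, substitute one from nova list):

source stackrc               # undercloud credentials
nova list                    # shows the overcloud nodes and their ctlplane IPs
ssh heat-admin@192.0.2.10    # placeholder IP

source overcloudrc           # back on the undercloud: talk to the overcloud APIs
nova list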
Probably I'll want to set up some ssh tunnels so that I can access the
overcloud Horizon (just for the heck of it) without needing to be ON the
undercloud node.
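Something like this should do for the tunnel (a sketch -- the local port and
the overcloud Horizon address are placeholders; Horizon normally sits on the
overcloud's public endpoint):

ssh -F ~/.quickstart/ssh.config.ansible -L 8080:<overcloud-horizon-ip>:80 undercloud
# then browse to http://localhost:8080/ from the workstation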
And for the uninitiated... there is no undercloud Horizon, only the CLI.
So far, I think this is all fairly accessible. It's really not that much
more time consuming than Packstack, and I think some optimizations can be
put into place (max_resources_per_stack and undercloud vCPUs) to make
things speedier.
That being said... I am running on a machine that most developers
wouldn't have, so the next steps will be to make it reasonable on a 32GB
machine and to provide a non-HA setup for those with 16GB machines.
Perry
One other thing I've noticed... if you're experimenting and going
between, say, an HA setup with 3 controllers and 2 computes and then
following that with a deploy of 1 controller/1 compute, the extra
computes and controllers hang around.
That is, on the second run with the smaller config, it doesn't know to go
and clean up the nodes left behind by the larger config. It's not a big
deal; you just need to remember to go and virsh destroy stuff manually.
But it's something to be aware of if you're toggling between HA and
non-HA environments.
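Cleaning up by hand is just a matter of checking what libvirt still has
defined and removing the leftovers (the domain name below is a hypothetical
example; check virsh list --all for the real ones):

virsh list --all
virsh destroy control_1      # stop the leftover VM
virsh undefine control_1     # remove its definition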
Perry