[Rdo-list] Deploying an HA dev environment with tripleo-quickstart

Lars Kellogg-Stedman lars at redhat.com
Fri Feb 19 14:57:00 UTC 2016


I'm working on a slightly more in-depth article on this topic, but in
order for some people (pmyers I'm looking at you) to get started I
wanted to write up some very quick instructions.  Forgive me any typos
in this email because I'd like to send it out before actually running
through everything locally: while the process is automated, an HA
deploy can still take quite a while to complete.  Fetching the
undercloud image also takes a chunk of time; there are instructions
in the tripleo-quickstart README for caching a local copy of the
image to speed up subsequent installs.

You will need a physical host with at least 32GB of RAM.  More is
better; less *may* be possible, but you will probably regret it.

You will also need Ansible 2.0.x, which is what you will get if you
'pip install ansible', or install Ansible from updates-testing
(Fedora) or epel-testing (RHEL/CentOS/...).

Do *not* run Ansible HEAD from the git repository!  This will lead to
sadness and disappointment.

1. Prepare your target host.

  You need a user on your target host to which you can (a) log in via
  ssh without a password and then (b) sudo to root without a password.
  We'll refer to this user as "targetuser", below.  That is, the
  following should work:

    ssh -tt targetuser@targethost sudo uptime
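  One way to set that up, assuming the "targetuser" account already
  exists on the target host and you have an ssh key pair (both names
  are placeholders from above):

```shell
# Copy your public key into the target user's authorized_keys so
# that ssh stops prompting for a password.
ssh-copy-id targetuser@targethost

# On the target host, allow that user to sudo without a password by
# dropping a one-line policy into /etc/sudoers.d/.
echo 'targetuser ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/targetuser
sudo chmod 0440 /etc/sudoers.d/targetuser
```

  After this, the 'ssh -tt ... sudo uptime' test above should run
  without any password prompt.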

2. Clone the tripleo-quickstart repository:

    git clone https://github.com/redhat-openstack/tripleo-quickstart
    cd tripleo-quickstart

  (Everything below is run from inside the tripleo-quickstart
  directory)

3. Create an Ansible inventory file.

  Create an inventory file that lists your target host in the
  'virthost' group and that provides Ansible with the necessary
  connection information:

    cat > inventory <<EOF
    [virthost]
    my.target.host ansible_user=targetuser
    EOF
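  A quick way to sanity-check the result (run from the
  tripleo-quickstart directory where you just wrote 'inventory';
  the grep patterns are only a spot check, not part of the
  quickstart itself):

```shell
# Confirm the [virthost] group header and the ansible_user
# connection variable both made it into the file.
if grep -q '^\[virthost\]' inventory && grep -q 'ansible_user=' inventory; then
    echo "inventory looks sane"
else
    echo "inventory is missing something" >&2
fi
```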

4. Create an Ansible playbook.

    cat > playbooks/ha.yml <<EOF
    - hosts: virthost
      roles:
        - role: libvirt/teardown
        - role: libvirt/setup

    - hosts: localhost
      roles:
        - rebuild-inventory

    - hosts: undercloud
      roles:
        - overcloud
    EOF

5. Create a variables file that describes your architecture:

    cat > nodes.yml <<EOF
    extra_args: >-
      --control-scale 3
      -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
      --ntp-server pool.ntp.org
    baremetal_vm_xml: |
      <cpu mode='host-passthrough'/>
    libvirt_args: --libvirt-type kvm

    # Set these to values appropriate for your target system.  By
    # default you get three controllers, one compute node, and
    # one ceph node (so this example has a memory footprint of
    # 32GB, which is probably too much for a host with only
    # 32GB of RAM).
    control_memory: 8192
    compute_memory: 4096
    ceph_memory: 4096
    EOF

  The above configuration will enable nested KVM on the target host.

  It is possible to change the number of nodes of each type that are
  created, but that's for another time.
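  The 32GB footprint mentioned in the nodes.yml comment is just
  arithmetic over the three memory settings:

```shell
# Add up the VM memory footprint from the example nodes.yml:
# three controllers plus one compute and one ceph node, in MB.
control_memory=8192; compute_memory=4096; ceph_memory=4096
total=$((3 * control_memory + compute_memory + ceph_memory))
echo "total: ${total} MB ($((total / 1024)) GB)"
# prints: total: 32768 MB (32 GB)
```

  Dropping control_memory to 6144, for example, frees 6GB for the
  host itself.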

6. Run it!

    ansible-playbook playbooks/ha.yml -i inventory -e @nodes.yml

  This will:

  - First attempt to clean up the virtual environment from any
    previous run of tripleo-quickstart
  - Deploy a new virtual undercloud and virtual overcloud
  - Install the undercloud
  - Deploy the overcloud
  - Validate the overcloud

If you don't trust your copying-and-pasting, the example files
referenced in this email are also available from:

    https://gist.github.com/larsks/e02ca28982d1daacfa5d

E.g.:

    git clone https://gist.github.com/e02ca28982d1daacfa5d.git

-- 
Lars Kellogg-Stedman <lars at redhat.com> | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack          | http://blog.oddbit.com/
