[rdo-list] Fw: Re: What happens to the most recent Ocata trunk tested ( Tripleo QuickStart failure to deploy overcloud ) ?

Boris Derzhavets bderzhavets at hotmail.com
Thu Apr 6 20:45:46 UTC 2017


You are correct. I succeeded with the following configuration:


[alan@fedora24wks nodes]$ cat 3ctlr_1comp_1ceph.yml

#############################################################
# Deploy an HA openstack environment.
# Memory increased on the PCS cluster controllers from 6700 to 7200;
# Ceph nodes decreased from 2 to 1.
# Compute node memory allocation unchanged from Newton.
#############################################################
control_memory: 7200
compute_memory: 6500
undercloud_memory: 8192
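# Rough VM memory budget for this file (the ceph node keeps the default
# flavor memory and is not counted here): 3 x 7200 + 6500 + 8192 =
# 36292 MB, nominally above a 32 GB host, so this sizing leans on KVM
# memory overcommit.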

# Giving the undercloud additional CPUs can greatly improve heat's
# performance (and result in a shorter deploy time).
undercloud_vcpu: 4

# Since HA has more machines, keep the default vCPU count for the
# controllers at 1; the compute node gets 2.
default_vcpu: 1
compute_vcpu: 2

# This enables TLS for the undercloud which will also make haproxy bind to the
# configured public-vip and admin-vip.
undercloud_generate_service_certificate: True

# Create three controller nodes and one compute node.
overcloud_nodes:
  - name: control_0
    flavor: control
    virtualbmc_port: 6230

  - name: control_1
    flavor: control
    virtualbmc_port: 6231

  - name: control_2
    flavor: control
    virtualbmc_port: 6232

  - name: compute_0
    flavor: compute
    virtualbmc_port: 6233

  - name: ceph_0
    flavor: ceph
    virtualbmc_port: 6234

# Tell tripleo about our environment.
topology: >-
  --control-scale 3
  --compute-scale 1
  --ceph-storage-scale 1
  -e {{overcloud_templates_path}}/environments/storage-environment.yaml
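(For reference, quickstart splices this topology string into the overcloud deploy call it generates, so the effective command should look roughly like the sketch below; the templates path is assumed to resolve to the stock tripleo-heat-templates location.)

openstack overcloud deploy --templates \
  --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml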

$ bash quickstart.sh -R ocata --config config/general_config/pacemaker.yml \
   --nodes config/nodes/3ctlr_1comp_1ceph.yml $VIRTHOST

on a 32 GB, 4-core VIRTHOST.

The Ocata overcloud deployment appears to consume more memory than Newton's.
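A quick way to watch how close the guests run to their ceilings is libvirt's balloon stats on the VIRTHOST (a sketch; run it as the user that owns the quickstart libvirt session, and the domain names are whatever quickstart created):

for dom in $(virsh list --name); do
  echo "== $dom =="
  virsh dommemstat "$dom"   # actual/rss are reported in KiB
done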

[stack@undercloud ~]$ date
Thu Apr  6 20:26:46 UTC 2017
[stack@undercloud ~]$ nova-manage --version
15.0.2
[stack@undercloud ~]$ openstack server list
+--------------------------------------+-------------------------+--------+------------------------+----------------+
| ID                                   | Name                    | Status | Networks               | Image Name     |
+--------------------------------------+-------------------------+--------+------------------------+----------------+
| ff4f04f0-428e-40fb-844c-0fdb6e8064fe | overcloud-controller-2  | ACTIVE | ctlplane=192.168.24.16 | overcloud-full |
| 3d7870ff-d371-45fa-b404-c9f7a4e99e68 | overcloud-controller-1  | ACTIVE | ctlplane=192.168.24.8  | overcloud-full |
| ce0413d5-e084-4eea-850e-0bfd83c67276 | overcloud-cephstorage-0 | ACTIVE | ctlplane=192.168.24.11 | overcloud-full |
| c02d2ebf-35ad-44b9-a348-09cc48866936 | overcloud-controller-0  | ACTIVE | ctlplane=192.168.24.13 | overcloud-full |
| dce0085a-106e-4178-bae4-5b31a6f58ede | overcloud-novacompute-0 | ACTIVE | ctlplane=192.168.24.7  | overcloud-full |
+--------------------------------------+-------------------------+--------+------------------------+----------------+

[stack@undercloud ~]$ ssh heat-admin@192.168.24.13
The authenticity of host '192.168.24.13 (192.168.24.13)' can't be established.
ECDSA key fingerprint is bd:f0:23:c6:6f:c9:7e:8e:52:2e:5a:1e:42:3d:4b:75.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.24.13' (ECDSA) to the list of known hosts.

[heat-admin@overcloud-controller-0 ~]$ sudo su -
[root@overcloud-controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: overcloud-controller-0 (version 1.1.15-11.el7_3.4-e174ec8) - partition with quorum
Last updated: Thu Apr  6 20:28:28 2017        Last change: Thu Apr  6 20:14:34 2017 by root via cibadmin on overcloud-controller-0
3 nodes and 19 resources configured
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Full list of resources:
 Master/Slave Set: galera-master [galera]
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-0 ]
     Slaves: [ overcloud-controller-1 overcloud-controller-2 ]
 ip-192.168.24.10    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-10.0.0.8    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 ip-172.16.2.7    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 ip-172.16.2.6    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.1.10    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 ip-172.16.3.6    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume    (systemd:openstack-cinder-volume):    Started overcloud-controller-0

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
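A one-liner sanity check over the same output (plain grep, nothing TripleO-specific) would be:

pcs status | grep -i failed || echo "no failed actions"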

[root@overcloud-controller-0 ~]# ceph status
    cluster 7da5d9b6-1afd-11e7-b5df-00badf797db4
     health HEALTH_OK
     monmap e1: 3 mons at {overcloud-controller-0=172.16.1.14:6789/0,overcloud-controller-1=172.16.1.4:6789/0,overcloud-controller-2=172.16.1.8:6789/0}
            election epoch 4, quorum 0,1,2 overcloud-controller-1,overcloud-controller-2,overcloud-controller-0
     osdmap e17: 1 osds: 1 up, 1 in
            flags sortbitwise
      pgmap v94: 288 pgs, 8 pools, 13701 bytes data, 19 objects
            8384 MB used, 42803 MB / 51187 MB avail
                 288 active+clean
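HEALTH_OK with a single OSD implies the pools are running with replica size 1; with the default size of 3 and only one OSD host, the PGs would sit undersized and the cluster would report HEALTH_WARN instead. If in doubt, this can be confirmed with:

ceph osd dump | grep 'replicated size'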
[root@overcloud-controller-0 ~]# nova-manage --version
15.0.2

Thank you once again.
Boris



________________________________
From: Alex Schultz <aschultz at redhat.com>
Sent: Thursday, April 6, 2017 9:28 PM
To: Boris Derzhavets
Cc: John Trowbridge; rdo-list
Subject: Re: [rdo-list] Fw: Re: What happens to the most recent Ocata trunk tested ( Tripleo QuickStart failure to deploy overcloud ) ?

Error: /Stage[main]/Apache::Service/Service[httpd]: Could not evaluate: Cannot allocate memory - fork(2)

You don't have enough memory allocated on the nodes. You could also try enabling swap.
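For the record, a minimal swap-file sketch for a memory-starved node (the 2 GB size is illustrative):

sudo dd if=/dev/zero of=/swapfile bs=1M count=2048   # 2 GB swap file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab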

Thanks,
-Alex

On Thu, Apr 6, 2017 at 12:09 PM, Boris Derzhavets <bderzhavets at hotmail.com> wrote:
Boris Derzhavets has shared a OneDrive file with you. To view it, click the link below.

deployment-show-b6e5ef50-5c7a-40d9-ad42-5c2e70444785.txt
<https://1drv.ms/t/s!AqjiDzRpwaKogScWstr20Us2j923>




One more attempt to send the correct data via OneDrive upload.

No compression; the text was uploaded as is.


Boris.


________________________________
From: Boris Derzhavets <bderzhavets at hotmail.com>
Sent: Thursday, April 6, 2017 8:50 PM
To: Boris Derzhavets
Subject: Test


test

