Thank you once again; it really works.

[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list
+----+----------------------------------------+-------+---------+
| ID | Hypervisor hostname                    | State | Status  |
+----+----------------------------------------+-------+---------+
| 1  | ip-192-169-142-127.ip.secureserver.net | up    | enabled |
| 2  | ip-192-169-142-137.ip.secureserver.net | up    | enabled |
+----+----------------------------------------+-------+---------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-servers ip-192-169-142-137.ip.secureserver.net
+--------------------------------------+-------------------+---------------+----------------------------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname                    |
+--------------------------------------+-------------------+---------------+----------------------------------------+
| 16ab7825-1403-442e-b3e2-7056d14398e0 | instance-00000002 | 2             | ip-192-169-142-137.ip.secureserver.net |
| 5fa444c8-30b8-47c3-b073-6ce10dd83c5a | instance-00000004 | 2             | ip-192-169-142-137.ip.secureserver.net |
+--------------------------------------+-------------------+---------------+----------------------------------------+

There is only one remaining issue :-

 during the AIO run:            CONFIG_NEUTRON_OVS_TUNNEL_IF=       (left empty)
 during the Compute Node setup: CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1

 In the end this makes a mess of the ml2_vxlan_endpoints table. I had to manually update
 ml2_vxlan_endpoints and restart neutron-openvswitch-agent.service on both nodes;
 afterwards the VMs on the compute node obtained access to the metadata server.
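
 A minimal sketch of that cleanup, on the assumption that the table ends up holding an
 endpoint address from the wrong interface (all IPs below are hypothetical; use the
 tunnel-interface addresses of your own nodes):

# mysql -u root neutron
MariaDB [neutron]> SELECT * FROM ml2_vxlan_endpoints;
MariaDB [neutron]> -- hypothetical: replace the stale endpoint with the eth1 (tunnel) address
MariaDB [neutron]> UPDATE ml2_vxlan_endpoints SET ip_address='10.0.0.137' WHERE ip_address='192.169.142.137';
MariaDB [neutron]> quit

# then, on both nodes:
systemctl restart neutron-openvswitch-agent.service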

 I also believe that a synchronized deletion of the records from the "compute_nodes" and "services"
 tables (along with disabling nova-compute on the Controller) could turn the AIO host into a real Controller.
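
 A sketch of that idea (untested; host name taken from the hypervisor listing above, rows
 live in nova's own database):

# disable the compute service on the controller first
nova service-disable ip-192-169-142-127.ip.secureserver.net nova-compute

# then remove its rows; compute_nodes references services, so delete from it first
mysql -u root nova
MariaDB [nova]> DELETE FROM compute_nodes WHERE hypervisor_hostname='ip-192-169-142-127.ip.secureserver.net';
MariaDB [nova]> DELETE FROM services WHERE host='ip-192-169-142-127.ip.secureserver.net' AND `binary`='nova-compute';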

Boris.


Date: Fri, 1 May 2015 22:22:41 +0200
Subject: Re: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
From: ak@cloudssky.com
To: bderzhavets@hotmail.com
CC: apevec@gmail.com; rdo-list@redhat.com

I got the compute node working by adding the delorean-kilo.repo on the compute node,
running yum update there, and rebooting. Then I extended the packstack answer file from the
first AIO install with the IP of the compute node and ran packstack again with NetworkManager
enabled, did a second yum update on the compute node before the 3rd packstack run, and now it works :-)

In short, for RC2 we have to prepare the compute node by hand to get nova-compute running there,
before running packstack from the controller again on top of an existing AIO install.
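
Roughly, the sequence was (a sketch reconstructed from the description above; host and file
names are assumptions, and the repo file is simply the same one used on the controller):

# on the compute node
scp controller:/etc/yum.repos.d/delorean-kilo.repo /etc/yum.repos.d/
yum -y update && reboot

# on the controller: add the compute node's IP to the existing AIO answer file, then
packstack --answer-file=./answer-file-AIO.txt

# on the compute node again, before the 3rd packstack run
yum -y update

# on the controller
packstack --answer-file=./answer-file-AIO.txt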

Now I have 2 compute nodes (the AIO controller with compute + a 2nd compute node) and could spawn a
3rd cirros instance, which landed on the 2nd compute node.
ssh'ing into the instances over the floating IPs works fine too.

Before running packstack again, I set:

EXCLUDE_SERVERS=<ip of controller>
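
That is, in the answer file (IPs hypothetical; EXCLUDE_SERVERS keeps packstack from re-running
the Puppet manifests on the controller):

CONFIG_COMPUTE_HOSTS=192.169.142.127,192.169.142.137
EXCLUDE_SERVERS=192.169.142.127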

[root@csky01 ~(keystone_osx)]# virsh list --all
 Id    Name                           Status
----------------------------------------------------
 2     instance-00000001              laufend --> "running" (German locale)
 3     instance-00000002              laufend --> "running" (German locale)

[root@csky06 ~]# virsh list --all
 Id    Name                           Status
----------------------------------------------------
 2     instance-00000003              laufend --> "running" (German locale)


== Nova managed services (nova service-list) ==
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host           | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 2  | nova-conductor   | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 3  | nova-scheduler   | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 4  | nova-compute     | csky01.csg.net | nova     | enabled | up    | 2015-05-01T19:46:40.000000 | -               |
| 5  | nova-cert        | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 6  | nova-compute     | csky06.csg.net | nova     | enabled | up    | 2015-05-01T19:46:38.000000 | -               |
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+


On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets <bderzhavets@hotmail.com> wrote:
Ran packstack --debug --answer-file=./answer-fileRC2.txt
192.169.142.137_nova.pp.log.gz attached

Boris


From: bderzhavets@hotmail.com
To: apevec@gmail.com
Date: Fri, 1 May 2015 01:44:17 -0400
CC: rdo-list@redhat.com
Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1

Following the instructions in https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html,
packstack fails :-

Applying 192.169.142.127_nova.pp
Applying 192.169.142.137_nova.pp
192.169.142.127_nova.pp:                          [ DONE ]
192.169.142.137_nova.pp:                          [ ERROR ]
Applying Puppet manifests                         [ ERROR ]

ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp
Error: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details.
You will find full trace in log /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log

In both cases (RC2 or CI repos), /var/log/nova/nova-compute.log on compute node 192.169.142.137
reports :-

2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 seconds...
2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on localhost:5672
2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds.

nova-compute seems to be looking for the AMQP server on the wrong host; it should be 192.169.142.127
(see the sketch after the firewall rules below). On 192.169.142.127 :-

[root@ip-192-169-142-127 ~]# netstat -lntp | grep 5672
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      14506/beam.smp
tcp6       0      0 :::5672                 :::*                    LISTEN      14506/beam.smp

[root@ip-192-169-142-127 ~]# iptables-save | grep 5672
-A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT
-A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT
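
If packstack left nova.conf on the compute node pointing at localhost, a likely manual workaround
(a guess based on the log above) would be to set the rabbit host there and restart the service;
depending on the oslo.messaging version the option lives in [DEFAULT] or [oslo_messaging_rabbit]:

# on 192.169.142.137, in /etc/nova/nova.conf
[DEFAULT]
rabbit_host=192.169.142.127

# then
systemctl restart openstack-nova-compute.service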

Answer-file is attached

Thanks.
Boris

_______________________________________________
Rdo-list mailing list
Rdo-list@redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe@redhat.com