Thank you once again, it really works.

[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list
+----+----------------------------------------+-------+---------+
| ID | Hypervisor hostname                    | State | Status  |
+----+----------------------------------------+-------+---------+
| 1  | ip-192-169-142-127.ip.secureserver.net | up    | enabled |
| 2  | ip-192-169-142-137.ip.secureserver.net | up    | enabled |
+----+----------------------------------------+-------+---------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-servers ip-192-169-142-137.ip.secureserver.net
+--------------------------------------+-------------------+---------------+----------------------------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname                    |
+--------------------------------------+-------------------+---------------+----------------------------------------+
| 16ab7825-1403-442e-b3e2-7056d14398e0 | instance-00000002 | 2             | ip-192-169-142-137.ip.secureserver.net |
| 5fa444c8-30b8-47c3-b073-6ce10dd83c5a | instance-00000004 | 2             | ip-192-169-142-137.ip.secureserver.net |
+--------------------------------------+-------------------+---------------+----------------------------------------+

There is only one remaining issue:

 during the AIO run            CONFIG_NEUTRON_OVS_TUNNEL_IF=
 during the Compute Node setup CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1

This mismatch ends up leaving inconsistent entries in the ml2_vxlan_endpoints table. I had to update
ml2_vxlan_endpoints manually and restart neutron-openvswitch-agent.service on both nodes (a rough
sketch of that cleanup is below); afterwards the VMs on the compute node obtained access to the
metadata server.

I also believe that deleting the matching records from the "compute_nodes" and "services" tables
(along with disabling nova-compute on the Controller) could turn the AIO host into a real Controller.
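In case it helps, this is roughly what that manual cleanup looks like. The addresses in angle
brackets are placeholders, it assumes the neutron database is reachable as root on the controller,
and the exact columns of ml2_vxlan_endpoints can differ between Neutron releases:

[root@controller ~]# mysql -e "SELECT * FROM neutron.ml2_vxlan_endpoints;"
[root@controller ~]# mysql -e "UPDATE neutron.ml2_vxlan_endpoints SET ip_address='<eth1 ip of node>' WHERE ip_address='<wrong endpoint ip>';"

and then, on both nodes:

[root@node ~]# systemctl restart neutron-openvswitch-agent.service

Likewise, disabling the extra nova-compute on the Controller could be done with
"nova service-disable <controller hostname> nova-compute" before touching the compute_nodes and
services records.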
Boris.

------------------------------------------------------------------------
Date: Fri, 1 May 2015 22:22:41 +0200
Subject: Re: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
From: ak@cloudssky.com
To: bderzhavets@hotmail.com
CC: apevec@gmail.com; rdo-list@redhat.com

I got the compute node working by adding the delorean-kilo.repo on the compute node, running
"yum update" there and rebooting, then extending the packstack answer file from the first AIO
install with the IP of the compute node and running packstack again with NetworkManager enabled.
I did a second "yum update" on the compute node before the third packstack run, and now it works :-)

In short, for RC2 we have to force nova-compute onto the compute node by hand before running
packstack again from the controller on top of an existing AIO install.

Now I have 2 compute nodes (the AIO controller with compute + a 2nd compute node) and could spawn
a 3rd CirrOS instance, which landed on the 2nd compute node. ssh'ing into the instances over the
floating IP works fine too.

Before running packstack again, I set (an illustrative answer-file fragment follows after the
listing below):

EXCLUDE_SERVERS=<ip of controller>

[root@csky01 ~(keystone_osx)]# virsh list --all
 Id    Name                           Status
----------------------------------------------------
 2     instance-00000001              laufend    ("laufend" is German for "running")
 3     instance-00000002              laufend
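For illustration, the relevant fragment of the answer file for that second packstack run might
look roughly like this; the IPs are placeholders, and the real file of course carries many more
generated CONFIG_* options that should be left untouched:

CONFIG_CONTROLLER_HOST=<ip of controller>
CONFIG_NETWORK_HOSTS=<ip of controller>
CONFIG_COMPUTE_HOSTS=<ip of controller>,<ip of compute node>
EXCLUDE_SERVERS=<ip of controller>

[root@controller ~]# packstack --answer-file=./answer-file.txt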

[root@csky06 ~]# virsh list --all
 Id    Name                           Status
----------------------------------------------------
 2     instance-00000003              laufend    ("laufend" is German for "running")

== Nova managed services ==
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host           | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 2  | nova-conductor   | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 3  | nova-scheduler   | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 4  | nova-compute     | csky01.csg.net | nova     | enabled | up    | 2015-05-01T19:46:40.000000 | -               |
| 5  | nova-cert        | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 6  | nova-compute     | csky06.csg.net | nova     | enabled | up    | 2015-05-01T19:46:38.000000 | -               |
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+


On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets <bderzhavets@hotmail.com> wrote:


<div><div dir="ltr">Ran packstack --debug --answer-file=./answer-fileRC2.txt<br>192.169.142.137_nova.pp.log.gz attached<br><br>Boris<br><br><div><hr>From: <a href="mailto:bderzhavets@hotmail.com" target="_blank">bderzhavets@hotmail.com</a><br>To: <a href="mailto:apevec@gmail.com" target="_blank">apevec@gmail.com</a><br>Date: Fri, 1 May 2015 01:44:17 -0400<br>CC: <a href="mailto:rdo-list@redhat.com" target="_blank">rdo-list@redhat.com</a><br>Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1<br><br>


Following the instructions in https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html,
packstack fails:

Applying 192.169.142.127_nova.pp
Applying 192.169.142.137_nova.pp
192.169.142.127_nova.pp:                          [ DONE ]
192.169.142.137_nova.pp:                          [ ERROR ]
Applying Puppet manifests                         [ ERROR ]

ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp
Error: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details.
You will find full trace in log /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log

In both cases (RC2 or CI repos), /var/log/nova/nova-compute.log on compute node 192.169.142.137
reports:

2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 seconds...
2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on localhost:5672
2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds.

It looks like nova-compute is trying to reach the AMQP server on the wrong host; it should be
192.169.142.127 (a sketch of a manual workaround is below, after the listings). On 192.169.142.127:

[root@ip-192-169-142-127 ~]# netstat -lntp | grep 5672
==> tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      14506/beam.smp
    tcp6      0      0 :::5672                  :::*                    LISTEN      14506/beam.smp

[root@ip-192-169-142-127 ~]# iptables-save | grep 5672
-A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT
-A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT

The answer file is attached.

Thanks.
Boris
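To work around this by hand while the puppet run is failing, one option would be to point
nova-compute on the compute node directly at the controller's RabbitMQ broker, roughly like this;
it assumes the broker really is the one on 192.169.142.127, and depending on the release the
options belong in the [DEFAULT] or [oslo_messaging_rabbit] section of /etc/nova/nova.conf:

rabbit_host=192.169.142.127
rabbit_port=5672

[root@ip-192-169-142-137 ~]# systemctl restart openstack-nova-compute.service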
_______________________________________________
Rdo-list mailing list
Rdo-list@redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe@redhat.com