Yes, it looks possible to perform a multi-node deployment with RDO Kilo RC2 via a single packstack run.
I've tried:

CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.127,192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.127

and was able to use different IPs for the VTEPs (CONFIG_TUNNEL_IF=eth1 works as expected):

Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.127", out_key=flow, remote_ip="10.0.0.137"}

so it succeeded.
Per your report, it looks like 192.169.142.127 may be removed from CONFIG_COMPUTE_HOSTS.
AIO host plus separate Compute node setups scared me too much ;)

The point seems to be the presence of delorean.repo on the Compute nodes. Am I correct?
My testing resources are limited (16 GB RAM and a 4-core CPU), so I cannot start a third VM for testing.

You wrote:

> What I noticed here, if I associate a floating ip to a VM with 2 interfaces, then I'll lose the connectivity to the instance and Kilo

I just used VMs with eth0 for the public and management network and eth1 for the VXLAN endpoints.
The answer-file is attached.
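For reference, the relevant fragment of that answer-file looks roughly like this (a sketch, not the complete file; the tunnel-interface key is written here with its full name CONFIG_NEUTRON_OVS_TUNNEL_IF as it appears later in this thread, and everything not shown was left at packstack defaults):

CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.127,192.169.142.137
# eth1 carries the 10.0.0.0/24 VTEP network on every node,
# eth0 stays on the 192.169.142.0/24 management/public network
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1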
Thank you for keeping me posted.
Boris

Date: Sun, 3 May 2015 16:51:54 +0200
Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
From: ak@cloudssky.com
To: bderzhavets@hotmail.com
CC: apevec@gmail.com; rdo-list@redhat.com

Boris, thanks for your kind feedback.

I did a 3-node Kilo RC2 virt setup on top of my Kilo RC2 installation on bare metal.
The installation was successful on the first run.

The network looks like this:
https://cloudssky.com/.galleries/images/kilo-virt-setup.png

For this setup I added the latest CentOS cloud image to glance, ran an instance (controller), enabled root login,
added ifcfg-eth1 to the instance, created a snapshot from the controller, added the repos to this instance, ran yum update,
rebooted, and spawned the network and compute1 VM nodes from that snapshot.
(To be able to ssh into the VMs over the 20.0.1.0 network, I created a gate VM with a floating IP assigned and installed
OpenVPN on it.)
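The image and snapshot part of that sequence, roughly, in Kilo-era CLI terms (a sketch: the image file name, flavor and the two net-ids are placeholders, and your client versions may want slightly different flags):

# upload the CentOS cloud image to glance
glance image-create --name centos7 --disk-format qcow2 --container-format bare \
    --file CentOS-7-x86_64-GenericCloud.qcow2

# boot the controller VM with a management NIC and a VTEP NIC
nova boot --image centos7 --flavor m1.medium \
    --nic net-id=<mgmt-net-id> --nic net-id=<vxlan-net-id> controller

# after root login, ifcfg-eth1, the repos and yum update: snapshot it
nova image-create controller controller-snap

# network and compute1 are then booted from that snapshot the same way
nova boot --image controller-snap --flavor m1.medium \
    --nic net-id=<mgmt-net-id> --nic net-id=<vxlan-net-id> network
nova boot --image controller-snap --flavor m1.medium \
    --nic net-id=<mgmt-net-id> --nic net-id=<vxlan-net-id> compute1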
What I noticed here: if I associate a floating IP with a VM that has two interfaces, I lose connectivity
to the instance and Kilo goes crazy (the AIO controller on bare metal somehow loses its br-ex interface,
but I didn't try to reproduce it again).

The packstack file was created in interactive mode with:

packstack --answer-file= --> press enter

I accepted most default values and selected trove and heat to be installed.

The answers are on pastebin:

http://pastebin.com/SYp8Qf7d

The generated packstack file is here:

http://pastebin.com/XqJuvQxf

The br-ex interfaces and the changes to eth0 are created correctly on the network and compute nodes (output below).
And one nice thing for me, coming from Havana, was to see how easy it has become to create an image in Horizon
by uploading an image file (in my case rancheros.iso and centos.qcow2 worked like a charm).
Now it's time to discover Ironic, Trove and Manila, and if someone has tips or guidelines on how to test these
exciting new things, or any news about Murano or Magnum on RDO, I'll be even more excited than I already am about Kilo :-)
Thanks!
Arash
---
Some outputs here:

[root@controller ~(keystone_admin)]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | compute1.novalocal  | up    | enabled |
+----+---------------------+-------+---------+

[root@network ~]# ovs-vsctl show
436a6114-d489-4160-b469-f088d66bd752
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-14000212"
            Interface "vxlan-14000212"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="20.0.2.19", out_key=flow, remote_ip="20.0.2.18"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.3.1"

[root@compute ~]# ovs-vsctl show
8123433e-b477-4ef5-88aa-721487a4bd58
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-14000213"
            Interface "vxlan-14000213"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="20.0.2.18", out_key=flow, remote_ip="20.0.2.19"}
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.3.1"

On Sat, May 2, 2015 at 9:02 AM, Boris Derzhavets <bderzhavets@hotmail.com> wrote:


Thank you once again, it really works.

[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list
+----+----------------------------------------+-------+---------+
| ID | Hypervisor hostname                    | State | Status  |
+----+----------------------------------------+-------+---------+
| 1  | ip-192-169-142-127.ip.secureserver.net | up    | enabled |
| 2  | ip-192-169-142-137.ip.secureserver.net | up    | enabled |
+----+----------------------------------------+-------+---------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-servers ip-192-169-142-137.ip.secureserver.net
+--------------------------------------+-------------------+---------------+----------------------------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname                    |
+--------------------------------------+-------------------+---------------+----------------------------------------+
| 16ab7825-1403-442e-b3e2-7056d14398e0 | instance-00000002 | 2             | ip-192-169-142-137.ip.secureserver.net |
| 5fa444c8-30b8-47c3-b073-6ce10dd83c5a | instance-00000004 | 2             | ip-192-169-142-137.ip.secureserver.net |
+--------------------------------------+-------------------+---------------+----------------------------------------+

There was only one issue: during the AIO run CONFIG_NEUTRON_OVS_TUNNEL_IF was left empty, while during the
Compute node setup CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1, which left the ml2_vxlan_endpoints table in a mess.
I had to manually update ml2_vxlan_endpoints and restart neutron-openvswitch-agent.service on both nodes;
afterwards, VMs on the compute node obtained access to the metadata server.
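The manual fix was roughly along these lines (a sketch only: the UPDATE uses the IPs from my setup as an example, the neutron DB credentials come from the packstack answer file, and the column layout of ml2_vxlan_endpoints can differ between releases):

# on the controller: see which VTEP IPs neutron has recorded
mysql -u root -p neutron -e "SELECT * FROM ml2_vxlan_endpoints;"

# replace the endpoint that was registered with the management IP by the eth1 (VTEP) IP
mysql -u root -p neutron -e "UPDATE ml2_vxlan_endpoints SET ip_address='10.0.0.127' WHERE ip_address='192.169.142.127';"

# then on both nodes
systemctl restart neutron-openvswitch-agent.service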
I also believe that a synchronized delete of the records from the "compute_nodes" and "services" tables
(along with disabling nova-compute on the Controller) could turn the AIO host into a real Controller.

Boris.

Date: Fri, 1 May 2015 22:22:41 +0200
Subject: Re: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
From: ak@cloudssky.com
To: bderzhavets@hotmail.com
CC: apevec@gmail.com; rdo-list@redhat.com

I got the compute node working by adding the delorean-kilo.repo on the compute node,
yum updating the compute node, rebooting, and extending the packstack file from the first AIO
install with the IP of the compute node; then I ran packstack again with NetworkManager enabled
and did a second yum update on the compute node before the 3rd packstack run, and now it works :-)

In short, for RC2 we have to force nova-compute by hand to get it running on the compute node
before running packstack again from the controller on top of an existing AIO install.

Now I have 2 compute nodes (the controller AIO with compute + a 2nd compute) and could spawn a
3rd cirros instance, which landed on the 2nd compute node.
ssh'ing into the instances over the floating IPs works fine too.

Before running packstack again, I set:

EXCLUDE_SERVERS=<ip of controller>
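Put together, the sequence was roughly this (a sketch: the source of delorean-kilo.repo and the answer-file path are placeholders for whatever you already use on the controller):

# on the compute node: same Delorean Kilo repo as on the controller, then update and reboot
scp controller:/etc/yum.repos.d/delorean-kilo.repo /etc/yum.repos.d/
yum -y update && reboot

# on the controller: reuse the AIO answer file with the compute node added and the
# already-deployed controller excluded, then run packstack again
#   CONFIG_COMPUTE_HOSTS=<controller ip>,<compute ip>
#   EXCLUDE_SERVERS=<controller ip>
packstack --answer-file=/root/answers.txt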
[root@csky01 ~(keystone_osx)]# virsh list --all
 Id    Name                           Status
----------------------------------------------------
 2     instance-00000001              laufend    --> means running in German
 3     instance-00000002              laufend    --> means running in German
[root@csky06 ~]# virsh list --all
 Id    Name                           Status
----------------------------------------------------
 2     instance-00000003              laufend    --> means running in German

== Nova managed services ==
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host           | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 2  | nova-conductor   | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 3  | nova-scheduler   | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 4  | nova-compute     | csky01.csg.net | nova     | enabled | up    | 2015-05-01T19:46:40.000000 | -               |
| 5  | nova-cert        | csky01.csg.net | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |
| 6  | nova-compute     | csky06.csg.net | nova     | enabled | up    | 2015-05-01T19:46:38.000000 | -               |
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+

On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets <bderzhavets@hotmail.com> wrote:


Ran packstack --debug --answer-file=./answer-fileRC2.txt
192.169.142.137_nova.pp.log.gz is attached.

Boris

From: bderzhavets@hotmail.com
To: apevec@gmail.com
Date: Fri, 1 May 2015 01:44:17 -0400
CC: rdo-list@redhat.com
Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1


Following the instructions in https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html,
packstack fails:

Applying 192.169.142.127_nova.pp
Applying 192.169.142.137_nova.pp
192.169.142.127_nova.pp:                          [ DONE ]
192.169.142.137_nova.pp:                          [ ERROR ]
Applying Puppet manifests                         [ ERROR ]

ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp
Error: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details.
You will find the full trace in the log /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log

In both cases (RC2 or CI repos), /var/log/nova/nova-compute.log on compute node 192.169.142.137 reports:

2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 seconds...
2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on localhost:5672
2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds.

It seems to be looking for the AMQP server on the wrong host; it should be 192.169.142.127.
On 192.169.142.127:

[root@ip-192-169-142-127 ~]# netstat -lntp | grep 5672
==> tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      14506/beam.smp
    tcp6       0      0 :::5672                 :::*                    LISTEN      14506/beam.smp

[root@ip-192-169-142-127 ~]# iptables-save | grep 5672
-A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT
-A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT

The answer-file is attached.

Thanks.
Boris
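For what it's worth, the first thing to check here is which AMQP host nova-compute on 192.169.142.137 was actually configured with (a sketch: openstack-config comes from openstack-utils, and depending on the packaging the option may live under [oslo_messaging_rabbit] instead of [DEFAULT]):

# on the compute node: see where nova-compute thinks rabbit lives
grep -E '^(rabbit_host|rabbit_hosts)' /etc/nova/nova.conf

# if it still says localhost, point it at the controller and restart
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host 192.169.142.127
systemctl restart openstack-nova-compute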