<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
</head>
<body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; color: rgb(0, 0, 0); font-size: 14px; font-family: Calibri, sans-serif;">
<div>Arash,</div>
<div><br>
</div>
<div>If you're installing on devstack, please mail the openstack-dev mailing list and place [magnum] in the subject line.  This list is more targeted around RDO.</div>
<div><br>
</div>
<div>Regards</div>
<div>-steve</div>
<div><br>
</div>
<span id="OLK_SRC_BODY_SECTION">
<div style="font-family:Calibri; font-size:11pt; text-align:left; color:black; BORDER-BOTTOM: medium none; BORDER-LEFT: medium none; PADDING-BOTTOM: 0in; PADDING-LEFT: 0in; PADDING-RIGHT: 0in; BORDER-TOP: #b5c4df 1pt solid; BORDER-RIGHT: medium none; PADDING-TOP: 3pt">
<span style="font-weight:bold">From: </span>Arash Kaffamanesh <<a href="mailto:ak@cloudssky.com">ak@cloudssky.com</a>><br>
<span style="font-weight:bold">Date: </span>Monday, May 11, 2015 at 11:05 AM<br>
<span style="font-weight:bold">To: </span>Steven Dake <<a href="mailto:stdake@cisco.com">stdake@cisco.com</a>><br>
<span style="font-weight:bold">Cc: </span>"<a href="mailto:rdo-list@redhat.com">rdo-list@redhat.com</a>" <<a href="mailto:rdo-list@redhat.com">rdo-list@redhat.com</a>><br>
<span style="font-weight:bold">Subject: </span>Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1<br>
</div>
<div><br>
</div>
<blockquote id="MAC_OUTLOOK_ATTRIBUTION_BLOCKQUOTE" style="BORDER-LEFT: #b5c4df 5 solid; PADDING:0 0 0 5; MARGIN:0 0 0 5;">
<div>
<div>
<div dir="ltr">Steve,
<div><br>
</div>
<div>Thanks!</div>
<div><br>
</div>
<div>I pulled magnum from git on devstack, dropped the magnum db, created a new one</div>
<div>and tried to create a bay; now I'm getting "went to status error due to unknown" as below.</div>
<div><br>
</div>
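<div>(Roughly what I ran, from memory - the exact CLI flags and db commands may differ on your setup, and "testbaymodel" is just a placeholder name:)</div>
<div>
<p class=""><span class="">cd /opt/stack/magnum; git pull</span></p>
<p class=""><span class="">mysql -u root -p -e "DROP DATABASE magnum; CREATE DATABASE magnum;"</span></p>
<p class=""><span class="">magnum-db-manage upgrade        # recreate the schema (command name from memory)</span></p>
<p class=""><span class="">magnum bay-create --name testbay --baymodel testbaymodel --node-count 2   # flags as in the dev-quickstart</span></p>
</div>
<div><br>
</div>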
<div>nova list and magnum bay-list show:</div>
<div><br>
</div>
<div>
<p class=""><span class="">ubuntu@magnum:~/devstack$ nova list</span></p>
<p class=""><span class="">+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------+</span></p>
<p class=""><span class="">| ID                                   | Name                                                  | Status | Task State | Power State | Networks                                                              |</span></p>
<p class=""><span class="">+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------+</span></p>
<p class=""><span class="">| 797b6057-1ddf-4fe3-8688-b63e5e9109b4 | te-h5yvoiptrmx3-0-4w4j2ltnob7a-kube_node-vg7rojnafrub | ERROR  | -          | NOSTATE     | testbay-6kij6pvui3p7-fixed_network-46mvxv7yfjzw=10.0.0.5, 2001:db8::f |</span></p>
<p class=""><span class="">| c0b56f08-8a4d-428a-aee1-b29ca6e68163 | testbay-6kij6pvui3p7-kube_master-z3lifgrrdxie         | ACTIVE | -          | Running     | testbay-6kij6pvui3p7-fixed_network-46mvxv7yfjzw=10.0.0.3, 2001:db8::d |</span></p>
<p class=""><span class="">+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------+</span></p>
</div>
<div>
<p class=""><span class="">ubuntu@magnum:~/devstack$ magnum bay-list</span></p>
<p class=""><span class="">+--------------------------------------+---------+------------+---------------+</span></p>
<p class=""><span class="">| uuid                                 | name    | node_count | status        |</span></p>
<p class=""><span class="">+--------------------------------------+---------+------------+---------------+</span></p>
<p class=""><span class="">| 87e36c44-a884-4cb4-91cc-c7ae320f33b4 | testbay | 2          | CREATE_FAILED |</span></p>
<p class=""><span class="">+--------------------------------------+---------+------------+---------------+</span></p>
</div>
<div>
<p class=""><span class="">e3a65b05f", "flannel_network_subnetlen": "24", "fixed_network_cidr": "<a href="http://10.0.0.0/24">10.0.0.0/24</a>", "OS::stack_id": "d0246d48-23e0-4aa0-87e0-052b2ca363e8", "OS::stack_name": "testbay-6kij6pvui3p7", "master_flavor":
 "m1.small", "external_network_id": "e3e2a633-1638-4c11-a994-7179a24e826e", "portal_network_cidr": "<a href="http://10.254.0.0/16">10.254.0.0/16</a>", "docker_volume_size": "5", "ssh_key_name": "testkey", "kube_allow_priv": "true", "number_of_minions": "2",
 "flannel_use_vxlan": "false", "flannel_network_cidr": "<a href="http://10.100.0.0/16">10.100.0.0/16</a>", "server_flavor": "m1.medium", "dns_nameserver": "8.8.8.8", "server_image": "fedora-21-atomic-3"}, "id": "d0246d48-23e0-4aa0-87e0-052b2ca363e8", "outputs":
 [{"output_value": ["2001:db8::f", "2001:db8::e"], "description": "No description given", "output_key": "kube_minions_external"}, {"output_value": ["10.0.0.5", "10.0.0.4"], "description": "No description given", "output_key": "kube_minions"}, {"output_value":
 "2001:db8::d", "description": "No description given", "output_key": "kube_master"}], "template_description": "This template will boot a Kubernetes cluster with one or more minions (as specified by the number_of_minions parameter, which defaults to \"2\").\n"}}</span></p>
<p class=""><span class=""> log_http_response /usr/local/lib/python2.7/dist-packages/heatclient/common/http.py:141</span></p>
<p class=""><span class="">2015-05-11 17:31:15.968 30006 ERROR magnum.conductor.handlers.bay_k8s_heat [-] Unable to create bay, stack_id: d0246d48-23e0-4aa0-87e0-052b2ca363e8, reason: Resource CREATE failed: ResourceUnknownStatus: Resource failed - Unknown
 status FAILED due to "Resource CREATE failed: ResourceUnknownStatus: Resource failed - Unknown status FAILED due to "Resource CREATE failed: ResourceInError: Went to status error due to "Unknown"""</span></p>
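<p class=""><span class="">(I guess the next step is to drill down from the heat stack to the underlying nova fault, something along these lines:)</span></p>
<p class=""><span class="">heat resource-list testbay-6kij6pvui3p7          # find the failed resource</span></p>
<p class=""><span class="">nova show 797b6057-1ddf-4fe3-8688-b63e5e9109b4   # the ERROR'd node, check the fault field</span></p>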
<p class=""><span class="">Any Idea?</span></p>
<p class=""><span class="">Thanks!<br>
-Arash</span></p>
<p class=""><span class=""><br>
</span></p>
</div>
<div class="gmail_extra">
<div class="gmail_quote">On Mon, May 11, 2015 at 2:04 AM, Steven Dake (stdake) <span dir="ltr">
<<a href="mailto:stdake@cisco.com" target="_blank">stdake@cisco.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div style="word-wrap:break-word;color:rgb(0,0,0);font-size:14px;font-family:Calibri,sans-serif">
<div>Arash,</div>
<div><br>
</div>
<div>The short of it is Magnum 2015.1.0 is DOA.</div>
<div><br>
</div>
<div>Four commits have hit the repository in the last hour to fix these problems.  Now Magnum works with the v1beta3 examples from Kubernetes 0.15, with the exception of the service object.  We are actively working on that problem upstream – I'll update when it's fixed.</div>
<div><br>
</div>
<div>To see my run, check out:</div>
<div>
<p style="margin:0px;font-size:11px;font-family:Menlo"><a href="http://ur1.ca/kc613" target="_blank">http://ur1.ca/kc613</a> ->
<a href="http://paste.fedoraproject.org/220479/13022911" target="_blank">http://paste.fedoraproject.org/220479/13022911</a></p>
</div>
<div><br>
</div>
<div>To upgrade and see everything working but the service object, you will have to remove your openstack-magnum package if using my COPR repo, or git pull on your Magnum repo if using devstack.</div>
<div><br>
</div>
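<div>(Something like the following, depending on which install you have - the repo refresh part depends on your setup:)</div>
<div>
<p style="margin:0px;font-size:11px;font-family:Menlo"># COPR repo: yum remove openstack-magnum; yum install openstack-magnum</p>
<p style="margin:0px;font-size:11px;font-family:Menlo"># devstack:  cd /opt/stack/magnum; git pull</p>
</div>
<div><br>
</div>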
<div>Boris - interested to hear your feedback on running this on a CentOS distro once we get that service bug fixed.</div>
<div><br>
</div>
<div>Regards</div>
<div>-steve</div>
<div><br>
</div>
<div><br>
</div>
<span>
<div style="font-family:Calibri;font-size:11pt;text-align:left;color:black;border-width:1pt medium medium;border-style:solid none none;padding:3pt 0in 0in;border-top-color:rgb(181,196,223)">
<span style="font-weight:bold">From: </span>Arash Kaffamanesh <<a href="mailto:ak@cloudssky.com" target="_blank">ak@cloudssky.com</a>><br>
<span style="font-weight:bold">Date: </span>Sunday, May 10, 2015 at 4:10 PM<br>
<span style="font-weight:bold">To: </span>Steven Dake <<a href="mailto:stdake@cisco.com" target="_blank">stdake@cisco.com</a>>
<div>
<div class="h5"><br>
<span style="font-weight:bold">Cc: </span>"<a href="mailto:rdo-list@redhat.com" target="_blank">rdo-list@redhat.com</a>" <<a href="mailto:rdo-list@redhat.com" target="_blank">rdo-list@redhat.com</a>><br>
<span style="font-weight:bold">Subject: </span>Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1<br>
</div>
</div>
</div>
<div>
<div class="h5">
<div><br>
</div>
<blockquote style="BORDER-LEFT:#b5c4df 5 solid;PADDING:0 0 0 5;MARGIN:0 0 0 5">
<div>
<div>
<div dir="ltr">Steve,
<div><br>
</div>
<div>Thanks for your kind advice.<br>
<div><br>
</div>
<div>I'm first trying to go through the quickstart for magnum with devstack on Ubuntu, and I'm also</div>
<div>following this guide to create a bay with 2 nodes:</div>
<div><br>
</div>
<div><a href="http://git.openstack.org/cgit/openstack/magnum/tree/doc/source/dev/dev-quickstart.rst" target="_blank">http://git.openstack.org/cgit/openstack/magnum/tree/doc/source/dev/dev-quickstart.rst</a><br>
</div>
<div><br>
</div>
<div>I got fairly far, but when running this step to create the service that provides a discoverable endpoint for the redis sentinels in the cluster:</div>
<div><br>
</div>
<div>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><pre style="padding:0px;margin-top:0px;margin-bottom:0px"><code> magnum service-create --manifest ./redis-sentinel-service.yaml --bay testbay</code></pre></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><code><br></code></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px">I'm getting:</pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><span style="font-family: arial, sans-serif;">
ERROR: Invalid resource state. (HTTP 409)</span><br></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><span style="font-family: arial, sans-serif;"><br></span></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><span style="font-family: arial, sans-serif;">In the console, I see:</span></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><span style="font-family: arial, sans-serif;">
2015-05-10 22:19:44.010 4967 INFO oslo_messaging._drivers.impl_rabbit [-] Connected to AMQP server on <a href="http://127.0.0.1:5672" target="_blank">127.0.0.1:5672</a></span><br></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><p><span>2015-05-10 22:19:44.050 4967 WARNING wsme.api [-] Client-side error: Invalid resource state.</span></p><p><span>127.0.0.1 - - [10/May/2015 22:19:44] "POST /v1/rcs HTTP/1.1" 409 115</span></p></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><span style="font-family: arial, sans-serif;"><br></span></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><span style="font-family: arial, sans-serif;">The testbay is running with 2 nodes properly:</span><br></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><p><span>ubuntu@magnum:~/kubernetes/examples/redis$ magnum bay-list</span></p><p><span>| 4fa480a7-2d96-4a3e-876b-1c59d67257d6 | testbay | 2          | CREATE_COMPLETE |</span></p><pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><br></pre>Any ideas, where I could dig for the problem?</pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><br></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><span style="font-family: arial, sans-serif;">By the way after running "magnum pod-create .." the status shows "failed"</span><br></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><span style="font-family: arial, sans-serif;">
ubuntu@magnum:~/kubernetes/examples/redis/v1beta3$ magnum pod-create --manifest ./redis-master.yaml --bay testbay</span><br></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><p><span>+--------------+---------------------------------------------------------------------+</span></p><p><span>| Property     | Value                                                               |</span></p><p><span>+--------------+---------------------------------------------------------------------+</span></p><p><span>| status       | failed                                                              |</span></p><p><br></p><p>And the pod-list shows:</p><p><span>ubuntu@magnum:~$ magnum pod-list</span></p><p><span style="font-family: arial, sans-serif;">+--------------------------------------+--------------+</span><br></p><p><span style="font-family: arial, sans-serif;">| uuid </span><span style="font-family: arial, sans-serif;">                                </span><span style="font-family: arial, sans-serif;">| name </span><span style="font-family: arial, sans-serif;">        </span><span style="font-family: arial, sans-serif;">|</span><br></p><p><span style="font-family: arial, sans-serif;">+--------------------------------------+--------------+</span><br></p><p><span style="font-family: arial, sans-serif;">| 8d6977c1-a88f-45ee-be6c-fd869874c588 | redis-master |</span><br></p><p>
I also tried setting the status to running in the pod database table, but it didn't help.</p><p><span style="font-family: arial, sans-serif;">P.S.: I also tried to run the whole thing on fedora 21 with devstack, but I got more problems </span><span style="font-family: arial, sans-serif;">than on Ubuntu.</span><br></p></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><br></pre>
<pre style="padding:0px;margin-top:0px;margin-bottom:0px;color:rgb(0,0,0);font-size:14px"><pre style="padding:0px;margin-top:0px;margin-bottom:0px">Many thanks in advance for your help!</pre><pre style="padding:0px;margin-top:0px;margin-bottom:0px">Arash</pre><div><br></div></pre>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Mon, May 4, 2015 at 12:54 AM, Steven Dake (stdake) <span dir="ltr">
<<a href="mailto:stdake@cisco.com" target="_blank">stdake@cisco.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div style="word-wrap:break-word;color:rgb(0,0,0);font-size:14px;font-family:Calibri,sans-serif">
<div>Boris,</div>
<div><br>
</div>
<div>Feel free to try out my Magnum packages here.  They work in containers, not sure about CentOS.  I’m not certain the systemd files are correct (I didn’t test that part) but the dependencies are correct:</div>
<div><br>
</div>
<div><a href="https://copr.fedoraproject.org/coprs/sdake/openstack-magnum/" target="_blank">https://copr.fedoraproject.org/coprs/sdake/openstack-magnum/</a></div>
<div><br>
</div>
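<div>(On CentOS 7 that would be roughly the following - the exact .repo file is on the COPR page above:)</div>
<div>
<p style="margin:0px;font-size:11px;font-family:Menlo"># copy the .repo file from the COPR page into /etc/yum.repos.d/, then:</p>
<p style="margin:0px;font-size:11px;font-family:Menlo">yum install openstack-magnum</p>
</div>
<div><br>
</div>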
<div>NB you will have to run through the quickstart configuration guide here:</div>
<div><br>
</div>
<div><a href="https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-manual-devstack.rst" target="_blank">https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-manual-devstack.rst</a></div>
<div><br>
</div>
<div><u>Regards</u></div>
<div><u>-steve</u></div>
<div><u><br>
</u></div>
<span>
<div style="font-family:Calibri;font-size:11pt;text-align:left;color:black;border-width:1pt medium medium;border-style:solid none none;padding:3pt 0in 0in;border-top-color:rgb(181,196,223)">
<span style="font-weight:bold">From: </span>Boris Derzhavets <<a href="mailto:bderzhavets@hotmail.com" target="_blank">bderzhavets@hotmail.com</a>><br>
<span style="font-weight:bold">Date: </span>Sunday, May 3, 2015 at 11:20 AM<br>
<span style="font-weight:bold">To: </span>Arash Kaffamanesh <<a href="mailto:ak@cloudssky.com" target="_blank">ak@cloudssky.com</a>><br>
<span style="font-weight:bold">Cc: </span>"<a href="mailto:rdo-list@redhat.com" target="_blank">rdo-list@redhat.com</a>" <<a href="mailto:rdo-list@redhat.com" target="_blank">rdo-list@redhat.com</a>>
<div>
<div><br>
<span style="font-weight:bold">Subject: </span>Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1<br>
</div>
</div>
</div>
<div>
<div>
<div><br>
</div>
<blockquote style="BORDER-LEFT:#b5c4df 5 solid;PADDING:0 0 0 5;MARGIN:0 0 0 5">
<div>
<div>
<div dir="ltr">Arash,<br>
<br>
Please, disregard this notice :-<br>
<br>
>You wrote :-<br>
<br>
>> What I noticed here, if I associate a floating ip to a VM with 2 interfaces, then I'll lose the
<br>
>> connectivity to the instance and Kilo<br>
<br>
Different types of VMs in your environment and in mine.<br>
<br>
Boris.<br>
<br>
<div>
<hr>
Date: Sun, 3 May 2015 16:51:54 +0200<br>
Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1<br>
From: <a href="mailto:ak@cloudssky.com" target="_blank">ak@cloudssky.com</a><br>
To: <a href="mailto:bderzhavets@hotmail.com" target="_blank">bderzhavets@hotmail.com</a><br>
CC: <a href="mailto:apevec@gmail.com" target="_blank">apevec@gmail.com</a>; <a href="mailto:rdo-list@redhat.com" target="_blank">
rdo-list@redhat.com</a><br>
<br>
<div dir="ltr">Boris, thanks for your kind feedback.
<div><br>
I did a 3 node Kilo RC2 virt setup on top of my Kilo RC2 which was installed on bare metal.</div>
<div>The installation was successful on the first run.</div>
<div><br>
The network looks like this:</div>
<div><a href="https://cloudssky.com/.galleries/images/kilo-virt-setup.png" target="_blank">https://cloudssky.com/.galleries/images/kilo-virt-setup.png</a><br>
</div>
<div><br>
</div>
<div>For this setup I added the latest CentOS cloud image to glance, ran an instance (controller), enabled root login,</div>
<div>added ifcfg-eth1 to the instance, created a snapshot from the controller, added the repos to this instance, yum updated,</div>
<div>rebooted, and spawned the network and compute1 VM nodes from that snapshot.</div>
<div>(To be able to ssh into the VMs over 20.0.1.0 network, I created the gate VM with a floating ip assigned and installed OpenVPN<br>
</div>
<div>on it.)</div>
<div><br>
</div>
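<div>(The image/snapshot part was roughly the following - flags from memory, Kilo-era glance v1 CLI; the image, key and snapshot names are just placeholders:)</div>
<div><span>glance image-create --name centos71 --disk-format qcow2 --container-format bare --is-public True --file CentOS-7-x86_64-GenericCloud.qcow2</span><br>
<span>nova boot --flavor m1.medium --image centos71 --key-name mykey controller</span><br>
<span>nova image-create controller controller-snap   # snapshot later used to spawn the network and compute1 nodes</span><br>
</div>
<div><br>
</div>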
<div>What I noticed here: if I associate a floating ip to a VM with 2 interfaces, then I lose connectivity to the instance and Kilo</div>
<div>goes crazy (the AIO controller on bare metal somehow loses its br-ex interface, but I didn't try to reproduce it again).</div>
<div><br>
</div>
<div>The packstack file was created in interactive mode with:<br>
</div>
<div><br>
</div>
<div>packstack --answer-file= --> press enter</div>
<div><br>
</div>
<div>I accepted most default values and selected trove and heat to be installed.</div>
<div><br>
</div>
<div>The answers are on pastebin:</div>
<div><br>
</div>
<div><a href="http://pastebin.com/SYp8Qf7d" target="_blank">http://pastebin.com/SYp8Qf7d</a><br>
</div>
<div><br>
The generated packstack file is here:<br>
</div>
<div><br>
</div>
<div><a href="http://pastebin.com/XqJuvQxf" target="_blank">http://pastebin.com/XqJuvQxf</a><br>
</div>
<div>The br-ex interfaces and changes to eth0 are created correctly on the network and compute nodes (output below).<br>
And one nice thing for me, coming from Havana, was to see how easy it has become to create an image in Horizon<br>
by uploading an image file (in my case rancheros.iso and centos.qcow2 worked like a charm).<br>
Now it's time to discover Ironic, Trove and Manila, and if someone has tips or guidelines on how to test these<br>
new exciting things, or has any news about Murano or Magnum on RDO, I'll be even more excited about them<br>
than I am now about Kilo :-)<br>
Thanks!<br>
Arash<br>
---<br>
Some outputs here:<br>
<span>[root@controller ~(keystone_admin)]# nova hypervisor-list</span><br>
<span>+----+---------------------+-------+---------+</span><br>
<span>| ID | Hypervisor hostname | State | Status  |</span><br>
<span>+----+---------------------+-------+---------+</span><br>
<span>| 1  | compute1.novalocal   | up    | enabled |</span><br>
<br>
<span>+----+---------------------+-------+---------+</span><br>
<span>[root@network ~]# ovs-vsctl show</span><br>
<span>436a6114-d489-4160-b469-f088d66bd752</span><br>
<span>    Bridge br-tun</span><br>
<span>        fail_mode: secure</span><br>
<span>        Port "vxlan-14000212"</span><br>
<span>            Interface "vxlan-14000212"</span><br>
<span>                type: vxlan</span><br>
<span>                options: {df_default="true", in_key=flow, local_ip="20.0.2.19", out_key=flow, remote_ip="20.0.2.18"}</span><br>
<span>        Port br-tun</span><br>
<span>            Interface br-tun</span><br>
<span>                type: internal</span><br>
<span>        Port patch-int</span><br>
<span>            Interface patch-int</span><br>
<span>                type: patch</span><br>
<span>                options: {peer=patch-tun}</span><br>
<span>    Bridge br-int</span><br>
<span>        fail_mode: secure</span><br>
<span>        Port br-int</span><br>
<span>            Interface br-int</span><br>
<span>                type: internal</span><br>
<span>        Port int-br-ex</span><br>
<span>            Interface int-br-ex</span><br>
<span>                type: patch</span><br>
<span>                options: {peer=phy-br-ex}</span><br>
<span>        Port patch-tun</span><br>
<span>            Interface patch-tun</span><br>
<span>                type: patch</span><br>
<span>                options: {peer=patch-int}</span><br>
<span>    Bridge br-ex</span><br>
<span>        Port br-ex</span><br>
<span>            Interface br-ex</span><br>
<span>                type: internal</span><br>
<span>        Port phy-br-ex</span><br>
<span>            Interface phy-br-ex</span><br>
<span>                type: patch</span><br>
<span>                options: {peer=int-br-ex}</span><br>
<span>        Port "eth0"</span><br>
<span>            Interface "eth0"</span><br>
<br>
<span>    ovs_version: "2.3.1"</span><br>
<span><br>
</span><br>
<span>[root@compute~]# ovs-vsctl show</span><br>
<span>8123433e-b477-4ef5-88aa-721487a4bd58</span><br>
<span>    Bridge br-int</span><br>
<span>        fail_mode: secure</span><br>
<span>        Port int-br-ex</span><br>
<span>            Interface int-br-ex</span><br>
<span>                type: patch</span><br>
<span>                options: {peer=phy-br-ex}</span><br>
<span>        Port patch-tun</span><br>
<span>            Interface patch-tun</span><br>
<span>                type: patch</span><br>
<span>                options: {peer=patch-int}</span><br>
<span>        Port br-int</span><br>
<span>            Interface br-int</span><br>
<span>                type: internal</span><br>
<span>    Bridge br-tun</span><br>
<span>        fail_mode: secure</span><br>
<span>        Port br-tun</span><br>
<span>            Interface br-tun</span><br>
<span>                type: internal</span><br>
<span>        Port patch-int</span><br>
<span>            Interface patch-int</span><br>
<span>                type: patch</span><br>
<span>                options: {peer=patch-tun}</span><br>
<span>        Port "vxlan-14000213"</span><br>
<span>            Interface "vxlan-14000213"</span><br>
<span>                type: vxlan</span><br>
<span>                options: {df_default="true", in_key=flow, local_ip="20.0.2.18", out_key=flow, remote_ip="20.0.2.19"}</span><br>
<span>    Bridge br-ex</span><br>
<span>        Port phy-br-ex</span><br>
<span>            Interface phy-br-ex</span><br>
<span>                type: patch</span><br>
<span>                options: {peer=int-br-ex}</span><br>
<span>        Port "eth0"</span><br>
<span>            Interface "eth0"</span><br>
<span>        Port br-ex</span><br>
<span>            Interface br-ex</span><br>
<span>                type: internal</span><br>
<span></span><br>
<span>    ovs_version: "2.3.1"</span><br>
<span><br>
</span><br>
<span><br>
</span><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>
<div>On Sat, May 2, 2015 at 9:02 AM, Boris Derzhavets <span dir="ltr"><<a href="mailto:bderzhavets@hotmail.com" target="_blank">bderzhavets@hotmail.com</a>></span> wrote:<br>
<blockquote style="border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div>
<div dir="ltr">Thank you once again it really works.<br>
<br>
[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list<br>
+----+----------------------------------------+-------+---------+<br>
| ID | Hypervisor hostname                    | State | Status  |<br>
+----+----------------------------------------+-------+---------+<br>
| 1  | <a href="http://ip-192-169-142-127.ip.secureserver.net" target="_blank">ip-192-169-142-127.ip.secureserver.net</a> | up    | enabled |<br>
| 2  | <a href="http://ip-192-169-142-137.ip.secureserver.net" target="_blank">ip-192-169-142-137.ip.secureserver.net</a> | up    | enabled |<br>
+----+----------------------------------------+-------+---------+<br>
<br>
[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-servers <a href="http://ip-192-169-142-137.ip.secureserver.net" target="_blank">
ip-192-169-142-137.ip.secureserver.net</a><br>
+--------------------------------------+-------------------+---------------+----------------------------------------+<br>
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname                    |<br>
+--------------------------------------+-------------------+---------------+----------------------------------------+<br>
| 16ab7825-1403-442e-b3e2-7056d14398e0 | instance-00000002 | 2             | <a href="http://ip-192-169-142-137.ip.secureserver.net" target="_blank">
ip-192-169-142-137.ip.secureserver.net</a> |<br>
| 5fa444c8-30b8-47c3-b073-6ce10dd83c5a | instance-00000004 | 2             | <a href="http://ip-192-169-142-137.ip.secureserver.net" target="_blank">
ip-192-169-142-137.ip.secureserver.net</a> |<br>
+--------------------------------------+-------------------+---------------+----------------------------------------+<br>
<br>
with only one issue:-<br>
<br>
 during AIO run CONFIG_NEUTRON_OVS_TUNNEL_IF=<br>
 during Compute Node setup CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1<br>
<br>
 and this finally results in a mess in the ml2_vxlan_endpoints table. I had to manually update<br>
 ml2_vxlan_endpoints and restart neutron-openvswitch-agent.service on both nodes;<br>
 afterwards the VMs on the compute node obtained access to the metadata server.<br>
<br>
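(Something along these lines - the exact SQL depends on which endpoint rows got recorded; WRONG_IP is a placeholder and the db credentials come from your neutron config:)<br>
<span>mysql neutron -e "SELECT * FROM ml2_vxlan_endpoints;"</span><br>
<span>mysql neutron -e "DELETE FROM ml2_vxlan_endpoints WHERE ip_address='WRONG_IP';"</span><br>
<span>systemctl restart neutron-openvswitch-agent.service</span><br>
<br>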
 I also believe that deleting the matching records from the "compute_nodes" and "services" tables
<br>
 (along with disabling nova-compute on the Controller) could turn the AIO host into a real Controller.<br>
<br>
Boris.<br>
<br>
<div>
<hr>
Date: Fri, 1 May 2015 22:22:41 +0200<br>
Subject: Re: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1<br>
From: <a href="mailto:ak@cloudssky.com" target="_blank">ak@cloudssky.com</a><br>
To: <a href="mailto:bderzhavets@hotmail.com" target="_blank">bderzhavets@hotmail.com</a><br>
CC: <a href="mailto:apevec@gmail.com" target="_blank">apevec@gmail.com</a>; <a href="mailto:rdo-list@redhat.com" target="_blank">
rdo-list@redhat.com</a><br>
<br>
<div dir="ltr">
<div>I got the compute node working by adding the delorean-kilo.repo on the compute node,</div>
<div>yum updating the compute node, rebooting, and extending the packstack file from the first AIO</div>
<div>install with the IP of the compute node; I then ran packstack again with NetworkManager enabled</div>
<div>and did a second yum update on the compute node before the 3rd packstack run, and now it works :-)</div>
<div><br>
</div>
<div>In short, for RC2 we have to force nova-compute to run on the compute node by hand, roughly as sketched below,</div>
<div>before running packstack again from the controller on an existing AIO install.</div>
<div><br>
</div>
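<div>(Roughly, with ANSWER_FILE as a placeholder for the answer file from the first AIO run:)</div>
<div><span># compute node: copy the same delorean-kilo .repo file as on the controller into /etc/yum.repos.d/, then:</span><br>
<span>yum -y update; reboot</span><br>
<span># controller: add the compute node IP to CONFIG_COMPUTE_HOSTS in the answer file and re-run:</span><br>
<span>packstack --answer-file=ANSWER_FILE</span><br>
</div>
<div><br>
</div>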
<div>Now I have 2 compute nodes (controller AIO with compute + 2nd compute) and could spawn a</div>
<div>3rd cirros instance which landed on 2nd compute node.</div>
<div>ssh'ing into the instances over the floating ip works fine too.</div>
<div><br>
</div>
<div>Before running packstack again, I set:</div>
<div><br>
EXCLUDE_SERVERS=<ip of controller><br>
</div>
<div><br>
</div>
<div><span>[root@csky01 ~(keystone_osx)]# virsh list --all</span><br>
<span> Id    Name                           Status</span><br>
<span>----------------------------------------------------</span><br>
<span> 2     instance-00000001              laufend </span>--> means running in German<br>
<span></span><br>
<span> 3     instance-00000002              laufend </span>--> means running in German<br>
<br>
<br>
</div>
<div><span>[root@csky06 ~]# virsh list --all</span><br>
<span> Id    Name                           Status</span><br>
<span>----------------------------------------------------</span><br>
<span> 2     instance-00000003              laufend --> means running in German</span><br>
<br>
<br>
</div>
<div>== Nova managed services ==<br>
</div>
<div><span>+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+</span><br>
<span>| Id | Binary           | Host           | Zone     | Status  | State | Updated_at                 | Disabled Reason |</span><br>
<span>+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+</span><br>
<span>| 1  | nova-consoleauth | <a href="http://csky01.csg.net" target="_blank">csky01.csg.net</a> | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |</span><br>
<span>| 2  | nova-conductor   | <a href="http://csky01.csg.net" target="_blank">csky01.csg.net</a> | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |</span><br>
<span>| 3  | nova-scheduler   | <a href="http://csky01.csg.net" target="_blank">csky01.csg.net</a> | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |</span><br>
<span>| 4  | nova-compute     | <a href="http://csky01.csg.net" target="_blank">csky01.csg.net</a> | nova     | enabled | up    | 2015-05-01T19:46:40.000000 | -               |</span><br>
<span>| 5  | nova-cert        | <a href="http://csky01.csg.net" target="_blank">csky01.csg.net</a> | internal | enabled | up    | 2015-05-01T19:46:42.000000 | -               |</span><br>
<span>| 6  | nova-compute     | <a href="http://csky06.csg.net" target="_blank">csky06.csg.net</a> | nova     | enabled | up    | 2015-05-01T19:46:38.000000 | -               |</span><br>
<span>+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+</span><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>
<div>On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets <span dir="ltr"><<a href="mailto:bderzhavets@hotmail.com" target="_blank">bderzhavets@hotmail.com</a>></span> wrote:<br>
<blockquote style="border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div>
<div dir="ltr">Ran packstack --debug --answer-file=./answer-fileRC2.txt<br>
192.169.142.137_nova.pp.log.gz attached<br>
<br>
Boris<br>
<br>
<div>
<hr>
From: <a href="mailto:bderzhavets@hotmail.com" target="_blank">bderzhavets@hotmail.com</a><br>
To: <a href="mailto:apevec@gmail.com" target="_blank">apevec@gmail.com</a><br>
Date: Fri, 1 May 2015 01:44:17 -0400<br>
CC: <a href="mailto:rdo-list@redhat.com" target="_blank">rdo-list@redhat.com</a><br>
Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1<br>
<br>
<div dir="ltr">Follow instructions <a href="https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html" target="_blank">
https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html</a><br>
packstack fails :-<br>
<br>
Applying 192.169.142.127_nova.pp<br>
Applying 192.169.142.137_nova.pp<br>
192.169.142.127_nova.pp:                             [ DONE ]      <br>
192.169.142.137_nova.pp:                          [ ERROR ]        <br>
Applying Puppet manifests                         [ ERROR ]<br>
<br>
ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp<br>
Error: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details.<br>
You will find full trace in log /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log<br>
<br>
In both cases (RC2 or CI repos)  on compute node 192.169.142.137 /var/log/nova/nova-compute.log<br>
reports :-<br>
<br>
2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 seconds...<br>
2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on localhost:5672<br>
2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds.<br>
<br>
Seems like it is looking for the AMQP server on the wrong host; it should be 192.169.142.127.<br>
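A quick check on 192.169.142.137 (option names per Kilo-era oslo.messaging, not verified against the generated config):<br>
<span>grep -E 'rabbit_host|rabbit_hosts' /etc/nova/nova.conf    # should point at 192.169.142.127, not localhost</span><br>
<br>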
On 192.169.142.127 :-<br>
<br>
[root@ip-192-169-142-127 ~]# netstat -lntp | grep 5672<br>
==>  tcp        0      0 <a href="http://0.0.0.0:25672" target="_blank">0.0.0.0:25672</a>           0.0.0.0:*               LISTEN      14506/beam.smp     
<br>
        tcp6       0      0 :::5672                              :::*                    LISTEN      14506/beam.smp  
<br>
<br>
[root@ip-192-169-142-127 ~]# iptables-save | grep 5672<br>
-A INPUT -s <a href="http://192.169.142.127/32" target="_blank">192.169.142.127/32</a> -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT<br>
-A INPUT -s <a href="http://192.169.142.137/32" target="_blank">192.169.142.137/32</a> -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT<br>
<br>
Answer-file is attached<br>
<br>
Thanks.<br>
Boris<br>
</div>
<br>
</div>
</div>
</div>
<br>
_______________________________________________<br>
Rdo-list mailing list<br>
<a href="mailto:Rdo-list@redhat.com" target="_blank">Rdo-list@redhat.com</a><br>
<a href="https://www.redhat.com/mailman/listinfo/rdo-list" target="_blank">https://www.redhat.com/mailman/listinfo/rdo-list</a><br>
<br>
To unsubscribe: <a href="mailto:rdo-list-unsubscribe@redhat.com" target="_blank">
rdo-list-unsubscribe@redhat.com</a><br>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</span></div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</span></div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</blockquote>
</span>
</body>
</html>