[Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
Steven Dake (stdake)
stdake at cisco.com
Sun May 3 22:54:45 UTC 2015
Boris,
Feel free to try out my Magnum packages here. They work in containers; I'm not sure about CentOS. I'm not certain the systemd files are correct (I didn't test that part), but the dependencies are correct:
https://copr.fedoraproject.org/coprs/sdake/openstack-magnum/
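On CentOS 7, something along these lines might work to enable the repo (a sketch only; the .repo URL pattern and the package name are assumptions, so verify both on the copr page):

# fetch the repo file from copr (URL pattern assumed; check the copr page for the exact link)
curl -o /etc/yum.repos.d/sdake-openstack-magnum.repo \
  https://copr.fedoraproject.org/coprs/sdake/openstack-magnum/repo/epel-7/sdake-openstack-magnum-epel-7.repo
yum install -y openstack-magnum   # package name assumed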
NB you will have to run through the quickstart configuration guide here:
https://github.com/openstack/magnum/blob/master/doc/source/dev/dev-manual-devstack.rst
Regards
-steve
From: Boris Derzhavets <bderzhavets at hotmail.com>
Date: Sunday, May 3, 2015 at 11:20 AM
To: Arash Kaffamanesh <ak at cloudssky.com>
Cc: "rdo-list at redhat.com" <rdo-list at redhat.com>
Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
Arash,
Please, disregard this notice :-
>You wrote :-
>> What I noticed here, if I associate a floating ip to a VM with 2 interfaces, then I'll lose the
>> connectivity to the instance and Kilo
Our environments run different types of VMs.
Boris.
________________________________
Date: Sun, 3 May 2015 16:51:54 +0200
Subject: Re: [Rdo-list] RE(2) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
From: ak at cloudssky.com
To: bderzhavets at hotmail.com
CC: apevec at gmail.com; rdo-list at redhat.com
Boris, thanks for your kind feedback.
I did a 3-node Kilo RC2 virt setup on top of my Kilo RC2, which was installed on bare metal.
The installation succeeded on the first run.
The network looks like this:
https://cloudssky.com/.galleries/images/kilo-virt-setup.png
For this setup I added the latest CentOS cloud image to Glance, ran an instance (controller), enabled root login,
added ifcfg-eth1 to the instance, created a snapshot of the controller, added the repos to that instance, yum updated,
rebooted, and spawned the network and compute1 VM nodes from that snapshot.
(To be able to ssh into the VMs over the 20.0.1.0 network, I created a gate VM with a floating IP assigned and installed
OpenVPN on it.)
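(Roughly, the gate VM steps were something like the sketch below; the image, flavor, net-id and pool names are placeholders, and the exact CLI may differ on your Kilo install:)

# boot the gate VM on the management network
nova boot --image centos-cloud --flavor m1.small --nic net-id=<mgmt-net-id> gate
# allocate a floating IP from the external pool and attach it
nova floating-ip-create public
nova floating-ip-associate gate <floating-ip>
# then install OpenVPN inside "gate" to reach the 20.0.1.0 network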
What I noticed here: if I associate a floating IP to a VM with 2 interfaces, then I lose connectivity to the instance and Kilo
goes crazy (the AIO controller on bare metal somehow loses its br-ex interface, but I didn't try to reproduce it again).
The packstack answer file was created in interactive mode with:
packstack --answer-file=   (then just press Enter)
I accepted most default values and selected Trove and Heat to be installed.
The answers are on pastebin:
http://pastebin.com/SYp8Qf7d
The generated packstack file is here:
http://pastebin.com/XqJuvQxf
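(As a non-interactive alternative, a rough sketch: generate the answer file first, edit it, then apply it. The two CONFIG_* keys shown are how Heat and Trove are usually enabled, but verify them against your packstack version:)

packstack --gen-answer-file=answer.txt
# edit answer.txt, e.g.:
#   CONFIG_HEAT_INSTALL=y
#   CONFIG_TROVE_INSTALL=y
packstack --answer-file=answer.txt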
The br-ex interfaces and the changes to eth0 were created correctly on the network and compute nodes (output below).
And one nice thing for me, coming from Havana, was to see how easy it has become to create an image in Horizon
by uploading an image file (in my case rancheros.iso and centos.qcow2 worked like a charm).
Now it's time to discover Ironic, Trove and Manila, and if someone has tips or guidelines on how to test these
exciting new things, or any news about Murano or Magnum on RDO, I'll be even more excited
than I already am about Kilo :-)
Thanks!
Arash
---
Some outputs here:
[root at controller ~(keystone_admin)]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status |
+----+---------------------+-------+---------+
| 1 | compute1.novalocal | up | enabled |
+----+---------------------+-------+---------+
[root at network ~]# ovs-vsctl show
436a6114-d489-4160-b469-f088d66bd752
Bridge br-tun
fail_mode: secure
Port "vxlan-14000212"
Interface "vxlan-14000212"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="20.0.2.19", out_key=flow, remote_ip="20.0.2.18"}
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth0"
Interface "eth0"
ovs_version: "2.3.1"
[root at compute ~]# ovs-vsctl show
8123433e-b477-4ef5-88aa-721487a4bd58
Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "vxlan-14000213"
Interface "vxlan-14000213"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="20.0.2.18", out_key=flow, remote_ip="20.0.2.19"}
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth0"
Interface "eth0"
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.3.1"
On Sat, May 2, 2015 at 9:02 AM, Boris Derzhavets <bderzhavets at hotmail.com> wrote:
Thank you once again, it really works.
[root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list
+----+----------------------------------------+-------+---------+
| ID | Hypervisor hostname | State | Status |
+----+----------------------------------------+-------+---------+
| 1 | ip-192-169-142-127.ip.secureserver.net | up | enabled |
| 2 | ip-192-169-142-137.ip.secureserver.net | up | enabled |
+----+----------------------------------------+-------+---------+
[root at ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-servers ip-192-169-142-137.ip.secureserver.net
+--------------------------------------+-------------------+---------------+----------------------------------------+
| ID | Name | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+----------------------------------------+
| 16ab7825-1403-442e-b3e2-7056d14398e0 | instance-00000002 | 2 | ip-192-169-142-137.ip.secureserver.net |
| 5fa444c8-30b8-47c3-b073-6ce10dd83c5a | instance-00000004 | 2 | ip-192-169-142-137.ip.secureserver.net |
+--------------------------------------+-------------------+---------------+----------------------------------------+
with only one issue :-
during the AIO run, CONFIG_NEUTRON_OVS_TUNNEL_IF=
during the Compute Node setup, CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
and this finally results in a mess in the ml2_vxlan_endpoints table. I had to manually update
ml2_vxlan_endpoints and restart neutron-openvswitch-agent.service on both nodes;
afterwards the VMs on the compute node obtained access to the metadata server.
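(The manual fix was roughly the following sketch, run on the controller; the IP values are placeholders for the stale and correct tunnel endpoints:)

mysql -u root neutron -e "SELECT * FROM ml2_vxlan_endpoints;"
mysql -u root neutron -e "UPDATE ml2_vxlan_endpoints SET ip_address='<correct-tunnel-ip>' WHERE ip_address='<stale-ip>';"
# then, on both nodes:
systemctl restart neutron-openvswitch-agent.service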
I also believe that a synchronized delete of the records from the compute_nodes and services tables
(along with disabling nova-compute on the Controller) could turn the AIO host into a real Controller.
Boris.
________________________________
Date: Fri, 1 May 2015 22:22:41 +0200
Subject: Re: [Rdo-list] RE(1) Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
From: ak at cloudssky.com
To: bderzhavets at hotmail.com
CC: apevec at gmail.com; rdo-list at redhat.com
I got the compute node working by adding the delorean-kilo.repo on the compute node,
yum updating the compute node, rebooting, and extending the packstack file from the first AIO
install with the IP of the compute node; I then ran packstack again with NetworkManager enabled
and did a second yum update on the compute node before the 3rd packstack run, and now it works :-)
In short, for RC2 we have to intervene by hand to get nova-compute running on the compute node
before running packstack again from the controller against an existing AIO install.
Now I have 2 compute nodes (controller AIO with compute + a 2nd compute) and could spawn a
3rd cirros instance, which landed on the 2nd compute node.
ssh'ing into the instances over the floating IPs works fine too.
Before running packstack again, I set:
EXCLUDE_SERVERS=<ip of controller>
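(In answer-file terms that is roughly the following sketch; the IPs are placeholders:)

# in the existing AIO answer file:
#   CONFIG_COMPUTE_HOSTS=<controller-ip>,<compute-ip>
#   EXCLUDE_SERVERS=<controller-ip>
packstack --answer-file=answer.txt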
[root at csky01 ~(keystone_osx)]# virsh list --all
Id Name Status
----------------------------------------------------
2 instance-00000001 laufend (German for "running")
3 instance-00000002 laufend (German for "running")
[root at csky06 ~]# virsh list --all
Id Name Status
----------------------------------------------------
2 instance-00000003 laufend (German for "running")
== Nova managed services ==
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-consoleauth | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - |
| 2 | nova-conductor | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - |
| 3 | nova-scheduler | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - |
| 4 | nova-compute | csky01.csg.net | nova | enabled | up | 2015-05-01T19:46:40.000000 | - |
| 5 | nova-cert | csky01.csg.net | internal | enabled | up | 2015-05-01T19:46:42.000000 | - |
| 6 | nova-compute | csky06.csg.net | nova | enabled | up | 2015-05-01T19:46:38.000000 | - |
+----+------------------+----------------+----------+---------+-------+----------------------------+-----------------+
On Fri, May 1, 2015 at 9:02 AM, Boris Derzhavets <bderzhavets at hotmail.com> wrote:
Ran packstack --debug --answer-file=./answer-fileRC2.txt
192.169.142.137_nova.pp.log.gz attached
Boris
________________________________
From: bderzhavets at hotmail.com
To: apevec at gmail.com
Date: Fri, 1 May 2015 01:44:17 -0400
CC: rdo-list at redhat.com
Subject: [Rdo-list] Failure to start openstack-nova-compute on Compute Node when testing delorean RC2 or CI repo on CentOS 7.1
Following the instructions in https://www.redhat.com/archives/rdo-list/2015-April/msg00254.html,
packstack fails :-
Applying 192.169.142.127_nova.pp
Applying 192.169.142.137_nova.pp
192.169.142.127_nova.pp: [ DONE ]
192.169.142.137_nova.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.169.142.137_nova.pp
Error: Could not start Service[nova-compute]: Execution of '/usr/bin/systemctl start openstack-nova-compute' returned 1: Job for openstack-nova-compute.service failed. See 'systemctl status openstack-nova-compute.service' and 'journalctl -xn' for details.
You will find full trace in log /var/tmp/packstack/20150501-081745-rIpCIr/manifests/192.169.142.137_nova.pp.log
In both cases (RC2 or CI repos) on compute node 192.169.142.137 /var/log/nova/nova-compute.log
reports :-
2015-05-01 08:21:41.354 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Delaying reconnect for 1.0 seconds...
2015-05-01 08:21:42.355 4999 INFO oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] Connecting to AMQP server on localhost:5672
2015-05-01 08:21:42.360 4999 ERROR oslo.messaging._drivers.impl_rabbit [req-0ae34524-9ee0-4a87-aa5a-fff5d1999a9c ] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds.
It seems to be looking for the AMQP server on the wrong host; it should be 192.169.142.127.
On 192.169.142.127 :-
[root at ip-192-169-142-127 ~]# netstat -lntp | grep 5672
==> tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 14506/beam.smp
tcp6 0 0 :::5672 :::* LISTEN 14506/beam.smp
[root at ip-192-169-142-127 ~]# iptables-save | grep 5672
-A INPUT -s 192.169.142.127/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.127" -j ACCEPT
-A INPUT -s 192.169.142.137/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_192.169.142.137" -j ACCEPT
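(Until packstack sets this correctly, a possible workaround sketch is to point nova on the compute node at the controller's broker by hand; note the option may live in [DEFAULT] or [oslo_messaging_rabbit] depending on the release:)

openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host 192.169.142.127
systemctl restart openstack-nova-compute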
Answer-file is attached
Thanks.
Boris
_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list
To unsubscribe: rdo-list-unsubscribe at redhat.com