<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 12pt;
font-family:Calibri
}
--></style></head>
<body class='hmmessage'><div dir='ltr'>Arash,<br><br>The system was yum-updated; I ran packstack and disabled NetworkManager only after completion.<br>So far the same procedure has been reproduced successfully three times on different VMs (KVM hypervisors on F22, F21 and Ubuntu 15.04 hosts), with nested KVM enabled for each VM.<br><br>[root@centos71 ~(keystone_admin)]# uname -a<br>Linux centos71.localdomain 3.10.0-229.1.2.el7.x86_64 #1 SMP Fri Mar 27 03:04:26 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux<br><br>[root@centos71 ~(keystone_admin)]# openstack-status<br>== Nova services ==<br>openstack-nova-api: active<br>openstack-nova-cert: active<br>openstack-nova-compute: active<br>openstack-nova-network: inactive (disabled on boot)<br>openstack-nova-scheduler: active<br>openstack-nova-conductor: active<br>== Glance services ==<br>openstack-glance-api: active<br>openstack-glance-registry: active<br>== Keystone service ==<br>openstack-keystone: inactive (disabled on boot)<br>== Horizon service ==<br>openstack-dashboard: active<br>== neutron services ==<br>neutron-server: active<br>neutron-dhcp-agent: active<br>neutron-l3-agent: active<br>neutron-metadata-agent: active<br>neutron-openvswitch-agent: active<br>== Swift services ==<br>openstack-swift-proxy: active<br>openstack-swift-account: active<br>openstack-swift-container: active<br>openstack-swift-object: active<br>== Cinder services ==<br>openstack-cinder-api: active<br>openstack-cinder-scheduler: active<br>openstack-cinder-volume: active<br>openstack-cinder-backup: active<br>== Ceilometer services ==<br>openstack-ceilometer-api: active<br>openstack-ceilometer-central: active<br>openstack-ceilometer-compute: active<br>openstack-ceilometer-collector: active<br>openstack-ceilometer-alarm-notifier: active<br>openstack-ceilometer-alarm-evaluator: active<br>openstack-ceilometer-notification: active<br>== Support services ==<br>mysqld: inactive (disabled on boot)<br>libvirtd: active<br>openvswitch: active<br>dbus: active<br>target: 
active<br>rabbitmq-server: active<br>memcached: active<br>== Keystone users ==<br>/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.<br> 'python-keystoneclient.', DeprecationWarning)<br>+----------------------------------+------------+---------+----------------------+<br>| id | name | enabled | email |<br>+----------------------------------+------------+---------+----------------------+<br>| 1fb446ec99184947bff342188028fddd | admin | True | root@localhost |<br>| 3e76f14038724ef19e804ef99919ae75 | ceilometer | True | ceilometer@localhost |<br>| d63e40e71da84778bdbc89cd0645109c | cinder | True | cinder@localhost |<br>| 75b0b000562f491284043b5c74afbb1e | demo | True | |<br>| bb3d35d9a23443bfb3791545a7aa03b4 | glance | True | glance@localhost |<br>| 573eb12b92fd48e68e5635f3c79b3dec | neutron | True | neutron@localhost |<br>| be6b2d41f55f4c3fab8e02a779de4a63 | nova | True | nova@localhost |<br>| 53e9e3a493244c5e801ba92446c969bc | swift | True | swift@localhost |<br>+----------------------------------+------------+---------+----------------------+<br>== Glance images ==<br>+--------------------------------------+--------------------+-------------+------------------+-----------+--------+<br>| ID | Name | Disk Format | Container Format | Size | Status |<br>+--------------------------------------+--------------------+-------------+------------------+-----------+--------+<br>| 0c73a315-8867-472c-bba6-e73a43b9b98d | cirros | qcow2 | bare | 13200896 | active |<br>| 52df1d6d-9eb0-4c09-a9bb-ec5a07bd62eb | Fedora 21 image | qcow2 | bare | 158443520 | active |<br>| 7f128f54-727c-45ad-8891-777aa39ff3e1 | Ubuntu 15.04 image | qcow2 | bare | 284361216 | active |<br>+--------------------------------------+--------------------+-------------+------------------+-----------+--------+<br>== Nova managed services 
==<br>+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+<br>| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |<br>+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+<br>| 1 | nova-consoleauth | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:57.000000 | - |<br>| 2 | nova-scheduler | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:57.000000 | - |<br>| 3 | nova-conductor | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:58.000000 | - |<br>| 4 | nova-compute | centos71.localdomain | nova | enabled | up | 2015-04-23T09:36:58.000000 | - |<br>| 5 | nova-cert | centos71.localdomain | internal | enabled | up | 2015-04-23T09:36:57.000000 | - |<br>+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+<br>== Nova networks ==<br>+--------------------------------------+----------+------+<br>| ID | Label | Cidr |<br>+--------------------------------------+----------+------+<br>| d3bcf265-2429-4556-b799-16579ba367cf | public | - |<br>| b25422bc-aa87-4007-bf5a-64dde97dd6f7 | demo_net | - |<br>+--------------------------------------+----------+------+<br>== Nova instance flavors ==<br>+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+<br>| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |<br>+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+<br>| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |<br>| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |<br>| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |<br>| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |<br>| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True 
|<br>+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+<br>== Nova instances ==<br>+----+------+--------+------------+-------------+----------+<br>| ID | Name | Status | Task State | Power State | Networks |<br>+----+------+--------+------------+-------------+----------+<br>+----+------+--------+------------+-------------+----------+<br><br>[root@centos71 ~(keystone_admin)]# rpm -qa | grep openstack<br>openstack-nova-novncproxy-2015.1-dev19.el7.centos.noarch<br>python-openstackclient-1.0.3-post3.el7.centos.noarch<br>openstack-keystone-2015.1-dev14.el7.centos.noarch<br>openstack-nova-console-2015.1-dev19.el7.centos.noarch<br>openstack-nova-api-2015.1-dev19.el7.centos.noarch<br>openstack-packstack-2015.1-dev1529.g0605728.el7.centos.noarch<br>openstack-ceilometer-compute-2015.1-dev2.el7.centos.noarch<br>openstack-swift-plugin-swift3-1.7-4.el7.centos.noarch<br>openstack-selinux-0.6.25-1.el7.noarch<br>openstack-cinder-2015.1-dev2.el7.centos.noarch<br>openstack-neutron-openvswitch-2015.1-dev1.el7.centos.noarch<br>openstack-swift-account-2.3.0rc1-post1.el7.centos.noarch<br>openstack-ceilometer-alarm-2015.1-dev2.el7.centos.noarch<br>openstack-utils-2014.2-1.el7.centos.noarch<br>openstack-packstack-puppet-2015.1-dev1529.g0605728.el7.centos.noarch<br>openstack-nova-common-2015.1-dev19.el7.centos.noarch<br>openstack-nova-scheduler-2015.1-dev19.el7.centos.noarch<br>openstack-ceilometer-common-2015.1-dev2.el7.centos.noarch<br>openstack-nova-conductor-2015.1-dev19.el7.centos.noarch<br>openstack-neutron-common-2015.1-dev1.el7.centos.noarch<br>openstack-swift-object-2.3.0rc1-post1.el7.centos.noarch<br>openstack-ceilometer-central-2015.1-dev2.el7.centos.noarch<br>openstack-glance-2015.1-dev1.el7.centos.noarch<br>openstack-nova-compute-2015.1-dev19.el7.centos.noarch<br>openstack-neutron-ml2-2015.1-dev1.el7.centos.noarch<br>python-django-openstack-auth-1.3.0-0.99.20150421.2158git.el7.centos.noarch<br>openstack-swift-2.3.0rc1-post1.el7.centos.noa
rch<br>openstack-ceilometer-api-2015.1-dev2.el7.centos.noarch<br>openstack-swift-proxy-2.3.0rc1-post1.el7.centos.noarch<br>openstack-swift-container-2.3.0rc1-post1.el7.centos.noarch<br>openstack-ceilometer-collector-2015.1-dev2.el7.centos.noarch<br>openstack-nova-cert-2015.1-dev19.el7.centos.noarch<br>openstack-ceilometer-notification-2015.1-dev2.el7.centos.noarch<br>openstack-puppet-modules-2015.1-dev.2d3528a51091931caef06a5a8d1cfdaaa79d25ec_75763dd0.el7.centos.noarch<br>openstack-neutron-2015.1-dev1.el7.centos.noarch<br>openstack-dashboard-2015.1-dev2.el7.centos.noarch<br><br>Boris.<br><br><div><hr id="stopSpelling">Date: Thu, 23 Apr 2015 00:21:03 +0200<br>Subject: Re: [Rdo-list] RE(2) : RDO Kilo RC snapshot - core packages<br>From: ak@cloudssky.com<br>To: bderzhavets@hotmail.com<br>CC: apevec@gmail.com; rdo-list@redhat.com<br><br><div dir="ltr">Hi,<div><br></div><div>I'm running CentOS Linux release 7.1.1503 (Core) VM on OpenStack and followed</div><div>the steps and I'm getting:</div><div><br></div><div>
<span>10.0.0.16_prescript.pp: [ </span><span>ERROR</span><span> ] </span><BR>
<span>Applying Puppet manifests [ </span><span>ERROR</span><span> ]</span><BR>
ERROR : Error appeared during Puppet run: 10.0.0.16_prescript.pp<br><span></span><BR>
<span>Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match. Force execution using --nocheck, but the results are unpredictable.</span><BR>Thanks,<br>Arash<BR></div><div class="ecxgmail_extra"><div><div class="ecxgmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><div style="font-family:arial;font-size:small;"><div><div><br></div></div><div><p style="color:rgb(34,34,34);"><span style="color:rgb(11,83,148);"><b></b></span> </p><font size="1"></font></div></div></div></div></div></div></div></div></div></div></div></div></div></div>
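[Editorial note] The mismatch above comes from nmcli refusing to talk to a NetworkManager daemon of a different version. A minimal sketch, illustrative only and not nmcli's actual code (the real comparison nmcli performs may be stricter), of that kind of client/daemon version gate:

```python
# Hypothetical sketch of a client/daemon version gate like the one nmcli
# enforces; compares only the major.minor prefixes of the two versions.
def versions_match(client_version, daemon_version):
    """True when the major.minor prefixes of two dotted versions agree."""
    prefix = lambda v: [int(p) for p in v.split(".")[:2]]
    return prefix(client_version) == prefix(daemon_version)

print(versions_match("1.0.0", "0.9.9.1"))  # the reported pair: False
print(versions_match("1.0.0", "1.0.2"))    # same major.minor: True
```

In practice the fix is usually to finish the interrupted update (`yum update NetworkManager` followed by a reboot) rather than forcing `--nocheck`.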
<br><div class="ecxgmail_quote">On Wed, Apr 22, 2015 at 6:29 PM, Boris Derzhavets <span dir="ltr"><<a href="mailto:bderzhavets@hotmail.com" target="_blank">bderzhavets@hotmail.com</a>></span> wrote:<br><blockquote class="ecxgmail_quote" style="border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex;">
<div><div dir="ltr"> I made one more attempt of `packstack --allinone` install on CentOS 7.1 KVM running on F22 Host.<br>Finally, when new "demo_net" created after install completed with interface in "down" state, I've dropped "private" subnet from the same tenant "demo" (the one created by installer) , what resulted switching interface of "demo_net" to "Active" status and allowed to launch CirrOS VM via Horizon completely functional.<br><br> Then I reproduced same procedure in first time environment been created on CentOS 7.1 KVM running on Ubuntu 15.04 Host and got same results . As soon as I dropped "private" network created by installer for demo tenant , interface for "demo_net" ( created manually as post installation step) switched to "Active" status.<br><br>Still have issue with openstack-nova-novncproxy.service :-<br><br>[root@centos71 nova(keystone_admin)]# systemctl status openstack-nova-novncproxy.service -l<br>openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server<br> Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)<br> Active: failed (Result: exit-code) since Wed 2015-04-22 18:41:51 MSK; 18min ago<br> Process: 25663 ExecStart=/usr/bin/nova-novncproxy --web /usr/share/novnc/ $OPTIONS (code=exited, status=1/FAILURE)<br> Main PID: 25663 (code=exited, status=1/FAILURE)<br><br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd.novncproxy import main<br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 25, in <module><br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd import baseproxy<br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", line 26, in <module><br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.console import websocketproxy<br>Apr 22 18:41:51 centos71.localdomain 
nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 154, in <module><br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: websockify.ProxyRequestHandler):<br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: AttributeError: 'module' object has no attribute 'ProxyRequestHandler'<br>Apr 22 18:41:51 centos71.localdomain systemd[1]: openstack-nova-novncproxy.service: main process exited, code=exited, status=1/FAILURE<br>Apr 22 18:41:51 centos71.localdomain systemd[1]: Unit openstack-nova-novncproxy.service entered failed state.<br><br> Boris<br><br><div><hr>From: <a href="mailto:bderzhavets@hotmail.com" target="_blank">bderzhavets@hotmail.com</a><br>To: <a href="mailto:apevec@gmail.com" target="_blank">apevec@gmail.com</a>; <a href="mailto:rdo-list@redhat.com" target="_blank">rdo-list@redhat.com</a><br>Date: Wed, 22 Apr 2015 07:02:32 -0400<br>Subject: Re: [Rdo-list] RDO Kilo RC snapshot - core packages<br><br>
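[Editorial note] The AttributeError in the journal above is characteristic of a python-websockify that predates the `ProxyRequestHandler` class (introduced around websockify 0.6), while Kilo's nova `websocketproxy` module subclasses it. A small, self-contained sketch of a pre-flight check; the module objects below are stand-ins, nothing is imported from the real websockify:

```python
import types

def nova_compatible(websockify_module):
    """Nova's console/websocketproxy.py subclasses
    websockify.ProxyRequestHandler, so a websockify module lacking that
    attribute fails at import time with the AttributeError shown above."""
    return hasattr(websockify_module, "ProxyRequestHandler")

# Stand-ins for an old and a new websockify:
old = types.ModuleType("websockify")
new = types.ModuleType("websockify")
new.ProxyRequestHandler = object  # placeholder for the real handler class
print(nova_compatible(old))  # False -> python-websockify needs upgrading
print(nova_compatible(new))  # True
```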
<div dir="ltr">Alan,<br><br># packstack --allinone <br><br>completes successfully on CentOS 7.1.<br><br>However, when attaching an interface on the private subnet to the neutron router<br>(as demo or as admin), the port status stays down. I tested via both Horizon and <br>the Neutron CLI, with the same result. A launched instance (cirros) cannot reach the nova metadata server to obtain its instance-id:<br><br>Lease of 50.0.0.12 obtained, lease time 86400<br>cirros-ds 'net' up at 7.14<br>checking <a href="http://169.254.169.254/2009-04-04/instance-id" target="_blank">http://169.254.169.254/2009-04-04/instance-id</a><br>failed 1/20: up 7.47. request failed<br>failed 2/20: up 12.81. request failed<br>failed 3/20: up 15.82. request failed<br>. . . . . . . . .<br>failed 18/20: up 78.28. request failed<br>failed 19/20: up 81.27. request failed<br>failed 20/20: up 86.50. request failed<br>failed to read iid from metadata. tried 20<br>no results found for mode=net. up 89.53. searched: nocloud configdrive ec2<br>failed to get instance-id of datasource<br><br><br>Thanks.<br>Boris<br><br><div>> Date: Wed, 22 Apr 2015 04:15:54 +0200<br>> From: <a href="mailto:apevec@gmail.com" target="_blank">apevec@gmail.com</a><br>> To: <a href="mailto:rdo-list@redhat.com" target="_blank">rdo-list@redhat.com</a><br>> Subject: [Rdo-list] RDO Kilo RC snapshot - core packages<br>> <br>> Hi all,<br>> <br>> unofficial[*] Kilo RC builds are now available for testing. This<br>> snapshot completes packstack --allinone i.e. 
issue in provision_glance<br>> reported on IRC has been fixed.<br>> <br>> Quick installation HOWTO<br>> <br>> yum install <a href="http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm" target="_blank">http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm</a><br>> # Following works out-of-the-box on CentOS7<br>> # For RHEL see <a href="http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F" target="_blank">http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F</a><br>> yum install epel-release<br>> cd /etc/yum.repos.d<br>> curl -O <a href="https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo" target="_blank">https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo</a><br>> <br>> After above steps, regular Quickstart continues:<br>> yum install openstack-packstack<br>> packstack --allinone<br>> <br>> NB this snapshot has NOT been tested with rdo-management! If testing<br>> rdo-management, please follow their instructions.<br>> <br>> <br>> Cheers,<br>> Alan<br>> <br>> [*] Apr21 evening snapshot built from stable/kilo branches in<br>> Delorean Kilo instance, official RDO Kilo builds will come from CentOS<br>> CloudSIG CBS<br>> <br>> _______________________________________________<br>> Rdo-list mailing list<br>> <a href="mailto:Rdo-list@redhat.com" target="_blank">Rdo-list@redhat.com</a><br>> <a href="https://www.redhat.com/mailman/listinfo/rdo-list" target="_blank">https://www.redhat.com/mailman/listinfo/rdo-list</a><br>> <br>> To unsubscribe: <a href="mailto:rdo-list-unsubscribe@redhat.com" target="_blank">rdo-list-unsubscribe@redhat.com</a><br></div> </div>
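[Editorial note] The "failed 1/20 ... 20/20" loop in the CirrOS console log earlier in this thread is the guest's EC2 datasource polling the metadata endpoint. A rough Python rendering of that retry loop (the real cirros-ds is a shell script; the attempt count and timeout here are illustrative):

```python
import urllib.request

METADATA_URL = "http://169.254.169.254/2009-04-04/instance-id"

def fetch_instance_id(url=METADATA_URL, attempts=20, timeout=3.0):
    """Poll the metadata endpoint, returning the instance-id, or None
    when every attempt fails -- as happens while the neutron router
    port for the subnet is DOWN."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read().decode()
        except OSError:
            pass  # "request failed": refused, unreachable, or timed out
    return None
```

With the router port down every attempt fails, so the guest ends with "failed to read iid from metadata. tried 20" and boots without an instance-id.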
</div> </div></div>
</blockquote></div><br></div></div></div> </div></body>
</html>