<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 12pt;
font-family:Calibri
}
--></style></head>
<body class='hmmessage'><div dir='ltr'> I made one more attempt at a `packstack --allinone` install on a CentOS 7.1 KVM guest running on an F22 host.<br>When the install completed, the new "demo_net" (created as a post-installation step) had its interface in the "down" state. I dropped the "private" subnet from the same tenant "demo" (the one created by the installer), which switched the interface of "demo_net" to "Active" status and let me launch a fully functional CirrOS VM via Horizon.<br><br> I then reproduced the same procedure in the environment first created on a CentOS 7.1 KVM guest running on an Ubuntu 15.04 host and got the same results. As soon as I dropped the "private" network created by the installer for the demo tenant, the interface for "demo_net" (created manually as a post-installation step) switched to "Active" status.<br><br>I still have an issue with openstack-nova-novncproxy.service:<br><br>[root@centos71 nova(keystone_admin)]# systemctl status openstack-nova-novncproxy.service -l<br>openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server<br>   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)<br>   Active: failed (Result: exit-code) since Wed 2015-04-22 18:41:51 MSK; 18min ago<br>  Process: 25663 ExecStart=/usr/bin/nova-novncproxy --web /usr/share/novnc/ $OPTIONS (code=exited, status=1/FAILURE)<br> Main PID: 25663 (code=exited, status=1/FAILURE)<br><br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd.novncproxy import main<br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 25, in <module><br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd import baseproxy<br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", line 26, in <module><br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.console import websocketproxy<br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 154, in <module><br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: websockify.ProxyRequestHandler):<br>Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: AttributeError: 'module' object has no attribute 'ProxyRequestHandler'<br>Apr 22 18:41:51 centos71.localdomain systemd[1]: openstack-nova-novncproxy.service: main process exited, code=exited, status=1/FAILURE<br>Apr 22 18:41:51 centos71.localdomain systemd[1]: Unit openstack-nova-novncproxy.service entered failed state.<br><br> Boris<br><br><div><hr id="stopSpelling">From: bderzhavets@hotmail.com<br>To: apevec@gmail.com; rdo-list@redhat.com<br>Date: Wed, 22 Apr 2015 07:02:32 -0400<br>Subject: Re: [Rdo-list] RDO Kilo RC snapshot - core packages<br><br>
<style><!--
.ExternalClass .ecxhmmessage P {
padding:0px;
}
.ExternalClass body.ecxhmmessage {
font-size:12pt;
font-family:Calibri;
}
--></style>
<div dir="ltr">Alan,<br><br># packstack --allinone <br><br>completes successfully on CentOS 7.1<br><br>However, when attaching interface to private subnet to neutron router<br>(as demo or as admin ) port status is down . I tested it via Horizon and <br>via Neutron CLI result was the same. Instance (cirros) been launched cannot access nova meta-data server and obtain instance-id<br><br>Lease of 50.0.0.12 obtained, lease time 86400<br>cirros-ds 'net' up at 7.14<br>checking http://169.254.169.254/2009-04-04/instance-id<br>failed 1/20: up 7.47. request failed<br>failed 2/20: up 12.81. request failed<br>failed 3/20: up 15.82. request failed<br>. . . . . . . . .<br>failed 18/20: up 78.28. request failed<br>failed 19/20: up 81.27. request failed<br>failed 20/20: up 86.50. request failed<br>failed to read iid from metadata. tried 20<br>no results found for mode=net. up 89.53. searched: nocloud configdrive ec2<br>failed to get instance-id of datasource<br><br><br>Thanks.<br>Boris<br><br><div>> Date: Wed, 22 Apr 2015 04:15:54 +0200<br>> From: apevec@gmail.com<br>> To: rdo-list@redhat.com<br>> Subject: [Rdo-list] RDO Kilo RC snapshot - core packages<br>> <br>> Hi all,<br>> <br>> unofficial[*] Kilo RC builds are now available for testing. This<br>> snapshot completes packstack --allinone i.e. issue in provision_glance<br>> reported on IRC has been fixed.<br>> <br>> Quick installation HOWTO<br>> <br>> yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm<br>> # Following works out-of-the-box on CentOS7<br>> # For RHEL see http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F<br>> yum install epel-release<br>> cd /etc/yum.repos.d<br>> curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo<br>> <br>> After above steps, regular Quickstart continues:<br>> yum install openstack-packstack<br>> packstack --allinone<br>> <br>> NB this snapshot has NOT been tested with rdo-management! 
If testing<br>> rdo-management, please follow their instructions.<br>> <br>> <br>> Cheers,<br>> Alan<br>> <br>> [*] Apr21 evening snapshot built from stable/kilo branches in<br>> Delorean Kilo instance, official RDO Kilo builds will come from CentOS<br>> CloudSIG CBS<br>> <br>> _______________________________________________<br>> Rdo-list mailing list<br>> Rdo-list@redhat.com<br>> https://www.redhat.com/mailman/listinfo/rdo-list<br>> <br>> To unsubscribe: rdo-list-unsubscribe@redhat.com<br></div> </div>
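<br>P.S. The AttributeError above suggests the installed websockify is too old for Kilo's nova-novncproxy; as far as I can tell, ProxyRequestHandler only appeared in websockify 0.6.0, so an older python-websockify (e.g. from EPEL) would fail exactly like this at import time. A small stand-alone sketch of the check that trips (hypothetical; it fakes an empty websockify module instead of importing the real one):<br>

```shell
# Simulate the attribute lookup nova's websocketproxy performs at import time:
# a websockify module without the ProxyRequestHandler class makes it fail.
python3 - <<'EOF'
import types
websockify = types.ModuleType("websockify")  # stand-in for an old websockify
print(hasattr(websockify, "ProxyRequestHandler"))
EOF
```

On the real host, `rpm -q python-websockify` should show the installed version; if it predates 0.6.0, updating the package ought to let the proxy start.<br>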
</div> </div></body>
</html>