[Rdo-list] RE(2) : RDO Kilo RC snapshot - core packages

Boris Derzhavets bderzhavets at hotmail.com
Wed Apr 22 16:29:45 UTC 2015


  I made one more attempt at a `packstack --allinone` install on a CentOS 7.1 KVM guest running on an F22 host.
After the install completed, the new "demo_net" I created had its router interface in "DOWN" state. Dropping the "private" subnet belonging to the same tenant "demo" (the one created by the installer) switched the "demo_net" interface to "ACTIVE" status and let me launch a fully functional CirrOS VM via Horizon.

  Then I reproduced the same procedure in the environment created earlier on a CentOS 7.1 KVM guest running on an Ubuntu 15.04 host and got the same result: as soon as I dropped the "private" network created by the installer for the demo tenant, the interface for "demo_net" (created manually as a post-installation step) switched to "ACTIVE" status.
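For anyone who wants to repeat the workaround, this is roughly the sequence (just a sketch: "router1" and "private_subnet" are the packstack demo defaults and may be named differently in your environment):

source ~/keystonerc_admin
# if the installer-created "private" subnet is still attached to the router, detach it first
neutron router-interface-delete router1 private_subnet
# drop the "private" subnet of the demo tenant
neutron subnet-delete private_subnet
# re-check the router ports; the one backing demo_net's subnet should now show ACTIVE
neutron router-port-list router1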

I still have an issue with openstack-nova-novncproxy.service:

[root@centos71 nova(keystone_admin)]# systemctl status  openstack-nova-novncproxy.service -l
openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)
   Active: failed (Result: exit-code) since Wed 2015-04-22 18:41:51 MSK; 18min ago
  Process: 25663 ExecStart=/usr/bin/nova-novncproxy --web /usr/share/novnc/ $OPTIONS (code=exited, status=1/FAILURE)
 Main PID: 25663 (code=exited, status=1/FAILURE)

Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd.novncproxy import main
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 25, in <module>
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd import baseproxy
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", line 26, in <module>
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.console import websocketproxy
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 154, in <module>
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: websockify.ProxyRequestHandler):
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: AttributeError: 'module' object has no attribute 'ProxyRequestHandler'
Apr 22 18:41:51 centos71.localdomain systemd[1]: openstack-nova-novncproxy.service: main process exited, code=exited, status=1/FAILURE
Apr 22 18:41:51 centos71.localdomain systemd[1]: Unit openstack-nova-novncproxy.service entered failed state.
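The traceback points at python-websockify: nova.console.websocketproxy in Kilo subclasses websockify.ProxyRequestHandler, which as far as I know only appeared in websockify 0.6.0, so my guess is the installed python-websockify is simply too old. A quick check (sketch):

rpm -q python-websockify
# Kilo's websocketproxy expects websockify.ProxyRequestHandler (added in websockify 0.6.0)
python -c "import websockify; print(hasattr(websockify, 'ProxyRequestHandler'))"

If that prints False, an updated python-websockify (>= 0.6.0) should let the unit start.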

 Boris

From: bderzhavets at hotmail.com
To: apevec at gmail.com; rdo-list at redhat.com
Date: Wed, 22 Apr 2015 07:02:32 -0400
Subject: Re: [Rdo-list] RDO Kilo RC snapshot - core packages




Alan,

# packstack --allinone 

completes successfully on CentOS 7.1

However, when attaching an interface for the private subnet to the neutron router
(as demo or as admin), the port status stays DOWN. I tested it via Horizon and
via the Neutron CLI; the result was the same. A launched CirrOS instance cannot reach the nova metadata server to obtain its instance-id.
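For reference, this is roughly the CLI sequence I used (a sketch with the packstack demo names, sourced as the demo tenant; your router/subnet names may differ):

source ~/keystonerc_demo
# attach the private subnet to the demo router
neutron router-interface-add router1 private_subnet
# list the router ports and look at the status column
neutron router-port-list router1
# inspect the port in question; it stays DOWN instead of going ACTIVE
neutron port-show <port-id-from-previous-output>

The CirrOS console log below shows the resulting metadata failures.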

Lease of 50.0.0.12 obtained, lease time 86400
cirros-ds 'net' up at 7.14
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 7.47. request failed
failed 2/20: up 12.81. request failed
failed 3/20: up 15.82. request failed
.  .  .  .  .   .  .   .  .
failed 18/20: up 78.28. request failed
failed 19/20: up 81.27. request failed
failed 20/20: up 86.50. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 89.53. searched: nocloud configdrive ec2
failed to get instance-id of datasource
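
When the router port is DOWN like this, a quick sanity check is whether the metadata proxy is even listening inside the router namespace (a sketch; by default the neutron-ns-metadata-proxy listens on port 9697 in the qrouter namespace, and the UUID is whatever `ip netns` reports on your node):

ip netns | grep qrouter
# check the metadata proxy socket inside that namespace
ip netns exec qrouter-<router-uuid> ss -lnpt | grep 9697
# confirm the L3/DHCP/metadata agents report as alive
neutron agent-list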


Thanks.
Boris

> Date: Wed, 22 Apr 2015 04:15:54 +0200
> From: apevec at gmail.com
> To: rdo-list at redhat.com
> Subject: [Rdo-list] RDO Kilo RC snapshot - core packages
> 
> Hi all,
> 
> unofficial[*] Kilo RC builds are now available for testing. This
> snapshot completes packstack --allinone i.e. issue in provision_glance
> reported on IRC has been fixed.
> 
> Quick installation HOWTO
> 
> yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
> # Following works out-of-the-box on CentOS7
> # For RHEL see http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F
> yum install epel-release
> cd /etc/yum.repos.d
> curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo
> 
> After above steps, regular Quickstart continues:
> yum install openstack-packstack
> packstack --allinone
> 
> NB this snapshot has NOT been tested with rdo-management! If testing
> rdo-management, please follow their instructions.
> 
> 
> Cheers,
> Alan
> 
> [*]  Apr21 evening snapshot built from stable/kilo branches in
> Delorean Kilo instance, official RDO Kilo builds will come from CentOS
> CloudSIG CBS
> 
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
> 
> To unsubscribe: rdo-list-unsubscribe at redhat.com

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com