[Rdo-list] RE(4): RE(2) : RDO Kilo RC snapshot - core packages

Boris Derzhavets bderzhavets at hotmail.com
Thu Apr 23 17:51:02 UTC 2015


Arash,

I was able to reach the dashboard at http://192.169.142.57/dashboard, creating the public network on the same subnet as management (because the test runs on libvirt's non-default subnet 192.169.142.0/24):

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.57"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"

NM_CONTROLLED="no"

DEFROUTE="yes"

IPV4_FAILURE_FATAL="yes"

IPV6INIT=no

ONBOOT="yes"

TYPE="OVSIntPort"
OVS_BRIDGE=br-ex

DEVICETYPE="ovs"


# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

# service network restart

This makes eth0 an OVS port of the OVS bridge br-ex (the VM's original IP, 192.169.142.57, moves to br-ex).
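Since a GATEWAY outside the interface's own subnet silently breaks the default route, it's worth sanity-checking ifcfg files like the ones above. A minimal sketch in Python (the parser and check are hypothetical helpers of mine, not RDO tooling):

```python
import ipaddress

def parse_ifcfg(text):
    """Parse KEY="value" / KEY=value lines of an ifcfg file into a dict."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            conf[key] = value.strip().strip('"')
    return conf

def gateway_in_subnet(conf):
    """True when GATEWAY falls inside the subnet implied by IPADDR/NETMASK."""
    net = ipaddress.ip_network(
        "%s/%s" % (conf["IPADDR"], conf["NETMASK"]), strict=False)
    return ipaddress.ip_address(conf["GATEWAY"]) in net

good = parse_ifcfg(
    'IPADDR="192.169.142.57"\nNETMASK="255.255.255.0"\nGATEWAY="192.169.142.1"')
typo = dict(good, GATEWAY="192.168.142.1")   # one digit off, outside the /24
print(gateway_in_subnet(good), gateway_in_subnet(typo))  # True False
```

Running it against the br-ex config flags exactly the one-digit 192.168.x vs 192.169.x slip that is easy to make on this non-default subnet.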


Boris.
--------------------------------------------------------------------------------------------------------------------------------------

Date: Thu, 23 Apr 2015 18:36:23 +0200
Subject: Re: RE(3): [Rdo-list] RE(2) : RDO Kilo RC snapshot - core packages
From: ak at cloudssky.com
To: bderzhavets at hotmail.com
CC: apevec at gmail.com; rdo-list at redhat.com

Alen, Boris,
Thanks!
Yes, system was yum updated, now I did the following:
yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
yum install epel-release
cd /etc/yum.repos.d/
curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo
yum install openstack-packstack
setenforce 0
packstack --allinone
and the result was:

10.0.0.16_postscript.pp:                             [ DONE ]      
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******
But I can't access the dashboard over the external IP; I'm getting:

Not Found
The requested URL /dashboard was not found on this server.

And over the internal IP, http://10.0.0.16/dashboard, I'm getting:

Internal Server Error
The server encountered an internal error ...

Also disabled NetworkManager and rebooted; that didn't help to get horizon working.
Thx again,
Arash


On Thu, Apr 23, 2015 at 11:46 AM, Boris Derzhavets <bderzhavets at hotmail.com> wrote:



Arash,

The system was yum updated; I ran packstack and disabled NetworkManager only after completion.
So far the same procedure has been reproduced successfully 3 times on different VMs (KVM hypervisors on F22, F21, and Ubuntu 15.04). Nested KVM was enabled for each VM.

[root at centos71 ~(keystone_admin)]# uname -a
Linux centos71.localdomain 3.10.0-229.1.2.el7.x86_64 #1 SMP Fri Mar 27 03:04:26 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

[root at centos71 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     inactive  (disabled on boot)
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-openvswitch-agent:              active
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
openstack-ceilometer-notification:      active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
target:                                 active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
  'python-keystoneclient.', DeprecationWarning)
+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| 1fb446ec99184947bff342188028fddd |   admin    |   True  |    root at localhost    |
| 3e76f14038724ef19e804ef99919ae75 | ceilometer |   True  | ceilometer at localhost |
| d63e40e71da84778bdbc89cd0645109c |   cinder   |   True  |   cinder at localhost   |
| 75b0b000562f491284043b5c74afbb1e |    demo    |   True  |                      |
| bb3d35d9a23443bfb3791545a7aa03b4 |   glance   |   True  |   glance at localhost   |
| 573eb12b92fd48e68e5635f3c79b3dec |  neutron   |   True  |  neutron at localhost   |
| be6b2d41f55f4c3fab8e02a779de4a63 |    nova    |   True  |    nova at localhost    |
| 53e9e3a493244c5e801ba92446c969bc |   swift    |   True  |   swift at localhost    |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+--------------------+-------------+------------------+-----------+--------+
| ID                                   | Name               | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------------------+-------------+------------------+-----------+--------+
| 0c73a315-8867-472c-bba6-e73a43b9b98d | cirros             | qcow2       | bare             | 13200896  | active |
| 52df1d6d-9eb0-4c09-a9bb-ec5a07bd62eb | Fedora 21 image    | qcow2       | bare             | 158443520 | active |
| 7f128f54-727c-45ad-8891-777aa39ff3e1 | Ubuntu 15.04 image | qcow2       | bare             | 284361216 | active |
+--------------------------------------+--------------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                 | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | centos71.localdomain | internal | enabled | up    | 2015-04-23T09:36:57.000000 | -               |
| 2  | nova-scheduler   | centos71.localdomain | internal | enabled | up    | 2015-04-23T09:36:57.000000 | -               |
| 3  | nova-conductor   | centos71.localdomain | internal | enabled | up    | 2015-04-23T09:36:58.000000 | -               |
| 4  | nova-compute     | centos71.localdomain | nova     | enabled | up    | 2015-04-23T09:36:58.000000 | -               |
| 5  | nova-cert        | centos71.localdomain | internal | enabled | up    | 2015-04-23T09:36:57.000000 | -               |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+----------+------+
| ID                                   | Label    | Cidr |
+--------------------------------------+----------+------+
| d3bcf265-2429-4556-b799-16579ba367cf | public   | -    |
| b25422bc-aa87-4007-bf5a-64dde97dd6f7 | demo_net | -    |
+--------------------------------------+----------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

[root at centos71 ~(keystone_admin)]# rpm -qa | grep openstack
openstack-nova-novncproxy-2015.1-dev19.el7.centos.noarch
python-openstackclient-1.0.3-post3.el7.centos.noarch
openstack-keystone-2015.1-dev14.el7.centos.noarch
openstack-nova-console-2015.1-dev19.el7.centos.noarch
openstack-nova-api-2015.1-dev19.el7.centos.noarch
openstack-packstack-2015.1-dev1529.g0605728.el7.centos.noarch
openstack-ceilometer-compute-2015.1-dev2.el7.centos.noarch
openstack-swift-plugin-swift3-1.7-4.el7.centos.noarch
openstack-selinux-0.6.25-1.el7.noarch
openstack-cinder-2015.1-dev2.el7.centos.noarch
openstack-neutron-openvswitch-2015.1-dev1.el7.centos.noarch
openstack-swift-account-2.3.0rc1-post1.el7.centos.noarch
openstack-ceilometer-alarm-2015.1-dev2.el7.centos.noarch
openstack-utils-2014.2-1.el7.centos.noarch
openstack-packstack-puppet-2015.1-dev1529.g0605728.el7.centos.noarch
openstack-nova-common-2015.1-dev19.el7.centos.noarch
openstack-nova-scheduler-2015.1-dev19.el7.centos.noarch
openstack-ceilometer-common-2015.1-dev2.el7.centos.noarch
openstack-nova-conductor-2015.1-dev19.el7.centos.noarch
openstack-neutron-common-2015.1-dev1.el7.centos.noarch
openstack-swift-object-2.3.0rc1-post1.el7.centos.noarch
openstack-ceilometer-central-2015.1-dev2.el7.centos.noarch
openstack-glance-2015.1-dev1.el7.centos.noarch
openstack-nova-compute-2015.1-dev19.el7.centos.noarch
openstack-neutron-ml2-2015.1-dev1.el7.centos.noarch
python-django-openstack-auth-1.3.0-0.99.20150421.2158git.el7.centos.noarch
openstack-swift-2.3.0rc1-post1.el7.centos.noarch
openstack-ceilometer-api-2015.1-dev2.el7.centos.noarch
openstack-swift-proxy-2.3.0rc1-post1.el7.centos.noarch
openstack-swift-container-2.3.0rc1-post1.el7.centos.noarch
openstack-ceilometer-collector-2015.1-dev2.el7.centos.noarch
openstack-nova-cert-2015.1-dev19.el7.centos.noarch
openstack-ceilometer-notification-2015.1-dev2.el7.centos.noarch
openstack-puppet-modules-2015.1-dev.2d3528a51091931caef06a5a8d1cfdaaa79d25ec_75763dd0.el7.centos.noarch
openstack-neutron-2015.1-dev1.el7.centos.noarch
openstack-dashboard-2015.1-dev2.el7.centos.noarch

Boris.

Date: Thu, 23 Apr 2015 00:21:03 +0200
Subject: Re: [Rdo-list] RE(2) : RDO Kilo RC snapshot - core packages
From: ak at cloudssky.com
To: bderzhavets at hotmail.com
CC: apevec at gmail.com; rdo-list at redhat.com

Hi,
I'm running a CentOS Linux release 7.1.1503 (Core) VM on OpenStack, followed the steps, and I'm getting:

10.0.0.16_prescript.pp:                           [ ERROR ]
Applying Puppet manifests                         [ ERROR ]
ERROR : Error appeared during Puppet run: 10.0.0.16_prescript.pp

Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match. Force execution using --nocheck, but the results are unpredictable.
Thanks,
Arash

 

On Wed, Apr 22, 2015 at 6:29 PM, Boris Derzhavets <bderzhavets at hotmail.com> wrote:



I made one more attempt at a `packstack --allinone` install on CentOS 7.1 KVM running on an F22 host.
Finally, when the new "demo_net" created after the install completed had its interface in "down" state, I dropped the "private" subnet from the same tenant "demo" (the one created by the installer). That switched the interface of "demo_net" to "Active" status and allowed launching a fully functional CirrOS VM via Horizon.

Then I reproduced the same procedure in a fresh environment created on CentOS 7.1 KVM running on an Ubuntu 15.04 host and got the same results. As soon as I dropped the "private" network created by the installer for the demo tenant, the interface for "demo_net" (created manually as a post-installation step) switched to "Active" status.

I still have an issue with openstack-nova-novncproxy.service:

[root at centos71 nova(keystone_admin)]# systemctl status  openstack-nova-novncproxy.service -l
openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)
   Active: failed (Result: exit-code) since Wed 2015-04-22 18:41:51 MSK; 18min ago
  Process: 25663 ExecStart=/usr/bin/nova-novncproxy --web /usr/share/novnc/ $OPTIONS (code=exited, status=1/FAILURE)
 Main PID: 25663 (code=exited, status=1/FAILURE)

Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd.novncproxy import main
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 25, in <module>
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.cmd import baseproxy
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", line 26, in <module>
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: from nova.console import websocketproxy
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 154, in <module>
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: websockify.ProxyRequestHandler):
Apr 22 18:41:51 centos71.localdomain nova-novncproxy[25663]: AttributeError: 'module' object has no attribute 'ProxyRequestHandler'
Apr 22 18:41:51 centos71.localdomain systemd[1]: openstack-nova-novncproxy.service: main process exited, code=exited, status=1/FAILURE
Apr 22 18:41:51 centos71.localdomain systemd[1]: Unit openstack-nova-novncproxy.service entered failed state.
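The AttributeError above means the installed websockify predates the ProxyRequestHandler class that nova's websocketproxy subclasses (to my understanding it appeared in websockify 0.6.0), so upgrading python-websockify should clear it. A minimal compatibility probe, sketched with stand-in modules (`websockify_is_compatible` is my name, not nova's):

```python
import types

def websockify_is_compatible(mod):
    # nova.console.websocketproxy subclasses websockify.ProxyRequestHandler;
    # older websockify releases lack that class, producing exactly the
    # AttributeError shown in the journal output above.
    return hasattr(mod, "ProxyRequestHandler")

old = types.SimpleNamespace()                            # stand-in for an old websockify module
new = types.SimpleNamespace(ProxyRequestHandler=object)  # stand-in for a new-enough one
print(websockify_is_compatible(old), websockify_is_compatible(new))  # False True
```

The same one-line hasattr check could be run in a python shell against the real `websockify` module before restarting the service.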

 Boris

From: bderzhavets at hotmail.com
To: apevec at gmail.com; rdo-list at redhat.com
Date: Wed, 22 Apr 2015 07:02:32 -0400
Subject: Re: [Rdo-list] RDO Kilo RC snapshot - core packages




Alan,

# packstack --allinone 

completes successfully on CentOS 7.1

However, when attaching an interface on the private subnet to the neutron router
(as demo or as admin), the port status is down. I tested it via Horizon and
via the Neutron CLI; the result was the same. A launched instance (cirros) cannot access the nova metadata server to obtain its instance-id:

Lease of 50.0.0.12 obtained, lease time 86400
cirros-ds 'net' up at 7.14
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 7.47. request failed
failed 2/20: up 12.81. request failed
failed 3/20: up 15.82. request failed
.  .  .  .  .   .  .   .  .
failed 18/20: up 78.28. request failed
failed 19/20: up 81.27. request failed
failed 20/20: up 86.50. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 89.53. searched: nocloud configdrive ec2
failed to get instance-id of datasource
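The console log above is cirros's standard 20-attempt poll of the EC2 metadata URL; with the port down, every request fails and the instance never gets its instance-id. A sketch of that loop (`get_instance_id` and the fetch callback are stand-ins of mine, not cirros code):

```python
METADATA_URL = "http://169.254.169.254/2009-04-04/instance-id"

def get_instance_id(fetch, attempts=20):
    """Poll the metadata URL up to `attempts` times, as cirros does;
    return the instance-id, or None when every request fails."""
    for i in range(1, attempts + 1):
        try:
            return fetch(METADATA_URL)
        except OSError:
            print("failed %d/%d: request failed" % (i, attempts))
    return None

def unreachable(url):
    # Simulates the down-port scenario above: every request fails.
    raise OSError("no route to metadata server")

print(get_instance_id(unreachable))  # None, after 20 failed attempts
```

With the port down, the loop exhausts all 20 attempts and gives up, matching the "failed to read iid from metadata. tried 20" line in the log.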


Thanks.
Boris

> Date: Wed, 22 Apr 2015 04:15:54 +0200
> From: apevec at gmail.com
> To: rdo-list at redhat.com
> Subject: [Rdo-list] RDO Kilo RC snapshot - core packages
> 
> Hi all,
> 
> unofficial[*] Kilo RC builds are now available for testing. This
> snapshot completes packstack --allinone i.e. issue in provision_glance
> reported on IRC has been fixed.
> 
> Quick installation HOWTO
> 
> yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
> # Following works out-of-the-box on CentOS7
> # For RHEL see http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F
> yum install epel-release
> cd /etc/yum.repos.d
> curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc1-Apr21/delorean-kilo.repo
> 
> After above steps, regular Quickstart continues:
> yum install openstack-packstack
> packstack --allinone
> 
> NB this snapshot has NOT been tested with rdo-management! If testing
> rdo-management, please follow their instructions.
> 
> 
> Cheers,
> Alan
> 
> [*]  Apr21 evening snapshot built from stable/kilo branches in
> Delorean Kilo instance, official RDO Kilo builds will come from CentOS
> CloudSIG CBS
> 
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
> 
> To unsubscribe: rdo-list-unsubscribe at redhat.com

