[rdo-list] Fwd: ConnectFailure error upon triggering “nova image-list” command using openstack-mitaka release
by Chinmaya Dwibedy
Hi,
I am getting a ConnectFailure error when running the “nova
image-list” command. The nova-api process should be listening on port
8774, but it does not appear to be running. I also do not find any error
logs in nova-api.log, nova-compute.log, or nova-conductor.log. I am using
the openstack-mitaka release on the host (CentOS 7.2). How can I debug
this and find out what prevents nova-api from running? Please suggest.
Note: this was working a while back; the issue appeared all of a sudden.
Here are some logs.
[root@localhost ~(keystone_admin)]# nova image-list
ERROR (ConnectFailure): Unable to establish connection to
http://172.18.121.48:8774/v2/4bc608763cee41d9a8df26d3ef919825
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# service openstack-nova-api restart
Redirecting to /bin/systemctl restart openstack-nova-api.service
Job for openstack-nova-api.service failed because the control process
exited with error code. See "systemctl status openstack-nova-api.service"
and "journalctl -xe" for details.
[root@localhost ~(keystone_admin)]# systemctl status
openstack-nova-api.service
● openstack-nova-api.service - OpenStack Nova API Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service;
enabled; vendor preset: disabled)
Active: activating (start) since Wed 2016-06-08 07:59:20 EDT; 2s ago
Main PID: 179955 (nova-api)
CGroup: /system.slice/openstack-nova-api.service
└─179955 /usr/bin/python2 /usr/bin/nova-api
Jun 08 07:59:20 localhost systemd[1]: Starting OpenStack Nova API Server...
Jun 08 07:59:22 localhost python2[179955]: detected unhandled Python
exception in '/usr/bin/nova-api'
[root@localhost ~(keystone_admin)]#
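Since the unit log only says "detected unhandled Python exception" while nova-api.log stays empty, the traceback has to be recovered from elsewhere. A minimal check along these lines (a sketch; the crash details themselves live in the systemd journal, not in /var/log/nova):

```shell
# Confirm whether anything is bound to the osapi_compute port (8774).
# If nothing is listening, the client-side ConnectFailure is expected.
port=8774
if ss -ntl 2>/dev/null | grep -q ":${port} "; then
    echo "something is listening on ${port}"
else
    echo "nothing is listening on ${port}"
fi
# The unhandled exception is captured by systemd, not by nova-api.log:
#   journalctl -u openstack-nova-api.service --no-pager -n 50
# Running the daemon in the foreground as the service user also prints it:
#   sudo -u nova /usr/bin/python2 /usr/bin/nova-api
```

Either of the commented commands should reveal the actual Python traceback that kills the service during startup.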
[root@localhost ~(keystone_admin)]# keystone endpoint-list
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:64:
DeprecationWarning: The keystone CLI is deprecated in favor of
python-openstackclient. For a Python library, continue using
python-keystoneclient.
'python-keystoneclient.', DeprecationWarning)
/usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:145:
DeprecationWarning: Constructing an instance of the
keystoneclient.v2_0.client.Client class without a session is deprecated as
of the 1.7.0 release and may be removed in the 2.0.0 release.
'the 2.0.0 release.', DeprecationWarning)
/usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:147:
DeprecationWarning: Using the 'tenant_name' argument is deprecated in
version '1.7.0' and will be removed in version '2.0.0', please use the
'project_name' argument instead
super(Client, self).__init__(**kwargs)
/usr/lib/python2.7/site-packages/debtcollector/renames.py:45:
DeprecationWarning: Using the 'tenant_id' argument is deprecated in version
'1.7.0' and will be removed in version '2.0.0', please use the 'project_id'
argument instead
return f(*args, **kwargs)
/usr/lib/python2.7/site-packages/keystoneclient/httpclient.py:371:
DeprecationWarning: Constructing an HTTPClient instance without using a
session is deprecated as of the 1.7.0 release and may be removed in the
2.0.0 release.
'the 2.0.0 release.', DeprecationWarning)
/usr/lib/python2.7/site-packages/keystoneclient/session.py:140:
DeprecationWarning: keystoneclient.session.Session is deprecated as of the
2.1.0 release in favor of keystoneauth1.session.Session. It will be removed
in future releases.
DeprecationWarning)
/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py:56:
DeprecationWarning: keystoneclient auth plugins are deprecated as of the
2.1.0 release in favor of keystoneauth1 plugins. They will be removed in
future releases.
'in future releases.', DeprecationWarning)
| id | region | publicurl | internalurl | adminurl | service_id |
| 02fcec9a7b834128b3e30403c4ed0de7 | RegionOne | http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | 5533324a63d8402888040832640a19d0 |
| 295802909413422cb7c22dc1e268bce9 | RegionOne | http://172.18.121.48:8774/v2/%(tenant_id)s | http://172.18.121.48:8774/v2/%(tenant_id)s | http://172.18.121.48:8774/v2/%(tenant_id)s | f7fe68bf4cec47a4a3c942f3916dc377 |
| 2a125f10b0d04f8a9306dede85b65514 | RegionOne | http://172.18.121.48:9696 | http://172.18.121.48:9696 | http://172.18.121.48:9696 | b2a60cdc144e40a49757f13c2264f030 |
| 2d1a91d39f3d421cb1b2fe73fba5fd3a | RegionOne | http://172.18.121.48:8777 | http://172.18.121.48:8777 | http://172.18.121.48:8777 | e6d750ac5ef3433799d4fe39518a3fe6 |
| 47b634f3e18e4caf914521a1a4157008 | RegionOne | http://172.18.121.48:8042 | http://172.18.121.48:8042 | http://172.18.121.48:8042 | 07cd8adf66254b4ab9b07be03a24084b |
| 595913f7227b44dc8753db3b0cf6acdc | RegionOne | http://172.18.121.48:8041 | http://172.18.121.48:8041 | http://172.18.121.48:8041 | f43240abe5f3476ea64a8bd381fe4da7 |
| 64381b509bc84639b6a4710e6d99a23b | RegionOne | http://172.18.121.48:8776/v1/%(tenant_id)s | http://172.18.121.48:8776/v1/%(tenant_id)s | http://172.18.121.48:8776/v1/%(tenant_id)s | 7edc7bedf93d4f388185699b9793ec7f |
| 727d25775be54c9f8453f697ae5cb625 | RegionOne | http://172.18.121.48:5000/v2.0 | http://172.18.121.48:5000/v2.0 | http://172.18.121.48:35357/v2.0 | 25e99a2a98f244d9a73bf965acdd39da |
| 9049338c57574b2d8ff8308b1a4265a5 | RegionOne | http://172.18.121.48:8776/v2/%(tenant_id)s | http://172.18.121.48:8776/v2/%(tenant_id)s | http://172.18.121.48:8776/v2/%(tenant_id)s | 6e070f0629094b72b66025250fdbda64 |
| c051c0f9649143f6b29eaf0895940abe | RegionOne | http://172.18.121.48:9292 | http://172.18.121.48:9292 | http://172.18.121.48:9292 | 40874e10139a47eb88dfec2114047a34 |
| ee4a00c1e8334cb8921fa3f2a7c82f1b | RegionOne | http://172.18.121.48:8774/v3 | http://172.18.121.48:8774/v3 | http://172.18.121.48:8774/v3 | 24c92ce4cd354e3db6c5ad59b8beeae8 |
| fa60b5ba0ab7436ab1ffebb2982d3ccc | RegionOne | http://127.0.0.1:8776/v3/%(tenant_id)s | http://127.0.0.1:8776/v3/%(tenant_id)s | http://127.0.0.1:8776/v3/%(tenant_id)s | b0b6b97cf9d649c9800300cc64b0e866 |
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# netstat -ntlp | grep 8774
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# ps -ef | grep nova-api
nova 156427 1 86 07:51 ? 00:00:01 /usr/bin/python2
/usr/bin/nova-api
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# lsof -i :8774
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# keystone user-list
[... same DeprecationWarning messages as above, omitted ...]
| id | name | enabled | email |
| 266f5859848e4f39b9725203dda5c3f2 | admin | True | root@localhost |
| 79a6ff3cc7cc4d018247c750adbc18e7 | aodh | True | aodh@localhost |
| 90f28a2a80054132a901d39da307213f | ceilometer | True | ceilometer@localhost |
| 16fa5ffa60e147d89ad84646b6519278 | cinder | True | cinder@localhost |
| c6312ec6c2c444288a412f32173fcd99 | glance | True | glance@localhost |
| ac8fb9c33d404a1697d576d428db90b3 | gnocchi | True | gnocchi@localhost |
| 1a5b4da4ed974ac8a6c78b752ac8fab6 | neutron | True | neutron@localhost |
| f21e8a15da5c40b7957416de4fa91b62 | nova | True | nova@localhost |
| b843358d7ae44944b11af38ce4b61f4d | swift | True | swift@localhost |
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# nova-manage service list
Option "verbose" from group "DEFAULT" is deprecated for removal. Its value
may be silently ignored in the future.
Option "notification_driver" from group "DEFAULT" is deprecated. Use option
"driver" from group "oslo_messaging_notifications".
Option "notification_topics" from group "DEFAULT" is deprecated. Use option
"topics" from group "oslo_messaging_notifications".
DEPRECATED: Use the nova service-* commands from python-novaclient instead
or the os-services REST resource. The service subcommand will be removed in
the 14.0 release.
Binary              Host       Zone      Status   State  Updated_At
nova-osapi_compute  0.0.0.0    internal  enabled  XXX    None
nova-metadata       0.0.0.0    internal  enabled  XXX    None
nova-cert           localhost  internal  enabled  XXX    2016-06-08 06:12:38
nova-consoleauth    localhost  internal  enabled  XXX    2016-06-08 06:12:37
nova-scheduler      localhost  internal  enabled  XXX    2016-06-08 06:12:38
nova-conductor      localhost  internal  enabled  XXX    2016-06-08 06:12:37
nova-compute        localhost  nova      enabled  XXX    2016-06-08 06:12:43
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# ls -l /var/log/nova/
total 4
-rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-api.log
-rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-cert.log
-rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-compute.log
-rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-conductor.log
-rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-consoleauth.log
-rw-r--r--. 1 root root 0 Jun 8 05:22 nova-manage.log
-rw-r--r--. 1 nova nova 995 Jun 8 05:32 nova-novncproxy.log
-rw-r--r--. 1 nova nova 0 Jun 8 05:23 nova-scheduler.log
[root@localhost ~(keystone_admin)]#
Regards,
Chinmaya
Re: [rdo-list] Packstack refactor and future ideas
by Javier Pena
----- Original Message -----
> On Jun 8, 2016 11:54 PM, "Ivan Chavero" < ichavero(a)redhat.com > wrote:
> >
> >
> >
> > ----- Original Message -----
> > > From: "Hugh Brock" < hbrock(a)redhat.com >
> > > To: "Ivan Chavero" < ichavero(a)redhat.com >
> > > Cc: "Javier Pena" < javier.pena(a)redhat.com >, "David Moreau Simard" <
> > > dms(a)redhat.com >, "rdo-list" < rdo-list(a)redhat.com >
> > > Sent: Wednesday, June 8, 2016 5:40:39 PM
> > > Subject: Re: [rdo-list] Packstack refactor and future ideas
> > >
> > > On Jun 8, 2016 11:33 PM, "Ivan Chavero" < ichavero(a)redhat.com > wrote:
> > > >
> > > >
> > > >
> > > > ----- Original Message -----
> > > > > From: "David Moreau Simard" < dms(a)redhat.com >
> > > > > To: "Ivan Chavero" < ichavero(a)redhat.com >
> > > > > Cc: "Javier Pena" < javier.pena(a)redhat.com >, "rdo-list" <
> > > rdo-list(a)redhat.com >
> > > > > Sent: Wednesday, June 8, 2016 3:37:08 PM
> > > > > Subject: Re: [rdo-list] Packstack refactor and future ideas
> > > > >
> > > > > On Wed, Jun 8, 2016 at 3:27 PM, Ivan Chavero < ichavero(a)redhat.com >
> > > wrote:
> > > > > > I think it can be reduced to a single manifest per node.
> > > > > > Also, when a review is created it would be easier to check if you
> > > create
> > > > > > one
> > > > > > review for the python, puppet, tests and release notes.
> > > > >
> > > > > This would not pass CI and thus could not be merged.
> > > > > If there are separate commits, each must pass CI.
> > > >
> > > > well, make it just one big commit if there's no way around this
> > > >
> > > > > Otherwise, my opinion is that Packstack should focus on being a lean,
> > > > > simple and efficient single node installation tool that targets the
> > > > > same use case as DevStack but for the RHEL-derivatives and RDO/OSP
> > > > > population.
> > > > > A tool that is lightweight, simple (to an extent), easy to extend and
> > > > > add new projects in and focuses on developers and proof of concepts.
> > > >
> > > > > I don't believe Packstack should be able to handle multi-node by
> > > > > itself.
> > > > > I don't think I am being pessimistic by saying there are too few
> > > > > resources contributing to Packstack to make multi-node a good story.
> > > > > We're not testing Packstack multi-node right now and testing it
> > > > > properly is really hard, just ask the whole teams of people focused
> > > > > on
> > > > > just testing TripleO.
> > > >
> > > > So, in your opinion, we should drop features that Packstack already
> > > > has because they are difficult to test. I don't agree with this; we
> > > > can mark the untested features as "experimental" or "unsupported".
> > > >
> > > >
> > > > > If Packstack is really good at installing things on one node, an
> > > > > advanced/experienced user could have Packstack install components on
> > > > > different servers if that is what he is looking for.
> > > > >
> > > > > Pseudo-code:
> > > > > - Server 1: packstack --install-rabbitmq=y --install-mariadb=y
> > > > > - Server 2: packstack --install-keystone=y --rabbitmq-server=server1
> > > > > --database-server=server1
> > > > > - Server 3: packstack --install-glance=y --keystone-server=server2
> > > > > --database-server=server1 --rabbitmq-server=server1
> > > > > - Server 4: packstack --install-nova=y --keystone-server=server2
> > > > > --database-server=server1 --rabbitmq-server=server1
> > > > > (etc)
> > > >
> > > > I could be wrong, but Packstack can already do this today; a few
> > > > more command line options or small tweaks to the code might be
> > > > needed, but it is not far from the current Packstack options.
> > > >
> > > > > So in my concept, Packstack is not able to do multi node by itself
> > > > > but
> > > > > provides the necessary mechanisms to allow to be installed across
> > > > > different nodes.
> > > > > If an orchestration or wrapper mechanism is required, Ansible is an
> > > > > obvious choice but not the only one.
> > > > > Using Ansible would, notably, make it easy to rip out all the python
> > > > > code that's around executing things on servers over SSH.
> > > > >
> > > >
> > > >
> > > > I think this refactor discussion should focus on proper Puppet usage
> > > > and optimizations instead of retiring stuff that already works.
> > > > Actually, Packstack used to be able to install all the components on
> > > > different nodes, and that feature was pared down to the current
> > > > limited multinode support.
> > > >
> > > > We need a tool like Packstack so users can try RDO without the
> > > > complexity of TripleO. Imagine you're new to OpenStack and want to
> > > > test it in different scenarios: not everybody has a spare machine
> > > > with 16 GB of RAM just for testing, not to mention having to
> > > > understand the concept of an undercloud before understanding the key
> > > > concepts of OpenStack.
> > > >
> > > > Cheers,
> > > > Ivan
> > >
> > > Here's a possibly stupid question, indulge me....
> > >
> > > Seems like we're always going to need a simple (ish) tool that just
> > > installs the openstack services on a single machine, without any need for
> > > VMs.
> > >
> > > In fact, the tripleo installer - instack - is one such tool. Packstack is
> > > another, more flexible such tool. Should we consider merging or adapting
> > > them to be the same tool?
> >
> > I don't think this is stupid at all. Actually, TripleO and Packstack are
> > both based on the OpenStack Puppet modules, but I don't think you can
> > merge them, since the behaviour of the two tools is very different:
> > Packstack is not focused on managing the hardware, it's just focused on
> > installing OpenStack. I'm not very familiar with TripleO's inner
> > workings, but I think it would be very difficult to make it more like
> > Packstack.
> >
> > Cheers,
> > Ivan
> No, sorry, I didn't mean merge packstack and tripleo, they are very different
> beasts. I meant merge the tripleo installer -- which is called "instack",
> and whose job it is to install an openstack undercloud on a single machine
> so that it can then install openstack -- with packstack, whose job it is to
> install openstack on a single machine, for whatever reason. Deployers who
> want a production install could then go on to deploy a full overcloud.
This could make a lot of sense. I'm not very aware of the instack internals, and I remember it had some code to create VMs for the overcloud on test environments, but it could make sense to use a specific Packstack profile for that.
Javier
> -Hugh
[rdo-list] [Meeting] RDO meeting (2016-06-08) Minutes
by Haïkel
==============================
#rdo: RDO meeting (2016-06-08)
==============================
Meeting started by number80 at 15:00:36 UTC. The full logs are available
at
https://meetbot.fedoraproject.org/rdo/2016-06-08/rdo_meeting_(2016-06-08)...
.
Meeting summary
---------------
* LINK: https://etherpad.openstack.org/p/RDO-Meeting (number80,
15:02:02)
* DLRN instance migration to ci.centos infra (recurring) (number80,
15:04:44)
* ACTION: dmsimard to symlink hashes on internal dlrn
(current-passed-ci, current-tripleo) (dmsimard, 15:11:26)
* ACTION: jpena to switch DNS for trunk-primary to the ci.centos.org
instance on Jun 13 (jpena, 15:17:26)
* Test day readiness (number80, 15:17:52)
* LINK: https://www.rdoproject.org/testday/ (number80, 15:19:48)
* ACTION: everyone help rbowen to update test scenarios (number80,
15:21:43)
* Packstack refactor (number80, 15:23:19)
* LINK:
https://github.com/javierpena/packstack/commit/affad262614a375ed48eac5964...
crashed my browser (EmilienM, 15:25:36)
* LINK:
https://review.openstack.org/#/q/status:open+topic:tripleo-multinode
(EmilienM, 15:28:59)
* ACTION: jpena put packstack phase 2 discussion on the list
(number80, 15:36:21)
* Demos needed for RDO booth @ Red Hat Summit (number80, 15:38:08)
* LINK: https://etherpad.openstack.org/p/rhsummit-rdo-booth
(number80, 15:38:17)
* LINK: https://etherpad.openstack.org/p/rhsummit-rdo-booth (rbowen,
15:38:25)
* if you have a cool demo to show @ RH Summit ping rbowen (number80,
15:40:43)
* open floor (number80, 15:41:14)
* LINK: https://review.rdoproject.org/r/1100 adds a second plugin
(jpena, 15:42:46)
Meeting ended at 15:50:28 UTC.
Action Items
------------
* dmsimard to symlink hashes on internal dlrn (current-passed-ci,
current-tripleo)
* jpena to switch DNS for trunk-primary to the ci.centos.org instance on
Jun 13
* everyone help rbowen to update test scenarios
* jpena put packstack phase 2 discussion on the list
Action Items, by person
-----------------------
* dmsimard
* dmsimard to symlink hashes on internal dlrn (current-passed-ci,
current-tripleo)
* jpena
* jpena to switch DNS for trunk-primary to the ci.centos.org instance
on Jun 13
* jpena put packstack phase 2 discussion on the list
* rbowen
* everyone help rbowen to update test scenarios
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* number80 (51)
* dmsimard (47)
* jpena (39)
* leifmadsen (15)
* imcsk8 (14)
* trown (13)
* rbowen (11)
* EmilienM (9)
* zodbot (8)
* openstack (4)
* amoralej (4)
* Duck (3)
* ccamacho (2)
* larsks (1)
* champson (1)
* eggmaster (1)
* rdogerrit (1)
* trwon (0)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
[rdo-list] Reinstallation RDO Openstack
by Andrey Shevel
Hello,
after updating the OS (Scientific Linux 7.2) I decided to reinstall OpenStack.
I did everything from the page openstack.redhat.com.
Unfortunately, I got the message below (I tried several times and got
exactly the same answer):
Applying 212.193.96.154_keystone.pp
Applying 212.193.96.154_glance.pp
Applying 212.193.96.154_cinder.pp
212.193.96.154_keystone.pp: [ DONE ]
212.193.96.154_glance.pp: [ DONE ]
212.193.96.154_cinder.pp: [ DONE ]
Applying 212.193.96.154_api_nova.pp
212.193.96.154_api_nova.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 212.193.96.154_api_nova.pp
Error: Could not autoload puppet/provider/nova_flavor/openstack:
uninitialized constant Puppet::Provider::Openstack
You will find full trace in log
/var/tmp/packstack/20160608-154226-EtZiWG/manifests/212.193.96.154_api_nova.pp.log
Please check log file
/var/tmp/packstack/20160608-154226-EtZiWG/openstack-setup.log for more
information
Additional information:
* A new answerfile was created in: /root/packstack-answers-20160608-154226.txt
* Time synchronization installation was skipped. Please note that
unsynchronized time on server instances might be problem for some
OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client
host 212.193.96.154. To use the command line tools you need to source
the file.
* To access the OpenStack Dashboard browse to http://212.193.96.154/dashboard .
Please, find your login credentials stored in the keystonerc_admin in
your home directory.
* To use Nagios, browse to http://212.193.96.154/nagios username:
nagiosadmin, password: 89651f6d0bdd4176
++ echo
+++ date
++ echo 'Stop Date & Time = ' Wed Jun 8 15:51:09 MSK 2016
Stop Date & Time = Wed Jun 8 15:51:09 MSK 2016
[root@lmsys001 ~]# uname -a
Linux lmsys001.pnpi.spb.ru 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May
12 04:13:05 CDT 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@lmsys001 ~]# cat /proc/version
Linux version 3.10.0-327.18.2.el7.x86_64
(mockbuild(a)sl7-uefisign.fnal.gov) (gcc version 4.8.5 20150623 (Red Hat
4.8.5-4) (GCC) ) #1 SMP Thu May 12 04:13:05 CDT 2016
[root@lmsys001 ~]# puppet --version
3.6.2
===================================
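The "uninitialized constant Puppet::Provider::Openstack" autoload failure after an OS update usually points at a stale mix of Puppet modules cached from the earlier install (an assumption, not confirmed by the log above). A hedged sketch of the usual cleanup before re-running packstack:

```shell
# Sketch: clear packstack's cached manifests/modules and refresh the
# packaged Puppet modules, then re-run with the saved answer file.
# (Guarded so each step is skipped where the tool is unavailable.)
rm -rf /var/tmp/packstack
if command -v yum >/dev/null 2>&1; then
    yum clean all >/dev/null 2>&1 || true
    yum -y update openstack-puppet-modules puppet || true
fi
# packstack --answer-file=/root/packstack-answers-20160608-154226.txt
echo "packstack cache cleared"
```

The answer-file path above is the one packstack reported generating; substitute the file from your own run.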
Any ideas would be helpful.
Thanks in advance.
--
Andrey Y Shevel
[rdo-list] Unable to log in to the VM instance’s console using openstack-mitaka release
by Chinmaya Dwibedy
Hi All,
I have installed OpenStack (the openstack-mitaka release) on CentOS 7.2 and
used a Fedora 20 qcow2 cloud image to create a VM through the Dashboard.
1) Installed “libguestfs” on Nova compute node.
2) Updated these lines in “/etc/nova/nova.conf ”
inject_password=true
inject_key=true
inject_partition=-1
3) Restarted nova-compute: # service openstack-nova-compute restart
4) Enabled setting root password in
/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
OPENSTACK_HYPERVISOR_FEATURES = {
…..
‘can_set_password’: True,
}
5) Placed the below code in “Customization Script” section of the
Launch Instance dialog box in OpenStack.
#cloud-config
ssh_pwauth: True
chpasswd:
list: |
root: root
expire: False
runcmd:
- [ sh, -c, echo "=========hello world'=========" ]
It appears that, when the instance was launched, cloud-init did not
change the password for the root user, and I was not able to log in to
the instance’s console (Dashboard) using username root and password
root; it says “Login incorrect”.
Checking the boot log, I found that cloud-init executed
/var/lib/cloud/instance/scripts/runcmd and printed "hello world". Can
anyone please let me know where I am wrong? Thanks in advance for your
support and time.
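One likely culprit is YAML structure: the plain-text archive strips indentation, `chpasswd` entries must be `user:password` with no space after the colon, and `expire` is a key of `chpasswd` itself, not part of `list`. A sketch of a stanza that cloud-init accepts (assuming the image's cloud-init version supports the `chpasswd` module):

```yaml
#cloud-config
ssh_pwauth: true
chpasswd:
  expire: false
  list: |
    root:root
runcmd:
  - [ sh, -c, 'echo "=========hello world========="' ]
```

With `root: root` (note the space) the string after the colon, including the leading space, becomes the password, which would also explain "Login incorrect".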
Regards,
Chinmaya
Re: [rdo-list] Read the docs for DLRN shows an old version
by David Moreau Simard
Hey,
Just letting you know I haven't forgotten about this, still on my to-do
list.
David Moreau Simard
Senior Software Engineer | Openstack RDO
dmsimard = [irc, github, twitter]
On May 30, 2016 8:59 PM, "Gerard Braad" <me(a)gbraad.nl> wrote:
Hi All,
The Read the Docs site for DLRN is showing an older version of the
documentation. Likely a push does not trigger a rebuild of the docs
automatically; I have experienced the same with one of my projects. I
created an issue on the project's GitHub for this [1]. Hope this can
be resolved.
regards,
Gerard
[1] https://github.com/openstack-packages/DLRN/issues/17
--
Gerard Braad
F/OSS & IT Consultant
_______________________________________________
rdo-list mailing list
rdo-list(a)redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list
To unsubscribe: rdo-list-unsubscribe(a)redhat.com
[rdo-list] Issue with assignment of Intel’s QAT Card to VM (PCI-passthrough) using openstack-mitaka release on Cent OS 7.2 host
by Chinmaya Dwibedy
Hi All,
I want the Intel’s QAT Card to be used for PCI Passthrough device. But to
implement PCI-passthrough, when I launch a VM using a flavor configured for
passthrough, it gives the below errors in nova-conductor.log and instance
goes into Error state. Note that, I have installed
openstack-mitaka release on host (Cent OS 7.2). Can anyone please have a
look into the below stated and let me know if I have missed anything or
done anything wrong? Thank you in advance for your support and time.
When I create an instance, this error is output in nova-conductor.log.
2016-06-06 05:42:42.005 4898 WARNING nova.scheduler.utils
[req-94484e27-1998-4e3a-8aa8-06805613ae65 266f5859848e4f39b9725203dda5c3f2
4bc608763cee41d9a8df26d3ef919825 - - -] Failed to
compute_task_build_instances: No valid host was found. There are not enough
hosts available.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
line 150, in inner
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line
104, in select_destinations
dests = self.driver.select_destinations(ctxt, spec_obj)
File
"/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line
74, in select_destinations
raise exception.NoValidHost(reason=reason)
*NoValidHost: No valid host was found. There are not enough hosts
available.*
2016-06-06 05:42:42.006 4898 WARNING nova.scheduler.utils
[req-94484e27-1998-4e3a-8aa8-06805613ae65 266f5859848e4f39b9725203dda5c3f2
4bc608763cee41d9a8df26d3ef919825 - - -] [instance:
f1db1cce-0777-4f0e-a141-4b278c2d98b4] Setting instance to ERROR state
In order to assign Intel’s QAT card to VMs, I followed the procedure below.
Using the PCI bus ID, I found the product ID:
1) [root@localhost ~(keystone_admin)]# lspci -nn | grep QAT
83:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT
[8086:0435]
88:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT
[8086:0435]
[root@localhost ~(keystone_admin)]# cat
/sys/bus/pci/devices/0000:83:00.0/device
0x0435
[root@localhost ~(keystone_admin)]# cat
/sys/bus/pci/devices/0000:88:00.0/device
0x0435
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]#
2) Configured the below stated in nova.conf
pci_alias = {"name": "QuickAssist", "product_id": "0435", "vendor_id":
"8086", "device_type": "type-PCI"}
pci_passthrough_whitelist = [{"vendor_id":"8086","product_id":"0435"}]
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,PciPassthroughFilter
scheduler_available_filters=nova.scheduler.filters.all_filter
3) service openstack-nova-api restart
4) systemctl restart openstack-nova-compute
5) [root@localhost ~(keystone_admin)]# nova flavor-list
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
[root@localhost ~(keystone_admin)]#
6) nova flavor-key 4 set "pci_passthrough:alias"="QuickAssist:1"
7) [root@localhost ~(keystone_admin)]# nova flavor-show 4
+----------------------------+--------------------------------------------+
| Property | Value |
+----------------------------+--------------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 80 |
| extra_specs | {"pci_passthrough:alias": "QuickAssist:1"} |
| id | 4 |
| name | m1.large |
| os-flavor-access:is_public | True |
| ram | 8192 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------------+
[root@localhost ~(keystone_admin)]#
8) [root@localhost ~(keystone_admin)]# nova boot --flavor 4 --key_name
oskey1 --image bc859dc5-103b-428b-814f-d36e59009454 --nic
net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be --user-data=./myfile.txt TEST
WARNING: Option "--key_name" is deprecated; use "--key-name"; this option
will be removed in novaclient 3.3.0.
| Property | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000026 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | 7ZKdcaQut7gu |
| config_drive | |
| created | 2016-06-06T09:42:41Z |
| flavor | m1.large (4) |
| hostId | |
| id | f1db1cce-0777-4f0e-a141-4b278c2d98b4 |
| image | Benu-vMEG-Dev-M.0.0.0-160525-1347 (bc859dc5-103b-428b-814f-d36e59009454) |
| key_name | oskey1 |
| metadata | {} |
| name | TEST |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 4bc608763cee41d9a8df26d3ef919825 |
| updated | 2016-06-06T09:42:41Z |
| user_id | 266f5859848e4f39b9725203dda5c3f2 |
[root@ localhost ~(keystone_admin)]#
9) MariaDB [nova]> select * from pci_devices;
| created_at | updated_at | deleted_at | deleted | id | compute_node_id | address | product_id | vendor_id | dev_type | dev_id | label | status | extra_info | instance_uuid | request_id | numa_node | parent_addr |
| 2016-06-03 12:01:45 | 2016-06-06 09:46:35 | NULL | 0 | 1 | 1 | 0000:83:00.0 | 0435 | 8086 | type-PF | pci_0000_83_00_0 | label_8086_0435 | available | {} | NULL | NULL | 1 | NULL |
| 2016-06-03 12:01:45 | 2016-06-06 09:46:35 | NULL | 0 | 2 | 1 | 0000:88:00.0 | 0435 | 8086 | type-PF | pci_0000_88_00_0 | label_8086_0435 | available | {} | NULL | NULL | 1 | NULL |
2 rows in set (0.00 sec)
MariaDB [nova]>
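(Aside: the `address` column in `pci_devices` uses the standard PCI notation `domain:bus:slot.function`. A minimal, hypothetical helper, not part of nova, showing how such addresses split apart:)

```python
def parse_pci_address(address):
    """Split a PCI address like '0000:83:00.0' into (domain, bus, slot, function)."""
    domain, bus, rest = address.split(":")
    slot, function = rest.split(".")
    return domain, bus, slot, function

# The two type-PF devices from the pci_devices table above:
for addr in ("0000:83:00.0", "0000:88:00.0"):
    print(parse_pci_address(addr))
```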
[root@localhost ~(keystone_admin)]# dmesg | grep -e DMAR -e IOMMU
[ 0.000000] ACPI: DMAR 000000007b69a000 00130 (v01 INTEL S2600WT
00000001 INTL 20091013)
[ 0.128779] dmar: IOMMU 0: reg_base_addr fbffc000 ver 1:0 cap
d2078c106f0466 ecap f020de
[ 0.128785] dmar: IOMMU 1: reg_base_addr c7ffc000 ver 1:0 cap
d2078c106f0466 ecap f020de
[ 0.128911] IOAPIC id 10 under DRHD base 0xfbffc000 IOMMU 0
[ 0.128912] IOAPIC id 8 under DRHD base 0xc7ffc000 IOMMU 1
[ 0.128913] IOAPIC id 9 under DRHD base 0xc7ffc000 IOMMU 1
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# lscpu | grep Virtualization
Virtualization: VT-x
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# nova service-list
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| 9  | nova-cert        | localhost | internal | enabled | up    | 2016-06-07T04:58:28.000000 | -               |
| 10 | nova-consoleauth | localhost | internal | enabled | up    | 2016-06-07T04:58:30.000000 | -               |
| 11 | nova-scheduler   | localhost | internal | enabled | up    | 2016-06-07T04:58:30.000000 | -               |
| 12 | nova-conductor   | localhost | internal | enabled | up    | 2016-06-07T04:58:29.000000 | -               |
| 18 | nova-compute     | localhost | nova     | enabled | up    | 2016-06-07T04:58:29.000000 | -               |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
[root@localhost ~(keystone_admin)]#
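(Aside: when watching for a service flapping between up and down, it can help to scan `nova service-list`-style output programmatically. A sketch assuming the default table layout shown above; not an official tool:)

```python
def down_services(table_text):
    """Return (binary, host) pairs whose State column is not 'up'."""
    rows = []
    for line in table_text.splitlines():
        # Skip separators and the header row.
        if not line.startswith("|") or "Binary" in line:
            continue
        cols = [c.strip() for c in line.strip("|").split("|")]
        # Columns: Id, Binary, Host, Zone, Status, State, Updated_at, Disabled Reason
        if len(cols) >= 6 and cols[5] != "up":
            rows.append((cols[1], cols[2]))
    return rows

sample = """\
| Id | Binary         | Host      | Zone     | Status  | State | Updated_at | Disabled Reason |
| 9  | nova-cert      | localhost | internal | enabled | up    | -          | -               |
| 18 | nova-compute   | localhost | nova     | enabled | down  | -          | -               |
"""
print(down_services(sample))
```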
[root@localhost ~(keystone_admin)]# nova host-list
+-----------+-------------+----------+
| host_name | service | zone |
+-----------+-------------+----------+
| localhost | cert | internal |
| localhost | consoleauth | internal |
| localhost | scheduler | internal |
| localhost | conductor | internal |
| localhost | compute | nova |
+-----------+-------------+----------+
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-----------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host      | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-----------+-------------------+-------+----------------+---------------------------+
| 0e81d20f-b41d-490a-966a-7171880963b9 | Metadata agent     | localhost |                   | :-)   | True           | neutron-metadata-agent    |
| 2ccb17dc-35d8-41cc-8e5d-83496a7e26b0 | Metering agent     | localhost |                   | :-)   | True           | neutron-metering-agent    |
| 6fef2fa7-2479-4d45-889c-b38b854ac3e3 | DHCP agent         | localhost | nova              | :-)   | True           | neutron-dhcp-agent        |
| 87c976cc-e3cd-4818-aa4f-ee599bf812b1 | L3 agent           | localhost | nova              | :-)   | True           | neutron-l3-agent          |
| aeb4f399-2281-4ad3-b880-802812910ec8 | Open vSwitch agent | localhost |                   | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-----------+-------------------+-------+----------------+---------------------------+
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status |
+----+---------------------+-------+---------+
| 1 | localhost | up | enabled |
+----+---------------------+-------+---------+
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# grep ^virt_type /etc/nova/nova.conf
virt_type=kvm
[root@localhost ~(keystone_admin)]#
[root@localhost ~(keystone_admin)]# grep ^compute_driver /etc/nova/nova.conf
compute_driver=libvirt.LibvirtDriver
[root@localhost ~(keystone_admin)]#
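(Aside: nova.conf is a standard INI-style file, so values like these can also be read programmatically instead of grepped. A minimal sketch using Python's stdlib configparser against a sample fragment written to a temporary file, not the real /etc/nova/nova.conf; the exact section a given option lives in may differ between releases:)

```python
import configparser
import os
import tempfile

# Sample fragment mirroring the settings grepped above (assumed layout).
sample = """\
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = kvm
"""

with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write(sample)
    path = f.name

cfg = configparser.ConfigParser()
cfg.read(path)
print(cfg.get("DEFAULT", "compute_driver"))  # libvirt.LibvirtDriver
print(cfg.get("libvirt", "virt_type"))       # kvm
os.unlink(path)
```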
[root@localhost ~(keystone_admin)]#
Regards,
Chinmaya
8 years, 5 months
Re: [rdo-list] Baremetal Tripleo stable version?
by Christopher Brown
I'm glad you said it.
I'm having exactly the same problem. I've had to customize an image due to missing python-hardware-detect, and introspection is very hit-and-miss.
Currently testing Delorean packages, but I may have to consider reverting to Liberty.
The documentation is not clear and still references Liberty. I think some Mitaka stabilisation work would be gratefully received.
Regards,
Christopher Brown
-------- Original message --------
From: Pedro Sousa <pgsousa(a)gmail.com>
Date: 03/06/2016 04:11 (GMT+00:00)
To: rdo-list <rdo-list(a)redhat.com>
Subject: [rdo-list] Baremetal Tripleo stable version?
Hi all,
I've been doing some tests on baremetal hosts, but I'm stuck, and it's starting to get frustrating.
I've followed the documentation from http://docs.openstack.org/developer/tripleo-docs/
First I tried the stable Liberty version and got stuck on an outdated python-oslo-config package bug.
Then I've tried mitaka and I got stuck in this error:
"Could not retrieve fact='rabbitmq_nodename', resolution='<anonymous>': undefined method `[]' for nil:NilClass Could not retrieve fact='rabbitmq_nodename', resolution='<anonymous>': undefined method `[]' for nil:NilClass"
My question is: is there a stable version we can rely on for installing the overcloud on baremetal hosts?
Thanks
8 years, 5 months
[rdo-list] Upcoming RDO/OpenStack Meetups
by Rich Bowen
The following are the meetups I'm aware of in the coming week where
OpenStack and/or RDO enthusiasts are likely to be present. If you know
of others, please let me know, and/or add them to
http://rdoproject.org/events
If there's a meetup in your area, please consider attending. If you
attend, please consider taking a few photos, and possibly even writing
up a brief summary of what was covered.
--Rich
* Monday June 06 in Paris, FR: Discutons OpenStack et containers -
http://www.meetup.com/Meetup-SUSE-Linux-Paris/events/231095109/
* Tuesday June 07 in Sydney, AU: June Sydney Meetup - SDN 101 and
Gnocchi -
http://www.meetup.com/Australian-OpenStack-User-Group/events/229602105/
* Tuesday June 07 in San Jose, CA, US: Come and talk about Openstack
Project Romana and Datera Storage -
http://www.meetup.com/Silicon-Valley-OpenStack-Ops-Meetup/events/231210364/
* Tuesday June 07 in Fort Collins, CO, US: Heat usage -
http://www.meetup.com/OpenStack-Colorado/events/231434361/
* Wednesday June 08 in Prague, CZ: OpenStack Day Prague -
http://www.meetup.com/OpenStack-Czech-User-Group-Meetup/events/228029462/
* Wednesday June 08 in Houston, TX, US: OpenStack & Cisco UCS -
http://www.meetup.com/Houston-Cisco-UCS-Meetup/events/230853127/
* Thursday June 09 in San Antonio, TX, US: Passing the Certified
OpenStack Administrator Test Part 1: OpenStack Overview -
http://www.meetup.com/SA-Open-Stackers/events/231626701/
* Thursday June 09 in San Francisco, CA, US: SF Bay OpenStack Meetup:
Data-Driven, Cost-Based OpenStack Capacity Management -
http://www.meetup.com/openstack/events/231297777/
* Thursday June 09 in San Diego, CA, US: OpenStack LAMP & Load Balancing
as a Service -
http://www.meetup.com/San-Diego-Cloud-Computing-Meetup/events/231422642/
* Thursday June 09 in Montevideo, UY: El 13 no es mala suerte -
http://www.meetup.com/OpenStack-Uruguay/events/231426806/
* Friday June 10 in Dublin, IE: OpenStack Ireland Day - June 10th 2016 -
http://www.meetup.com/OpenStack-Ireland/events/229221735/
* Friday June 10 in Houston, TX, US: Arista, Neutron, and Docker Oh My!
Did I mention Docker!? -
http://www.meetup.com/openstackhoustonmeetup/events/231594293/
8 years, 5 months