[Rdo-list] OpenShift inside OpenStack? How to access the internal network from outside?
by Antonio C. Velez
I successfully installed OpenShift Origin inside OpenStack, but I cannot access my OpenShift apps from the external network. The broker and node already have floating IPs, but the DNS on the broker assigns internal IPs to my apps. What is the correct procedure to fix this?
Thanks in advance!!!
------------------
Antonio C. Velez Baez
Linux Consultant
Vidalinux.com
RHCE, RHCI, RHCX, RHCOE
Red Hat Certified Training Center
Email: acvelez(a)vidalinux.com
Tel: 1-787-439-2983
Skype: vidalinuxpr
Twitter: @vidalinux.com
Website: www.vidalinux.com
10 years, 4 months
[Rdo-list] Icehouse MariaDB-Galera -Server problem.
by john decot
Hi,
I am new to OpenStack and am using RDO for the installation.
The `packstack --allinone` command fails with an error: cannot find
mariadb-galera-server in repo.
Any help will be appreciated.
Thank You,
John.
[Rdo-list] Quickstart should mention architecture requirements
by Lars Kellogg-Stedman
I just spent some time debugging an issue on #rdo in which someone
appeared to have done everything correctly but was unable to install
RDO because of several missing packages.
It turns out this was because they were working with an i686 CentOS
image.
I think we need to update the "Prerequisites" section of the
Quickstart document (http://openstack.redhat.com/Quickstart) to
indicate that we only support x86_64, because otherwise this is a
tricky failure mode to detect: there are no particular errors, and all
the .noarch packages still show up, so the problem is not immediately
obvious.
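To make this failure mode visible up front, the Quickstart could even suggest a preflight check; a minimal sketch (the check is my suggestion, not existing Quickstart content):

```shell
# Fail fast on unsupported architectures instead of dying later on
# mysteriously missing packages.
check_arch() {
    # RDO only ships x86_64 builds (plus .noarch packages).
    [ "$1" = "x86_64" ]
}

if check_arch "$(uname -m)"; then
    echo "architecture supported"
else
    echo "RDO requires x86_64; detected $(uname -m)" >&2
fi
```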
--
Lars Kellogg-Stedman <lars(a)redhat.com> | larsks @ irc
Cloud Engineering / OpenStack | " " @ twitter
Re: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device"
by Benoit ML
Hello,
I know; that is why I started with the ml2 plugin. Could it perhaps be
a bug in the ml2 plugin?
Next week I will try ml2 with admin credentials on Neutron and see.
Also, for information, the Puppet RDO modules from Foreman configure
everything with the openvswitch plugin by default.
Thank you.
Regards,
2014-07-17 21:59 GMT+02:00 Miguel Angel <miguelangel(a)ajo.es>:
> Be aware that the ovs plugin is deprecated and is going to be removed
> in Juno. This could only hurt if you wanted to upgrade to Juno at a
> later time. Otherwise it may be OK.
>
> Could you try using the admin credentials in the settings?
> On Jul 17, 2014 4:54 PM, "Benoit ML" <ben42ml(a)gmail.com> wrote:
>
>> Hello,
>>
>> Everything now works. I replaced ml2 with openvswitch as the core plugin.
>>
>>
>> ##core_plugin =neutron.plugins.ml2.plugin.Ml2Plugin
>> core_plugin =openvswitch
>>
>>
>> Regards,
>>
>>
>>
>> 2014-07-16 17:28 GMT+02:00 Benoit ML <ben42ml(a)gmail.com>:
>>
>>> Hello,
>>>
>>> Another mail about the problem. I have enabled debug = True in
>>> keystone.conf.
>>>
>>> After a `nova migrate <VM>`, `nova show <VM>` reports:
>>>
>>> ==============================================================================
>>> | fault | {"message": "Remote error:
>>> Unauthorized {\"error\": {\"message\": \"User
>>> 0b45ccc267e04b59911e88381bb450c0 is unauthorized for tenant services\",
>>> \"code\": 401, \"title\": \"Unauthorized\"}} |
>>>
>>> ==============================================================================
>>>
>>> So the user with id 0b45ccc267e04b59911e88381bb450c0 is neutron:
>>>
>>> ==============================================================================
>>> keystone user-list
>>> | 0b45ccc267e04b59911e88381bb450c0 | neutron | True | |
>>>
>>> ==============================================================================
>>>
>>> And the role assignment looks correct:
>>>
>>> ==============================================================================
>>> keystone user-role-add --user=neutron --tenant=services --role=admin
>>> Conflict occurred attempting to store role grant. User
>>> 0b45ccc267e04b59911e88381bb450c0 already has role
>>> 734c2fb6fb444792b5ede1fa1e17fb7e in tenant dea82f7937064b6da1c370280d8bfdad
>>> (HTTP 409)
>>>
>>>
>>> keystone user-role-list --user neutron --tenant services
>>>
>>> +----------------------------------+-------+----------------------------------+----------------------------------+
>>> | id | name |
>>> user_id | tenant_id |
>>>
>>> +----------------------------------+-------+----------------------------------+----------------------------------+
>>> | 734c2fb6fb444792b5ede1fa1e17fb7e | admin |
>>> 0b45ccc267e04b59911e88381bb450c0 | dea82f7937064b6da1c370280d8bfdad |
>>>
>>> +----------------------------------+-------+----------------------------------+----------------------------------+
>>>
>>> keystone tenant-list
>>> +----------------------------------+----------+---------+
>>> | id | name | enabled |
>>> +----------------------------------+----------+---------+
>>> | e250f7573010415da6f191e0b53faae5 | admin | True |
>>> | fa30c6bdd56e45dea48dfbe9c3ee8782 | exploit | True |
>>> | dea82f7937064b6da1c370280d8bfdad | services | True |
>>> +----------------------------------+----------+---------+
>>>
>>>
>>> ==============================================================================
>>>
>>>
>>> I really can't see where my mistake is... can you help me, please?
>>>
>>>
>>> Thank you in advance !
>>>
>>> Regards,
>>>
>>>
>>>
>>>
>>>
>>>
>>> 2014-07-15 15:13 GMT+02:00 Benoit ML <ben42ml(a)gmail.com>:
>>>
>>> Hello again,
>>>>
>>>> Ok, on the controller node I modified the neutron server configuration with
>>>> nova_admin_tenant_id = f23ed5be5f534fdba31d23f60621347d
>>>>
>>>> where that id is the "services" tenant in Keystone, and it now works with
>>>> "vif_plugging_is_fatal = True". Good.
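As an aside, the tenant id used above can be scripted rather than copied by hand; a small sketch that pulls it from `keystone tenant-list` output (the table is the one quoted elsewhere in this thread, so the id differs from the one above; a live run would pipe the command directly):

```shell
# Extract the "services" tenant id from `keystone tenant-list` output.
# Table contents reproduced from this thread; a live run would use:
#   keystone tenant-list | awk '/ services / {print $2}'
tenant_list='+----------------------------------+----------+---------+
|                id                |   name   | enabled |
+----------------------------------+----------+---------+
| e250f7573010415da6f191e0b53faae5 |  admin   |   True  |
| fa30c6bdd56e45dea48dfbe9c3ee8782 | exploit  |   True  |
| dea82f7937064b6da1c370280d8bfdad | services |   True  |
+----------------------------------+----------+---------+'
services_id=$(printf '%s\n' "$tenant_list" | awk '/ services / {print $2}')
echo "$services_id"
```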
>>>>
>>>> However, the migration still doesn't work...
>>>>
>>>>
>>>>
>>>>
>>>> 2014-07-15 14:20 GMT+02:00 Benoit ML <ben42ml(a)gmail.com>:
>>>>
>>>> Hello,
>>>>>
>>>>> Thank you for taking time !
>>>>>
>>>>> Well, on the compute node, when I activate "vif_plugging_is_fatal =
>>>>> True", VM creation gets stuck in the spawning state, and in the neutron
>>>>> server log I have:
>>>>>
>>>>> =======================================
>>>>> 2014-07-15 14:12:52.351 18448 DEBUG neutron.notifiers.nova [-] Sending
>>>>> events: [{'status': 'completed', 'tag':
>>>>> u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged',
>>>>> 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] send_events
>>>>> /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:218
>>>>> 2014-07-15 14:12:52.354 18448 INFO urllib3.connectionpool [-] Starting
>>>>> new HTTP connection (1): localhost
>>>>> 2014-07-15 14:12:52.360 18448 DEBUG urllib3.connectionpool [-] "POST
>>>>> /v2/5c9c186a909e499e9da0dd5cf2c403e0/os-server-external-events HTTP/1.1"
>>>>> 401 23 _make_request
>>>>> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295
>>>>> 2014-07-15 14:12:52.362 18448 INFO urllib3.connectionpool [-] Starting
>>>>> new HTTP connection (1): localhost
>>>>> 2014-07-15 14:12:52.452 18448 DEBUG urllib3.connectionpool [-] "POST
>>>>> /v2.0/tokens HTTP/1.1" 401 114 _make_request
>>>>> /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295
>>>>> 2014-07-15 14:12:52.453 18448 ERROR neutron.notifiers.nova [-] Failed
>>>>> to notify nova on events: [{'status': 'completed', 'tag':
>>>>> u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged',
>>>>> 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}]
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Traceback
>>>>> (most recent call last):
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File
>>>>> "/usr/lib/python2.7/site-packages/neutron/notifiers/nova.py", line 221, in
>>>>> send_events
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova
>>>>> batched_events)
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File
>>>>> "/usr/lib/python2.7/site-packages/novaclient/v1_1/contrib/server_external_events.py",
>>>>> line 39, in create
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova
>>>>> return_raw=True)
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File
>>>>> "/usr/lib/python2.7/site-packages/novaclient/base.py", line 152, in _create
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova _resp,
>>>>> body = self.api.client.post(url, body=body)
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File
>>>>> "/usr/lib/python2.7/site-packages/novaclient/client.py", line 312, in post
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova return
>>>>> self._cs_request(url, 'POST', **kwargs)
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova File
>>>>> "/usr/lib/python2.7/site-packages/novaclient/client.py", line 301, in
>>>>> _cs_request
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova raise e
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova
>>>>> Unauthorized: Unauthorized (HTTP 401)
>>>>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova
>>>>> 2014-07-15 14:12:58.321 18448 DEBUG neutron.openstack.common.rpc.amqp
>>>>> [-] received {u'_context_roles': [u'admin'], u'_context_request_id':
>>>>> u'req-9bf35c42-3477-4ed3-8092-af729c21198c', u'_context_read_deleted':
>>>>> u'no', u'_context_user_name': None, u'_context_project_name': None,
>>>>> u'namespace': None, u'_context_tenant_id': None, u'args': {u'agent_state':
>>>>> {u'agent_state': {u'topic': u'N/A', u'binary':
>>>>> u'neutron-openvswitch-agent', u'host': u'pvidgsh006.pvi', u'agent_type':
>>>>> u'Open vSwitch agent', u'configurations': {u'tunnel_types': [u'vxlan'],
>>>>> u'tunneling_ip': u'192.168.40.5', u'bridge_mappings': {}, u'l2_population':
>>>>> False, u'devices': 1}}}, u'time': u'2014-07-15T12:12:58.313995'},
>>>>> u'_context_tenant': None, u'_unique_id':
>>>>> u'7c9a4dfcd256494caf6e1327c8051e29', u'_context_is_admin': True,
>>>>> u'version': u'1.0', u'_context_timestamp': u'2014-07-15 12:01:28.190772',
>>>>> u'_context_tenant_name': None, u'_context_user': None, u'_context_user_id':
>>>>> None, u'method': u'report_state', u'_context_project_id': None} _safe_log
>>>>> /usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/common.py:280
>>>>> =======================================
>>>>>
>>>>> I suppose it's related... perhaps to these options in
>>>>> neutron.conf:
>>>>> ======================================
>>>>> notify_nova_on_port_status_changes = True
>>>>> notify_nova_on_port_data_changes = True
>>>>> nova_url = http://localhost:8774/v2
>>>>> nova_admin_tenant_name = services
>>>>> nova_admin_username = nova
>>>>> nova_admin_password = nova
>>>>> nova_admin_auth_url = http://localhost:35357/v2.0
>>>>> ======================================
>>>>>
>>>>> But I don't see anything wrong there...
>>>>>
>>>>> Thank you in advance !
>>>>>
>>>>> Regards,
>>>>>
>>>>>
>>>>>
>>>>> 2014-07-11 16:08 GMT+02:00 Vimal Kumar <vimal7370(a)gmail.com>:
>>>>>
>>>>> -----
>>>>>> File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line
>>>>>> 239, in authenticate\\n content_type="application/json")\\n\', u\' File
>>>>>> "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in
>>>>>> _cs_request\\n raise exceptions.Unauthorized(message=body)\\n\',
>>>>>> u\'Unauthorized: {"error": {"message": "The request you have made requires
>>>>>> authentication.", "code": 401, "title": "Unauthorized"}}\\n\'].\n']
>>>>>> -----
>>>>>>
>>>>>> Looks like HTTP connection to neutron server is resulting in 401
>>>>>> error.
>>>>>>
>>>>>> Try enabling debug mode for neutron server and then tail
>>>>>> /var/log/neutron/server.log , hopefully you should get more info.
>>>>>>
>>>>>>
>>>>>> On Fri, Jul 11, 2014 at 7:13 PM, Benoit ML <ben42ml(a)gmail.com> wrote:
>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> Ok, I see. Nova tells neutron/openvswitch to create the qbr bridge
>>>>>>> prior to the migration itself.
>>>>>>> I have already activated debug and verbose... but I'm really stuck and
>>>>>>> don't know how or where to look...
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> 2014-07-11 15:09 GMT+02:00 Miguel Angel <miguelangel(a)ajo.es>:
>>>>>>>
>>>>>>> Hi Benoit,
>>>>>>>>
>>>>>>>> A manual virsh migration should fail, because the
>>>>>>>> network ports are not migrated to the destination host.
>>>>>>>>
>>>>>>>> You must investigate the authentication problem itself,
>>>>>>>> and let nova handle all the underlying API calls that should
>>>>>>>> happen...
>>>>>>>>
>>>>>>>> Maybe it's worth setting debug=True in nova.conf.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> ---
>>>>>>>> irc: ajo / mangelajo
>>>>>>>> Miguel Angel Ajo Pelayo
>>>>>>>> +34 636 52 25 69
>>>>>>>> skype: ajoajoajo
>>>>>>>>
>>>>>>>>
>>>>>>>> 2014-07-11 14:41 GMT+02:00 Benoit ML <ben42ml(a)gmail.com>:
>>>>>>>>
>>>>>>>> Hello,
>>>>>>>>>
>>>>>>>>> cat /etc/redhat-release
>>>>>>>>> CentOS Linux release 7 (Rebuilt from: RHEL 7.0)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2014-07-11 13:40 GMT+02:00 Boris Derzhavets <
>>>>>>>>> bderzhavets(a)hotmail.com>:
>>>>>>>>>
>>>>>>>>> Could you please post /etc/redhat-release.
>>>>>>>>>>
>>>>>>>>>> Boris.
>>>>>>>>>>
>>>>>>>>>> ------------------------------
>>>>>>>>>> Date: Fri, 11 Jul 2014 11:57:12 +0200
>>>>>>>>>> From: ben42ml(a)gmail.com
>>>>>>>>>> To: rdo-list(a)redhat.com
>>>>>>>>>> Subject: [Rdo-list] Icehouse multi-node - Centos7 - live
>>>>>>>>>> migration failed because of "network qbr no such device"
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Hello,
>>>>>>>>>>
>>>>>>>>>> I'm working on a multi-node setup of openstack Icehouse using
>>>>>>>>>> centos7.
>>>>>>>>>> I have:
>>>>>>>>>> - one controller node with all the server services
>>>>>>>>>> - one network node with the openvswitch agent, l3-agent, and dhcp-agent
>>>>>>>>>> - two compute nodes with nova-compute and neutron-openvswitch
>>>>>>>>>> - one NFS storage node
>>>>>>>>>>
>>>>>>>>>> NetworkManager is removed on the compute nodes and the network node.
>>>>>>>>>>
>>>>>>>>>> My network is configured to use VXLAN. I can create VMs,
>>>>>>>>>> tenant networks, external networks, routers, assign floating IPs to VMs,
>>>>>>>>>> push SSH keys into VMs, create volumes from Glance images, etc. Everything
>>>>>>>>>> is connected and reachable. Pretty cool :)
>>>>>>>>>>
>>>>>>>>>> But when I try to migrate a VM, things go wrong... I have
>>>>>>>>>> configured nova, libvirtd, and qemu to do migration through libvirt-tcp.
>>>>>>>>>> I have created and exchanged SSH keys for the nova user on all nodes,
>>>>>>>>>> and verified the uid and gid of the nova user.
>>>>>>>>>>
>>>>>>>>>> The nova-compute log on the target compute node shows:
>>>>>>>>>> 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager
>>>>>>>>>> [instance: a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error:
>>>>>>>>>> Unauthorized {"error": {"message": "The request you have made
>>>>>>>>>> requires authentication.", "code": 401, "title": "Unauthorized"}}
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> After searching a lot in all the logs, I found that I can't even
>>>>>>>>>> migrate a VM between compute nodes with a simple virsh command:
>>>>>>>>>> virsh migrate instance-00000084 qemu+tcp://<dest>/system
>>>>>>>>>>
>>>>>>>>>> The error is :
>>>>>>>>>> error: Cannot get interface MTU on 'qbr3ca65809-05': No such
>>>>>>>>>> device
>>>>>>>>>>
>>>>>>>>>> When I look on the source hypervisor, the bridge "qbr3ca65809"
>>>>>>>>>> exists and has a network tap device. Moreover, if I manually create
>>>>>>>>>> qbr3ca65809 on the target hypervisor, virsh migrate succeeds!
>>>>>>>>>>
>>>>>>>>>> Can you help me, please?
>>>>>>>>>> What did I do wrong? Perhaps neutron must create the bridge
>>>>>>>>>> before migration but didn't because of a misconfiguration?
>>>>>>>>>>
>>>>>>>>>> Please ask for anything you need!
>>>>>>>>>>
>>>>>>>>>> Thank you in advance.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> The full nova-compute log attached.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> --
>>>>>>>>>> Benoit
>>>>>>>>>>
>>>>>>>>>> _______________________________________________ Rdo-list mailing
>>>>>>>>>> list Rdo-list(a)redhat.com
>>>>>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> --
>>>>>>>>> Benoit
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Rdo-list mailing list
>>>>>>>>> Rdo-list(a)redhat.com
>>>>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> --
>>>>>>> Benoit
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Rdo-list mailing list
>>>>>>> Rdo-list(a)redhat.com
>>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> --
>>>>> Benoit
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> --
>>>> Benoit
>>>>
>>>
>>>
>>>
>>> --
>>> --
>>> Benoit
>>>
>>
>>
>>
>> --
>> --
>> Benoit
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list(a)redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>>
--
--
Benoit
[Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device"
by Benoit ML
Hello,
I'm working on a multi-node setup of OpenStack Icehouse on CentOS 7.
I have:
- one controller node with all the server services
- one network node with the openvswitch agent, l3-agent, and dhcp-agent
- two compute nodes with nova-compute and neutron-openvswitch
- one NFS storage node
NetworkManager is removed on the compute nodes and the network node.
My network is configured to use VXLAN. I can create VMs, tenant
networks, external networks, routers, assign floating IPs to VMs, push
SSH keys into VMs, create volumes from Glance images, etc. Everything is
connected and reachable. Pretty cool :)
But when I try to migrate a VM, things go wrong... I have configured nova,
libvirtd, and qemu to do migration through libvirt-tcp.
I have created and exchanged SSH keys for the nova user on all nodes, and
verified the uid and gid of the nova user.
The nova-compute log on the target compute node shows:
2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance:
a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error:
Unauthorized {"error": {"message": "The request you have made requires
authentication.", "code": 401, "title": "Unauthorized"}}
After searching a lot in all the logs, I found that I can't even migrate
a VM between compute nodes with a simple virsh command:
virsh migrate instance-00000084 qemu+tcp://<dest>/system
The error is:
error: Cannot get interface MTU on 'qbr3ca65809-05': No such device
When I look on the source hypervisor, the bridge "qbr3ca65809" exists
and has a network tap device. Moreover, if I manually create qbr3ca65809
on the target hypervisor, virsh migrate succeeds!
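For background on why the names must match on both hosts: nova's hybrid VIF plugging derives the per-port bridge name as `qbr` plus a truncation of the neutron port UUID, so the same port always yields the same bridge name. A small sketch (the full UUID below is hypothetical, chosen to reproduce the bridge name from the error above):

```shell
# Derive the qbr bridge name: "qbr" + first 11 characters of the
# neutron port UUID (hypothetical UUID below).
port_id="3ca65809-05aa-4bbb-8ccc-123456789abc"
bridge="qbr$(printf '%.11s' "$port_id")"
echo "$bridge"
```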
Can you help me, please?
What did I do wrong? Perhaps neutron must create the bridge before
migration but didn't because of a misconfiguration?
Please ask for anything you need!
Thank you in advance.
The full nova-compute log is attached.
Regards,
--
--
Benoit
Re: [Rdo-list] Icehouse Neutron DB code bug still persists?
by Kodiak Firesmith
Ihar,
Apologies! I looked at this with fresh eyes this morning and realized
that while neutron-server was listening on 9696, I hadn't yet added a
rule for neutron-server to our enterprise iptables management module in
Puppet, so Neutron requests were timing out when a user attempted to log
into Horizon.
Everything works well now. I'll make sure to pay the help forward by
filing an OpenStack documentation bug that distills the missing steps
the RDO team helped me get through yesterday.
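For reference, the missing rule amounts to opening TCP 9696, the neutron API port; a hedged sketch of the raw iptables equivalent (the real change belongs in the site's Puppet firewall module, whose syntax will differ):

```shell
# Build the iptables rule for neutron-server's API port as a string so
# it can be reviewed; applying it requires root:  iptables $RULE
PORT=9696
RULE="-I INPUT -p tcp -m tcp --dport $PORT -j ACCEPT"
echo "iptables $RULE"
```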
Thanks so much!
- Kodiak
Date: Thu, 17 Jul 2014 10:55:08 +0200
From: Ihar Hrachyshka <ihrachys(a)redhat.com>
To: rdo-list(a)redhat.com
Subject: Re: [Rdo-list] Icehouse Neutron DB code bug still persists?
Message-ID: <53C78F6C.3070106(a)redhat.com>
Content-Type: text/plain; charset=ISO-8859-1
On 16/07/14 18:58, Kodiak Firesmith wrote:
> Of course setting up Neutron has taken Horizon offline:
>
> http://paste.openstack.org/show/86778/
>
Any interesting log messages for neutron service? Do basic neutron
requests like 'neutron net-list' work?
>
> - Kodiak
>
> On Wed, Jul 16, 2014 at 12:34 PM, Kodiak Firesmith
> <kfiresmith(a)gmail.com> wrote:
>> Further modifying /etc/neutron/neutron.conf as follows allowed
>> the neutron-db-manage goodness to happen:
>>
>> -service_plugins = router
>> +service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
>>
>> # neutron-db-manage --config-file /etc/neutron/neutron.conf
>> --config-file /etc/neutron/plugin.ini upgrade head
>> No handlers could be found for logger "neutron.common.legacy"
>> INFO  [alembic.migration] Context impl MySQLImpl.
>> INFO  [alembic.migration] Will assume non-transactional DDL.
>> INFO  [alembic.migration] Running upgrade None -> folsom
>> INFO  [alembic.migration] Running upgrade folsom -> 2c4af419145b
>> ...
>> INFO  [alembic.migration] Running upgrade 1341ed32cc1e -> grizzly
>> INFO  [alembic.migration] Running upgrade grizzly -> f489cf14a79c
>> INFO  [alembic.migration] Running upgrade f489cf14a79c -> 176a85fc7d79
>> ...
>> INFO  [alembic.migration] Running upgrade 49f5e553f61f -> 40b0aff0302e
>> INFO  [alembic.migration] Running upgrade 40b0aff0302e -> havana
>> INFO  [alembic.migration] Running upgrade havana -> e197124d4b9
>> ...
>> INFO  [alembic.migration] Running upgrade 538732fa21e1 -> 5ac1c354a051
>> INFO  [alembic.migration] Running upgrade 5ac1c354a051 -> icehouse
>>
>> I am now cautiously optimistic that I'm back on track and will
>> report back with success or failure. If it succeeds, I'll submit a
>> documentation bug to the docs.openstack people.
>>
>> Here's my tables now: http://paste.openstack.org/show/86776/
>>
>> Thanks a million!
>>
>> - Kodiak
>>
>> On Wed, Jul 16, 2014 at 11:15 AM, Kodiak Firesmith
>> <kfiresmith(a)gmail.com> wrote:
>>> Thanks again Kuba!
>>>
>>> So I think it's gotten farther. I replaced this line in
>>> /etc/neutron/neutron.conf:
>>>
>>> -core_plugin = ml2
>>> +core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
>>>
>>> Then I re-ran neutron-db-manage, as seen in the paste below. It's
>>> gotten past ml2 and is now erroring out on 'router':
>>>
>>> http://paste.openstack.org/show/86759/
>>>
>>>
>>> - Kodiak
>>>
>>> On Wed, Jul 16, 2014 at 11:01 AM, Jakub Libosvar
>>> <libosvar(a)redhat.com> wrote:
>>>> On 07/16/2014 04:57 PM, Kodiak Firesmith wrote:
>>>>> Hello Kuba, Thanks for the reply. I used the ml2 ini file
>>>>> as my core plugin per the docs and did what you mentioned.
>>>>> It resulted in a traceback unfortunately.
>>>>>
>>>>> Here is a specific accounting of what I did:
>>>>> http://paste.openstack.org/show/86756/
>>>>
>>>> Ah, this is because we don't load full path from entry_points
>>>> for plugins in neutron-db-manage (we didn't fix this because
>>>> this dependency is going to be removed soon).
>>>>
>>>> Can you please try to change core_plugin in neutron.conf to
>>>>
>>>> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
>>>>
>>>> and re-run neutron-db-manage.
>>>>
>>>> Thanks, Kuba
>>>>>
>>>>> So it looks like maybe there is an issue with the ml2
>>>>> plugin as the openstack docs cover it so far as how it
>>>>> works with the RDO packages.
>>>>>
>>>>> Another admin reports that stuff "just works" in RDO
>>>>> packstack - maybe there is some workaround in Packstack or
>>>>> maybe it uses another driver and not ML2?
>>>>>
>>>>> Thanks again, - Kodiak
>>>>>
>>>>> On Wed, Jul 16, 2014 at 8:54 AM, Jakub Libosvar
>>>>> <libosvar(a)redhat.com> wrote:
>>>>>> On 07/16/2014 02:25 PM, Kodiak Firesmith wrote:
>>>>>>> Hello, First go-round with Openstack and first post on
>>>>>>> the list so bear with me...
>>>>>>>
>>>>>>> I've been working through the manual installation of
>>>>>>> RDO using the docs.openstack installation guide.
>>>>>>> Everything went smoothly for the most part until
>>>>>>> Neutron. It appears I've been hit by the same bug(?)
>>>>>>> discussed here:
>>>>>>> http://www.marshut.com/ithyup/net-create-issue.html#ithzts,
>>>>>>> and here:
>>>>>>> https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html
>>>>>>>
>>>>>>>
>>>>>>> ...among other places.
>>>>>>>
>>>>>>> Upon first launch of the neutron-server daemon, this
>>>>>>> appears in the neutron-server log file:
>>>>>>> http://paste.openstack.org/show/86614/
>>>>>>>
>>>>>>> And once you go into the db you can see that a bunch of
>>>>>>> tables are not created that should be.
>>>>>>>
>>>>>>> As the first link alludes to, it looks like a MyISAM /
>>>>>>> InnoDB formatting mix-up but I'm no MySQL guy so I
>>>>>>> can't prove that.
>>>>>>>
>>>>>>> I would really like if someone on the list who is a bit
>>>>>>> more experienced with this stuff could please see if
>>>>>>> the suspicions raised in the links above are correct,
>>>>>>> and if so, could the RDO people please provide a
>>>>>>> workaround to get me back up and running with our test
>>>>>>> deployment?
>>>>>>>
>>>>>>> Thanks! - Kodiak
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Rdo-list mailing list Rdo-list(a)redhat.com
>>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>>>>
>>>>>> Hi Kodiak,
>>>>>>
>>>>>> I think there is a bug in the documentation: it is missing the
>>>>>> neutron-db-manage command that creates the schema for Neutron.
>>>>>> Can you please try to:
>>>>>> 1. stop neutron-server
>>>>>> 2. create a new database
>>>>>> 3. set the connection string in neutron.conf
>>>>>> 4. run neutron-db-manage --config-file
>>>>>> /etc/neutron/neutron.conf --config-file
>>>>>> <path_to_your_core_plugin_file.ini> upgrade head
>>>>>> 5. start neutron-server
>>>>>>
>>>>>> Kuba
>>>>
>
> _______________________________________________ Rdo-list mailing
> list Rdo-list(a)redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
[Rdo-list] Icehouse Neutron DB code bug still persists?
by Kodiak Firesmith
Hello,
First go-round with Openstack and first post on the list so bear with me...
I've been working through the manual installation of RDO using the
docs.openstack installation guide. Everything went smoothly for the
most part until Neutron. It appears I've been hit by the same bug(?)
discussed here:
http://www.marshut.com/ithyup/net-create-issue.html#ithzts, and here:
https://www.redhat.com/archives/rdo-list/2014-March/msg00005.html
...among other places.
Upon first launch of the neutron-server daemon, this appears in the
neutron-server log file: http://paste.openstack.org/show/86614/
And once you go into the db you can see that a bunch of tables are not
created that should be.
As the first link alludes to, it looks like a MyISAM / InnoDB
formatting mix-up but I'm no MySQL guy so I can't prove that.
I would really like if someone on the list who is a bit more
experienced with this stuff could please see if the suspicions raised
in the links above are correct, and if so, could the RDO people please
provide a workaround to get me back up and running with our test
deployment?
Thanks!
- Kodiak
Re: [Rdo-list] cinder doesn't find volgroup/cinder-volumes
by Nathan M.
>
> Are you able to reproduce this issue (assuming you
> can consistently) on current latest IceHouse RDO packages? You haven't
> noted the versions you're using.
rpm -qa | grep icehouse
rdo-release-icehouse-4.noarch
> Maybe you don't really have enough free space there? I don't have a
> Cinder setup to do a sanity check, you might want to ensure if you have
> your Cinder filter scheduler configured correctly.
I created another vg and gave it 30 gigs just in case:
[root@node1]/etc/cinder# (openstack_admin)] vgdisplay cinder-volumes
--- Volume group ---
VG Name cinder-volumes
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 30.00 GiB
PE Size 4.00 MiB
Total PE              7679
Alloc PE / Size       0 / 0
Free PE / Size 7679 / 30.00 GiB
VG UUID huXkqD-3JIm-Fasr-Gkwc-EPVP-n15c-KIvBRc
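For scripting the same sanity check, the free capacity can be parsed out of captured `vgdisplay` output; a small sketch using the figures above (a live run would pipe `vgdisplay cinder-volumes` directly):

```shell
# Pull the human-readable free size from a vgdisplay "Free PE / Size" line.
vg_line='  Free  PE / Size       7679 / 30.00 GiB'
free_size=$(printf '%s\n' "$vg_line" | awk -F/ '{gsub(/^ +| +$/, "", $3); print $3}')
echo "$free_size"
```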
> > 2014-07-11 12:34:43.325 8665 ERROR cinder.scheduler.flows.create_volume
> > [req-2acb85f1-5b7b-4b63-bf95-9037338cb52b 555e3e826c9f445c9975d0e1c6e00fc6
> > ff6a2b534e984db58313ae194b2d908c - - -] Failed to schedule_create_volume:
> > No valid host was found.
> > 2014-07-11 12:35:16.253 8665 WARNING cinder.context [-] Arguments dropped
> > when creating context: {'user': None, 'tenant': None, 'user_identity':
> > u'- - - - -'}
> >
>
> --
> /kashyap
>
Re: [Rdo-list] Glance/Keystone problem
by Adam Huffman
Hi Flavio,
Thanks for looking. In the end, the cause here was an omission in the
api-paste file for Keystone, now fixed.
Best Wishes,
Adam
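For anyone hitting similar symptoms, the certificate checks Flavio suggests below can be partially scripted; a minimal sketch (the helper name and example path are mine; it checks only presence and readability, not the PEM contents):

```shell
# Verify that a configured certificate path exists and is readable.
check_cert() {
    if [ ! -f "$1" ]; then
        echo "missing: $1"
        return 1
    fi
    if [ ! -r "$1" ]; then
        echo "unreadable: $1"
        return 1
    fi
    echo "ok: $1"
}

# Example with a placeholder path (adjust to the certfile option in use):
check_cert /etc/keystone/ssl/certs/signing_cert.pem || true
```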
On Wed, Jul 16, 2014 at 5:35 PM, Adam Huffman <adam.huffman(a)gmail.com> wrote:
> Hi Flavio,
>
> Thanks for looking. In the end, the cause here was an omission in the
> api-paste file for Keystone, now fixed.
>
> Best Wishes,
> Adam
>
> On Wed, Jul 16, 2014 at 9:11 AM, Flavio Percoco <flavio(a)redhat.com> wrote:
>> On 07/15/2014 03:32 PM, Adam Huffman wrote:
>>> I've altered Keystone on my Icehouse cloud to use Apache/mod_ssl. The
>>> Keystone and Nova clients are working (more or less) but I'm having
>>> trouble with Glance.
>>
>> Hi Adam,
>>
>> We'd need your config files to have a better idea of what the issue
>> could be. Based on the logs you just sent, keystone's middleware can't
>> find/load the certification file:
>>
>> "Unable to load certificate. Ensure your system is configured properly"
>>
>> Some things you could check:
>>
>> 1. Is the file path in your config file correct?
>> 2. Is the config option name correct?
>> 3. Is the file readable?
>>
>> Hope the above helps,
>> Flavio
>>
>>
>>>
>>> [snip: original report and full api.log excerpt — see Adam's message below]
>>>
>>
>>
>> --
>> @flaper87
>> Flavio Percoco
>>
>> _______________________________________________
>> Rdo-list mailing list
>> Rdo-list(a)redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
10 years, 4 months
[Rdo-list] Glance/Keystone problem
by Adam Huffman
I've altered Keystone on my Icehouse cloud to use Apache/mod_ssl. The
Keystone and Nova clients are working (more or less) but I'm having
trouble with Glance.
Here's an example of the sort of error I'm seeing from the Glance api.log:
2014-07-15 14:24:00.551 24063 DEBUG
glance.api.middleware.version_negotiation [-] Determining version of
request: GET /v1/shared-images/e35356df747b4c5aa663fae2897facba
Accept: process_request
/usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:44
2014-07-15 14:24:00.552 24063 DEBUG
glance.api.middleware.version_negotiation [-] Using url versioning
process_request
/usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:57
2014-07-15 14:24:00.552 24063 DEBUG
glance.api.middleware.version_negotiation [-] Matched version: v1
process_request
/usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:69
2014-07-15 14:24:00.552 24063 DEBUG
glance.api.middleware.version_negotiation [-] new path
/v1/shared-images/e35356df747b4c5aa663fae2897facba process_request
/usr/lib/python2.6/site-packages/glance/api/middleware/version_negotiation.py:70
2014-07-15 14:24:00.553 24063 DEBUG
keystoneclient.middleware.auth_token [-] Authenticating user token
__call__ /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:666
2014-07-15 14:24:00.553 24063 DEBUG
keystoneclient.middleware.auth_token [-] Removing headers from request
environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
_remove_auth_headers
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:725
2014-07-15 14:24:00.591 24063 INFO urllib3.connectionpool [-] Starting
new HTTPS connection (1): <hostname>
2014-07-15 14:24:01.921 24063 DEBUG urllib3.connectionpool [-] "POST
/v2.0/tokens HTTP/1.1" 200 7003 _make_request
/usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-07-15 14:24:01.931 24063 INFO urllib3.connectionpool [-] Starting
new HTTPS connection (1): <hostname>
2014-07-15 14:24:03.243 24063 DEBUG urllib3.connectionpool [-] "GET
/v2.0/tokens/revoked HTTP/1.1" 200 682 _make_request
/usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-07-15 14:24:03.252 24063 INFO urllib3.connectionpool [-] Starting
new HTTPS connection (1): <hostname>
2014-07-15 14:24:04.529 24063 DEBUG urllib3.connectionpool [-] "GET /
HTTP/1.1" 300 384 _make_request
/usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-07-15 14:24:04.530 24063 DEBUG
keystoneclient.middleware.auth_token [-] Server reports support for
api versions: v3.0 _get_supported_versions
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:656
2014-07-15 14:24:04.531 24063 INFO
keystoneclient.middleware.auth_token [-] Auth Token confirmed use of
v3.0 apis
2014-07-15 14:24:04.531 24063 INFO urllib3.connectionpool [-] Starting
new HTTPS connection (1): <hostname>
2014-07-15 14:24:04.667 24063 DEBUG urllib3.connectionpool [-] "GET
/v3/OS-SIMPLE-CERT/certificates HTTP/1.1" 404 93 _make_request
/usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-07-15 14:24:04.669 24063 DEBUG
keystoneclient.middleware.auth_token [-] Token validation failure.
_validate_user_token
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:943
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token Traceback (most recent call
last):
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token File
"/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py",
line 930, in _validate_user_token
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token verified =
self.verify_signed_token(user_token, token_ids)
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token File
"/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py",
line 1347, in verify_signed_token
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token if
self.is_signed_token_revoked(token_ids):
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token File
"/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py",
line 1299, in is_signed_token_revoked
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token if
self._is_token_id_in_revoked_list(token_id):
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token File
"/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py",
line 1306, in _is_token_id_in_revoked_list
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token revocation_list =
self.token_revocation_list
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token File
"/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py",
line 1413, in token_revocation_list
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token self.token_revocation_list =
self.fetch_revocation_list()
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token File
"/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py",
line 1459, in fetch_revocation_list
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token return
self.cms_verify(data['signed'])
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token File
"/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py",
line 1333, in cms_verify
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token self.fetch_signing_cert()
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token File
"/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py",
line 1477, in fetch_signing_cert
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token
self._fetch_cert_file(self.signing_cert_file_name, 'signing')
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token File
"/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py",
line 1473, in _fetch_cert_file
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token raise
exceptions.CertificateConfigError(response.text)
2014-07-15 14:24:04.669 24063 TRACE
keystoneclient.middleware.auth_token CertificateConfigError: Unable to
load certificate. Ensure your system is configured properly.
2014-07-15 14:24:04.669 24063 TRACE keystoneclient.middleware.auth_token
2014-07-15 14:24:04.671 24063 DEBUG
keystoneclient.middleware.auth_token [-] Marking token as unauthorized
in cache _cache_store_invalid
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1239
2014-07-15 14:24:04.672 24063 WARNING
keystoneclient.middleware.auth_token [-] Authorization failed for
token
2014-07-15 14:24:04.672 24063 INFO
keystoneclient.middleware.auth_token [-] Invalid user token -
deferring reject downstream
2014-07-15 14:24:04.674 24063 INFO glance.wsgi.server [-] <IP address>
- - [15/Jul/2014 14:24:04] "GET
/v1/shared-images/e35356df747b4c5aa663fae2897facba HTTP/1.1" 401 381
4.124231
There is a bug report about a race condition involving Cinder, but
that was supposed to have been fixed.
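
Reading the traceback, the failure chain ends in fetch_signing_cert right after the GET on /v3/OS-SIMPLE-CERT/certificates returned 404 — i.e. Keystone itself is not serving the signing certificate. One plausible cause (an assumption, not confirmed here) is that the Apache/mod_ssl conversion ended up with a v3 paste pipeline missing the extension that serves that URL. A hedged sketch to narrow it down, with `<keystone-host>` as a placeholder for the SSL'd Keystone endpoint (adjust the port to your deployment):

```shell
# Reproduce the exact request auth_token fails on; a 404 here confirms the
# problem is on the Keystone side, not in Glance's config.
curl -ks https://<keystone-host>:5000/v3/OS-SIMPLE-CERT/certificates

# In Icehouse this URL is served by the simple_cert_extension paste filter;
# a customized keystone-paste.ini that dropped it from the v3 pipeline 404s.
grep -n simple_cert_extension /etc/keystone/keystone-paste.ini
```

If the filter is missing from the `[pipeline:api_v3]` line, re-adding it (per the stock Icehouse keystone-paste.ini) and restarting Apache would be the first thing to try.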
Any suggestions appreciated.
Best Wishes,
Adam
10 years, 4 months