Hello,
Everything works now. I replaced ml2 with openvswitch for the core plugin:
#core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
core_plugin = openvswitch
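(For context, the openvswitch value is a short plugin alias; I believe it resolves to the class below in Icehouse, so treat this as an assumption:)
==============================================================================
# /etc/neutron/neutron.conf, [DEFAULT] section -- equivalent long form (assumed):
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
==============================================================================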
Regards,
2014-07-16 17:28 GMT+02:00 Benoit ML <ben42ml(a)gmail.com>:
Hello,
Another mail about the problem... Well, I have enabled debug = True in
keystone.conf.
And after a nova migrate <VM>, nova show <VM> gives:
==============================================================================
| fault | {"message": "Remote error: Unauthorized {\"error\": {\"message\": \"User 0b45ccc267e04b59911e88381bb450c0 is unauthorized for tenant services\", \"code\": 401, \"title\": \"Unauthorized\"}} |
==============================================================================
So the user with id 0b45ccc267e04b59911e88381bb450c0 is neutron:
==============================================================================
keystone user-list
| 0b45ccc267e04b59911e88381bb450c0 | neutron | True | |
==============================================================================
And the role seems good; re-adding it returns a 409 conflict, which confirms the grant already exists:
==============================================================================
keystone user-role-add --user=neutron --tenant=services --role=admin
Conflict occurred attempting to store role grant. User
0b45ccc267e04b59911e88381bb450c0 already has role
734c2fb6fb444792b5ede1fa1e17fb7e in tenant dea82f7937064b6da1c370280d8bfdad
(HTTP 409)
keystone user-role-list --user neutron --tenant services
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                | name  |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 734c2fb6fb444792b5ede1fa1e17fb7e | admin | 0b45ccc267e04b59911e88381bb450c0 | dea82f7937064b6da1c370280d8bfdad |
+----------------------------------+-------+----------------------------------+----------------------------------+
keystone tenant-list
+----------------------------------+----------+---------+
| id | name | enabled |
+----------------------------------+----------+---------+
| e250f7573010415da6f191e0b53faae5 | admin | True |
| fa30c6bdd56e45dea48dfbe9c3ee8782 | exploit | True |
| dea82f7937064b6da1c370280d8bfdad | services | True |
+----------------------------------+----------+---------+
==============================================================================
I really can't see where my mistake is... Can you help me, please?
Thank you in advance !
Regards,
2014-07-15 15:13 GMT+02:00 Benoit ML <ben42ml(a)gmail.com>:
Hello again,
>
> OK, on the controller node I modified the neutron server configuration with
> nova_admin_tenant_id = f23ed5be5f534fdba31d23f60621347d
>
> where that id is the "services" tenant in keystone, and now it works with
> "vif_plugging_is_fatal = True". Good.
>
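> (For reference, a quick way to look up that tenant id with the keystone CLI:)
> ======================================
> keystone tenant-get services
> ======================================
>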
> By the way, the migration still doesn't work...
>
> 2014-07-15 14:20 GMT+02:00 Benoit ML <ben42ml(a)gmail.com>:
>
> Hello,
>>
>> Thank you for taking the time!
>>
>> Well, on the compute node, when I activate "vif_plugging_is_fatal =
>> True", the VM creation gets stuck in the spawning state, and in the
>> neutron server log I have:
>>
>> =======================================
>> 2014-07-15 14:12:52.351 18448 DEBUG neutron.notifiers.nova [-] Sending events: [{'status': 'completed', 'tag': u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}] send_events /usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:218
>> 2014-07-15 14:12:52.354 18448 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): localhost
>> 2014-07-15 14:12:52.360 18448 DEBUG urllib3.connectionpool [-] "POST /v2/5c9c186a909e499e9da0dd5cf2c403e0/os-server-external-events HTTP/1.1" 401 23 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295
>> 2014-07-15 14:12:52.362 18448 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): localhost
>> 2014-07-15 14:12:52.452 18448 DEBUG urllib3.connectionpool [-] "POST /v2.0/tokens HTTP/1.1" 401 114 _make_request /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:295
>> 2014-07-15 14:12:52.453 18448 ERROR neutron.notifiers.nova [-] Failed to notify nova on events: [{'status': 'completed', 'tag': u'ba56921c-4628-4fcc-9dc5-2b324cd91faf', 'name': 'network-vif-plugged', 'server_uuid': u'f8554441-565c-49ec-bc88-3cbc628b0579'}]
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Traceback (most recent call last):
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova   File "/usr/lib/python2.7/site-packages/neutron/notifiers/nova.py", line 221, in send_events
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova     batched_events)
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova   File "/usr/lib/python2.7/site-packages/novaclient/v1_1/contrib/server_external_events.py", line 39, in create
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova     return_raw=True)
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova   File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 152, in _create
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova     _resp, body = self.api.client.post(url, body=body)
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 312, in post
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova     return self._cs_request(url, 'POST', **kwargs)
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova   File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 301, in _cs_request
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova     raise e
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova Unauthorized: Unauthorized (HTTP 401)
>> 2014-07-15 14:12:52.453 18448 TRACE neutron.notifiers.nova
>> 2014-07-15 14:12:58.321 18448 DEBUG neutron.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-9bf35c42-3477-4ed3-8092-af729c21198c', u'_context_read_deleted': u'no', u'_context_user_name': None, u'_context_project_name': None, u'namespace': None, u'_context_tenant_id': None, u'args': {u'agent_state': {u'agent_state': {u'topic': u'N/A', u'binary': u'neutron-openvswitch-agent', u'host': u'pvidgsh006.pvi', u'agent_type': u'Open vSwitch agent', u'configurations': {u'tunnel_types': [u'vxlan'], u'tunneling_ip': u'192.168.40.5', u'bridge_mappings': {}, u'l2_population': False, u'devices': 1}}}, u'time': u'2014-07-15T12:12:58.313995'}, u'_context_tenant': None, u'_unique_id': u'7c9a4dfcd256494caf6e1327c8051e29', u'_context_is_admin': True, u'version': u'1.0', u'_context_timestamp': u'2014-07-15 12:01:28.190772', u'_context_tenant_name': None, u'_context_user': None, u'_context_user_id': None, u'method': u'report_state', u'_context_project_id': None} _safe_log /usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/common.py:280
>> =======================================
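>>
>> (For context, these are the compute-node nova.conf lines in play -- a minimal
>> sketch; the timeout value is the default as I understand it, so treat it as
>> an assumption:)
>> ======================================
>> [DEFAULT]
>> # abort the boot if neutron never sends the network-vif-plugged event
>> vif_plugging_is_fatal = True
>> # seconds to wait for that event (assumed default)
>> vif_plugging_timeout = 300
>> ======================================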
>>
>> Well, I suppose it's related... perhaps to these options in
>> neutron.conf:
>> ======================================
>> notify_nova_on_port_status_changes = True
>> notify_nova_on_port_data_changes = True
>> nova_url = http://localhost:8774/v2
>> nova_admin_tenant_name = services
>> nova_admin_username = nova
>> nova_admin_password = nova
>> nova_admin_auth_url = http://localhost:35357/v2.0
>> ======================================
>>
>> But still, I don't see anything wrong...
>>
>> Thank you in advance !
>>
>> Regards,
>>
>>
>>
>> 2014-07-11 16:08 GMT+02:00 Vimal Kumar <vimal7370(a)gmail.com>:
>>
>>> -----
>>> File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 239, in authenticate\\n    content_type="application/json")\\n\', u\' File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 163, in _cs_request\\n    raise exceptions.Unauthorized(message=body)\\n\', u\'Unauthorized: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}\\n\'].\n']
>>> -----
>>>
>>> Looks like the HTTP connection to the neutron server is resulting in a 401 error.
>>>
>>> Try enabling debug mode for the neutron server and then tail
>>> /var/log/neutron/server.log; hopefully you will get more info.
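>>>
>>> (A rough sketch, assuming the RDO service name and the openstack-config helper:)
>>> -----
>>> openstack-config --set /etc/neutron/neutron.conf DEFAULT debug True
>>> systemctl restart neutron-server
>>> tail -f /var/log/neutron/server.log
>>> -----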
>>>
>>>
>>> On Fri, Jul 11, 2014 at 7:13 PM, Benoit ML <ben42ml(a)gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> OK, I see. Nova tells neutron/openvswitch to create the qbr bridge
>>>> prior to the migration itself.
>>>> I've already activated debug and verbose... But I'm really stuck; I
>>>> don't know how or where to look...
>>>>
>>>> Regards,
>>>>
>>>> 2014-07-11 15:09 GMT+02:00 Miguel Angel <miguelangel(a)ajo.es>:
>>>>
>>>> Hi Benoit,
>>>>>
>>>>> A manual virsh migration should fail, because the
>>>>> network ports are not migrated to the destination host.
>>>>>
>>>>> You must investigate the authentication problem itself,
>>>>> and let nova handle all the underlying API calls that should
>>>>> happen...
>>>>>
>>>>> Maybe it's worth setting debug=True in nova.conf. (Sketch below.)
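>>>>>
>>>>> (A minimal sketch, assuming the RDO package layout on CentOS 7:)
>>>>> ======================================
>>>>> openstack-config --set /etc/nova/nova.conf DEFAULT debug True
>>>>> systemctl restart openstack-nova-compute
>>>>> ======================================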
>>>>>
>>>>>
>>>>>
>>>>> ---
>>>>> irc: ajo / mangelajo
>>>>> Miguel Angel Ajo Pelayo
>>>>> +34 636 52 25 69
>>>>> skype: ajoajoajo
>>>>>
>>>>>
>>>>> 2014-07-11 14:41 GMT+02:00 Benoit ML <ben42ml(a)gmail.com>:
>>>>>
>>>>> Hello,
>>>>>>
>>>>>> cat /etc/redhat-release
>>>>>> CentOS Linux release 7 (Rebuilt from: RHEL 7.0)
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>>
>>>>>> 2014-07-11 13:40 GMT+02:00 Boris Derzhavets <bderzhavets(a)hotmail.com>:
>>>>>>
>>>>>>> Could you please post the contents of /etc/redhat-release?
>>>>>>>
>>>>>>> Boris.
>>>>>>>
>>>>>>> ------------------------------
>>>>>>> Date: Fri, 11 Jul 2014 11:57:12 +0200
>>>>>>> From: ben42ml(a)gmail.com
>>>>>>> To: rdo-list(a)redhat.com
>>>>>>> Subject: [Rdo-list] Icehouse multi-node - Centos7 - live migration failed because of "network qbr no such device"
>>>>>>>
>>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> I'm working on a multi-node setup of OpenStack Icehouse using CentOS 7.
>>>>>>> Well, I have:
>>>>>>> - one controller node with all the server services
>>>>>>> - one network node with the openvswitch agent, l3-agent, and dhcp-agent
>>>>>>> - two compute nodes with nova-compute and the neutron-openvswitch agent
>>>>>>> - one NFS storage node
>>>>>>>
>>>>>>> NetworkManager is removed on the compute nodes and on the network node.
>>>>>>>
>>>>>>> My network is configured to use VXLAN. I can create VMs, tenant
>>>>>>> networks, external networks, and routers, assign floating IPs to VMs,
>>>>>>> push SSH keys into VMs, create volumes from glance images, etc...
>>>>>>> Everything is connected and reachable. Pretty cool :)
>>>>>>>
>>>>>>> But when I try to migrate a VM, things go wrong... I have configured
>>>>>>> nova, libvirtd and qemu to do migration through libvirt-tcp (sketch
>>>>>>> below). I have created and exchanged SSH keys for the nova user on
>>>>>>> all nodes. I have verified the user id and group id of nova.
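>>>>>>>
>>>>>>> (What "migration through libvirt-tcp" means concretely -- a rough
>>>>>>> sketch, assuming stock RDO file locations; the exact lines are from
>>>>>>> memory, so treat them as assumptions:)
>>>>>>> ======================================
>>>>>>> # /etc/libvirt/libvirtd.conf
>>>>>>> listen_tls = 0
>>>>>>> listen_tcp = 1
>>>>>>> auth_tcp = "none"
>>>>>>> # /etc/sysconfig/libvirtd
>>>>>>> LIBVIRTD_ARGS="--listen"
>>>>>>> # /etc/nova/nova.conf
>>>>>>> live_migration_uri = qemu+tcp://%s/system
>>>>>>> ======================================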
>>>>>>>
>>>>>>> Well, the nova-compute log on the target compute node shows:
>>>>>>> 2014-07-11 11:45:02.749 6984 TRACE nova.compute.manager [instance: a5326fe1-4faa-4347-ba29-159fce26a85c] RemoteError: Remote error: Unauthorized {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
>>>>>>>
>>>>>>>
>>>>>>> So, after searching a lot through all the logs, I have found that I
>>>>>>> can't simply migrate a VM between compute nodes with a plain virsh:
>>>>>>> virsh migrate instance-00000084 qemu+tcp://<dest>/system
>>>>>>>
>>>>>>> The error is:
>>>>>>> error: Cannot get interface MTU on 'qbr3ca65809-05': No such device
>>>>>>>
>>>>>>> Well, when I look on the source hypervisor, the bridge "qbr3ca65809"
>>>>>>> exists and has a network tap device. Moreover, when I manually create
>>>>>>> the bridge on the target hypervisor, virsh migrate succeeds! (Sketch
>>>>>>> below.)
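>>>>>>>
>>>>>>> (Roughly the manual workaround, using the bridge name from the error above:)
>>>>>>> ======================================
>>>>>>> # on the source hypervisor: the bridge exists
>>>>>>> ip link show qbr3ca65809-05
>>>>>>> # on the target hypervisor: pre-create it, then virsh migrate succeeds
>>>>>>> brctl addbr qbr3ca65809-05
>>>>>>> ======================================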
>>>>>>>
>>>>>>> Can you help me, please?
>>>>>>> What am I doing wrong? Perhaps neutron must create the bridge before
>>>>>>> the migration, but didn't because of a misconfiguration?
>>>>>>>
>>>>>>> Please ask for anything you need!
>>>>>>>
>>>>>>> Thank you in advance.
>>>>>>>
>>>>>>>
>>>>>>> The full nova-compute log is attached.
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> --
>>>>>>> --
>>>>>>> Benoit
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Rdo-list mailing list
>>>>>>> Rdo-list(a)redhat.com
>>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> --
>>>>>> Benoit
>>>>>>
>>>>>> _______________________________________________
>>>>>> Rdo-list mailing list
>>>>>> Rdo-list(a)redhat.com
>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> --
>>>> Benoit
>>>>
>>>> _______________________________________________
>>>> Rdo-list mailing list
>>>> Rdo-list(a)redhat.com
>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>
>>>>
>>>
>>
>>
>> --
>> --
>> Benoit
>>
>
>
>
> --
> --
> Benoit
>
--
--
Benoit