[Rdo-list] Openstack Juno Live Migration --block-migrate failed "ValueError: A NetworkModel is required here"
Gabriele Guaglianone
gabriele.guaglianone at gmail.com
Thu Jan 14 15:35:49 UTC 2016
Rookie error ... I had changed the wrong section:
[api_database]
instead of
[database]
SOLVED...
Many thanks.
Gabriele
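For anyone who lands on this thread later: the connection string that "nova-manage db sync" uses is read from the [database] section of /etc/nova/nova.conf. A minimal sketch of the working fragment, using the host and password quoted elsewhere in this thread (substitute your own):

```ini
# /etc/nova/nova.conf -- illustrative fragment; the host "controller" and
# the password "r00tme" are the values quoted later in this thread.
[database]
connection = mysql://nova:r00tme@controller/nova
```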
2016-01-14 14:41 GMT+00:00 Gabriele Guaglianone <
gabriele.guaglianone at gmail.com>:
> Hi Ignacio,
> thank you so much, here is my hosts file:
>
> cat /etc/hosts
> 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
> ### RDO ip
> 10.20.15.11 controller
> 10.20.15.12 compute0
> 10.20.15.13 compute1
> 10.20.15.14 compute2
>
> and
> # ping -c 1 controller
> PING controller (10.20.15.11) 56(84) bytes of data.
> 64 bytes from controller (10.20.15.11): icmp_seq=1 ttl=64 time=0.043 ms
>
> Nova restarted but I'm getting the same error when I run:
>
> [root@controller nova]# su -s /bin/sh -c "nova-manage db sync" nova
> Command failed, please check log for more info
> [root@controller nova]# more /var/log/nova/nova-manage.log
> 2016-01-14 14:40:31.698 5715 CRITICAL nova [-] OperationalError:
> (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'nova'@'localhost'
> (using password: YES)")
> 2016-01-14 14:40:31.698 5715 ERROR nova Traceback (most recent call last):
> 2016-01-14 14:40:31.698 5715 ERROR nova File "/usr/bin/nova-manage",
> line 10, in <module>
> 2016-01-14 14:40:31.698 5715 ERROR nova sys.exit(main())
> 2016-01-14 14:40:31.698 5715 ERROR nova File
> "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1443, in main
> 2016-01-14 14:40:31.698 5715 ERROR nova ret = fn(*fn_args, **fn_kwargs)
>
>
> 2016-01-14 14:02 GMT+00:00 Ignacio Bravo <ibravo at ltgfederal.com>:
>
>> Two things that I would test:
>> Check the /etc/hosts file to see that you have one entry for controller
>> with the node IP; you can also ping controller to see if it resolves
>> correctly.
>> Restart the nova services to ensure they are reading the latest
>> configuration file.
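The first check above can also be sketched in a few lines of Python; the hostname "controller" is the one used in this thread, and this only verifies name resolution, not that nova actually reaches the host:

```python
import socket

def resolves(hostname):
    """Return the IPv4 address the local resolver gives for hostname, or None."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# "controller" is the hostname from this thread; on the poster's nodes
# /etc/hosts should make this resolve to 10.20.15.11.
print(resolves("controller"))
```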
>>
>> -
>> Ignacio Bravo
>> LTG Federal
>>
>>
>> On Jan 14, 2016, at 8:20 AM, Gabriele Guaglianone <
>> gabriele.guaglianone at gmail.com> wrote:
>>
>> Hi all,
>> I'm trying to populate the compute databases on the controller node, but
>> I'm getting this error. I can't figure out why, because I am able to log in:
>>
>> [root@controller nova]# su -s /bin/sh -c "nova-manage db sync" nova
>> Command failed, please check log for more info
>> [root@controller nova]# more /var/log/nova/nova-manage.log
>> 2016-01-14 13:11:15.269 4286 CRITICAL nova [-] OperationalError:
>> (_mysql_exceptions.OperationalError) (1045, "Access denied for user
>> 'nova'@'localhost' (using password: YES)")
>> 2016-01-14 13:11:15.269 4286 ERROR nova Traceback (most recent call
>> last):
>> 2016-01-14 13:11:15.269 4286 ERROR nova File "/usr/bin/nova-manage",
>> line 10, in <module>
>> 2016-01-14 13:11:15.269 4286 ERROR nova sys.exit(main())
>>
>> but :
>>
>> [root@controller nova]# mysql -u nova -p
>> Enter password: r00tme
>> Welcome to the MariaDB monitor. Commands end with ; or \g.
>> Your MariaDB connection id is 12
>> Server version: 5.5.44-MariaDB MariaDB Server
>>
>> Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
>>
>> Type 'help;' or '\h' for help. Type '\c' to clear the current input
>> statement.
>>
>> MariaDB [(none)]>
>>
>>
>> connection string in nova.conf file is
>>
>> connection=mysql://nova:r00tme@controller/nova
>>
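One thing worth noting about errors like the one above: MySQL/MariaDB treats 'nova'@'localhost' and 'nova'@'controller' as distinct accounts, so a successful interactive login does not prove that the grant nova-manage needs exists. A sketch of what to check from the MariaDB prompt (the account name and database mirror this thread; the commented GRANT lines are illustrative, not taken from the poster's actual setup):

```sql
-- List the nova accounts the server knows about.
SELECT User, Host FROM mysql.user WHERE User = 'nova';

-- Inspect what each account is allowed to do.
SHOW GRANTS FOR 'nova'@'localhost';
SHOW GRANTS FOR 'nova'@'%';

-- Illustrative grants for the nova database (adjust the password):
-- GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'r00tme';
-- GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'r00tme';
```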
>> Any suggestions?
>>
>> Cheers
>>
>> Gabriele
>>
>>
>>
>> 2016-01-14 11:20 GMT+00:00 Soputhi Sea <puthi at live.com>:
>>
>>> Hi,
>>>
>>> I've been trying to get live migration to work on OpenStack Juno, but I
>>> keep getting the same error as below.
>>> I wonder if anybody can point me in the right direction on where to
>>> debug the problem, or, if anybody has come across this problem before,
>>> please share some ideas.
>>> I've been googling around for a few days already, but so far I haven't
>>> had any luck.
>>>
>>> Note: the same nova, neutron and libvirt configurations work on Icehouse
>>> and Liberty on a different cluster, as I have tested.
>>>
>>> Thanks
>>> Puthi
>>>
>>> Nova Version tested: 2014.2.3 and 2014.2.4
>>> Nova Error Log
>>> ============
>>> 2016-01-14 17:34:08.818 6173 ERROR oslo.messaging.rpc.dispatcher
>>> [req-54581412-a194-40d5-9208-b1bf6d04f8d8 ] Exception during message
>>> handling: A NetworkModel is required here
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> Traceback (most recent call last):
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line
>>> 134, in _dispatch_and_reply
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> incoming.message))
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line
>>> 177, in _dispatch
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> return self._do_dispatch(endpoint, method, ctxt, args)
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line
>>> 123, in _do_dispatch
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> result = getattr(endpoint, method)(ctxt, **new_args)
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> payload)
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line
>>> 82, in __exit__
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> six.reraise(self.type_, self.value, self.tb)
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> return f(self, context, *args, **kw)
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 335, in
>>> decorated_function
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> kwargs['instance'], e, sys.exc_info())
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line
>>> 82, in __exit__
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> six.reraise(self.type_, self.value, self.tb)
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 323, in
>>> decorated_function
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> return function(self, context, *args, **kwargs)
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4978, in
>>> live_migration
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> expected_attrs=expected)
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 300, in
>>> _from_db_object
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> db_inst['info_cache'])
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/objects/instance_info_cache.py",
>>> line 45, in _from_db_object
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> info_cache[field] = db_obj[field]
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 474, in
>>> __setitem__
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> setattr(self, name, value)
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 75, in setter
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> field_value = field.coerce(self, name, value)
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 189, in
>>> coerce
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> return self._type.coerce(obj, attr, value)
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher File
>>> "/usr/lib/python2.7/site-packages/nova/objects/fields.py", line 516, in
>>> coerce
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> raise ValueError(_('A NetworkModel is required here'))
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>> ValueError: A NetworkModel is required here
>>> 2016-01-14 17:34:08.818 6173 TRACE oslo.messaging.rpc.dispatcher
>>>
>>>
>>> Nova Config
>>> ===========================
>>> [DEFAULT]
>>> rpc_backend = qpid
>>> qpid_hostname = management-host
>>> auth_strategy = keystone
>>> my_ip = 10.201.171.244
>>> vnc_enabled = True
>>> novncproxy_host=0.0.0.0
>>> novncproxy_port=6080
>>> novncproxy_base_url=http://management-host:6080/vnc_auto.html
>>> network_api_class = nova.network.neutronv2.api.API
>>> linuxnet_interface_driver =
>>> nova.network.linux_net.LinuxOVSInterfaceDriver
>>> firewall_driver = nova.virt.firewall.NoopFirewallDriver
>>> vncserver_listen=0.0.0.0
>>> vncserver_proxyclient_address=10.201.171.244
>>> [baremetal]
>>> [cells]
>>> [cinder]
>>> [conductor]
>>> [database]
>>> connection = mysql://nova:novadbpassword@db-host/nova
>>> [ephemeral_storage_encryption]
>>> [glance]
>>> host = glance-host
>>> port = 9292
>>> api_servers=$host:$port
>>> [hyperv]
>>> [image_file_url]
>>> [ironic]
>>> [keymgr]
>>> [keystone_authtoken]
>>> auth_uri = http://management-host:5000/v2.0
>>> identity_uri = http://management-host:35357
>>> admin_user = nova
>>> admin_tenant_name = service
>>> admin_password = nova2014agprod2
>>> [libvirt]
>>> live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER,
>>> VIR_MIGRATE_LIVE #, VIR_MIGRATE_TUNNELLED
>>> block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER,
>>> VIR_MIGRATE_NON_SHARED_INC, VIR_MIGRATE_LIVE
>>> [matchmaker_redis]
>>> [matchmaker_ring]
>>> [metrics]
>>> [neutron]
>>> url = http://management-host:9696
>>> admin_username = neutron
>>> admin_password = neutronpassword
>>> admin_tenant_name = service
>>> admin_auth_url = http://management-host:35357/v2.0
>>> auth_strategy = keystone
>>> [osapi_v3]
>>> [rdp]
>>> [serial_console]
>>> [spice]
>>> [ssl]
>>> [trusted_computing]
>>> [upgrade_levels]
>>> compute=icehouse
>>> conductor=icehouse
>>> [vmware]
>>> [xenserver]
>>> [zookeeper]
>>>
>>>
>>>
>>>
>>> Neutron Config
>>> ============
>>> [DEFAULT]
>>> auth_strategy = keystone
>>> rpc_backend = neutron.openstack.common.rpc.impl_qpid
>>> qpid_hostname = management-host
>>> core_plugin = ml2
>>> service_plugins = router
>>> dhcp_lease_duration = 604800
>>> dhcp_agents_per_network = 3
>>> [matchmaker_redis]
>>> [matchmaker_ring]
>>> [quotas]
>>> [agent]
>>> [keystone_authtoken]
>>> auth_uri = http://management-host:5000
>>> identity_uri = http://management-host:35357
>>> admin_tenant_name = service
>>> admin_user = neutron
>>> admin_password = neutronpassword
>>> auth_host = management-host
>>> auth_protocol = http
>>> auth_port = 35357
>>> [database]
>>> [service_providers]
>>>
>>> service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
>>>
>>> service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
>>>
>>>
>>> Neutron Plugin
>>> ============
>>> [ml2]
>>> type_drivers = local,flat
>>> mechanism_drivers = openvswitch
>>> [ml2_type_flat]
>>> flat_networks = physnet3
>>> [ml2_type_vlan]
>>> [ml2_type_gre]
>>> tunnel_id_ranges = 1:1000
>>> [ml2_type_vxlan]
>>> [securitygroup]
>>> firewall_driver = neutron.agent.firewall.NoopFirewallDriver
>>> enable_security_group = False
>>> [ovs]
>>> enable_tunneling = False
>>> local_ip = 10.201.171.244
>>> network_vlan_ranges = physnet3
>>> bridge_mappings = physnet3:br-bond0
>>>
>>>
>>> Libvirt Config
>>> ===========
>>>
>>> /etc/sysconfig/libvirtd
>>>
>>> Uncomment
>>>
>>> LIBVIRTD_ARGS="--listen"
>>>
>>>
>>> /etc/libvirt/libvirtd.conf
>>>
>>> listen_tls = 0
>>>
>>> listen_tcp = 1
>>>
>>> auth_tcp = "none"
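With listen_tcp = 1 and auth_tcp = "none", libvirtd accepts plain TCP connections on its default port 16509, which peer-to-peer live migration between hosts relies on. A small sketch to check reachability from another node (the hostname is a placeholder; this only tests the TCP port, not libvirt authentication):

```python
import socket

def tcp_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 16509 is libvirt's default plain-TCP port when listen_tcp = 1.
# "compute-host" is a placeholder; use one of your compute nodes.
print(tcp_open("compute-host", 16509))
```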
>>>
>>> _______________________________________________
>>> Rdo-list mailing list
>>> Rdo-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>
>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>
>>
>>
>