[rdo-users] Issue creating an instance in Nova and error log when restarting the nova-compute service

Bipul <bipul.gogoi@gmail.com>
Sun Dec 1 03:22:33 UTC 2019


Dear users,

I am having a problem creating an instance: Nova is not able to
determine a valid host.

I have noticed that when I restart the nova-compute service, it restarts
successfully, BUT it logs the following errors in nova-compute.log:

2019-11-30 06:29:30.324 145061 ERROR nova.compute.resource_tracker
[req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] Skipping removal of
allocations for deleted instances: Failed to retrieve allocations for
resource provider 27a39914-a509-4261-90f5-8135ad471843: <!DOCTYPE HTML
PUBLIC "-//IETF//DTD HTML 2.0//EN">

: ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations
for resource provider 27a39914-a509-4261-90f5-8135ad471843: <!DOCTYPE HTML
PUBLIC "-//IETF//DTD HTML 2.0//EN">
2019-11-30 06:29:30.388 145061 ERROR nova.scheduler.client.report
[req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] [None] Failed to
retrieve resource provider tree from placement API for UUID
27a39914-a509-4261-90f5-8135ad471843. Got 500: <!DOCTYPE HTML PUBLIC
"-//IETF//DTD HTML 2.0//EN">

The UUID is correct:

MariaDB [(none)]> select uuid from nova.compute_nodes where host='openstack.bipul.com';
+--------------------------------------+
| uuid                                 |
+--------------------------------------+
| 27a39914-a509-4261-90f5-8135ad471843 |
+--------------------------------------+
1 row in set (0.000 sec)

MariaDB [(none)]>
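
In case it helps, the same provider can also be cross-checked on the
placement side of the database. A query along these lines should work
(this assumes the extracted placement service keeps its data in a
database named "placement"; the database name may differ per install):

MariaDB [(none)]> select uuid, name from placement.resource_providers
    -> where uuid='27a39914-a509-4261-90f5-8135ad471843';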


1) nova.conf has not been changed; it is the same file that ships with
the distribution.

2) OpenStack overall health seems OK; all services are in the running state.

3) Problem: the placement URL on port 8778 (URL: http://<IP
address>:8778/placement) returns an internal server error (500) when
accessed via a web browser or curl (the exact curl invocation is shown
after this list).

4) nova-status upgrade check also fails with InternalServerError:
Internal Server Error (HTTP 500) (commands shown after this list).

5) I followed the standard installation method described at
https://www.rdoproject.org/install/packstack/

6) Attached: the log output from a nova-compute service restart, and
the Nova service status.
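
For reference, the curl check from point 3 was essentially the
following. The token step assumes the packstack admin credentials file
(~/keystonerc_admin) has been sourced; note that even an
unauthenticated GET from the browser returns this 500 rather than the
401 I would expect from placement:

# source the admin credentials created by packstack
source ~/keystonerc_admin
# request the placement root with a valid token
curl -i -H "X-Auth-Token: $(openstack token issue -f value -c id)" \
     http://<IP address>:8778/placement

For point 4, the command was simply:

nova-status upgrade check

and the catalog can be checked to confirm it points at port 8778:

openstack endpoint list --service placement

Since the 500 comes back as a stock Apache error page ("[no address
given]"), I assume placement is running under httpd and the real
traceback is in its logs; the paths below are guesses for an
RDO/packstack node and may differ:

tail -n 50 /var/log/placement/placement-api.log
tail -n 50 /var/log/httpd/placement_wsgi_error*.log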

Appreciate all your help

Thanks
Bipul

<< Nova log >>

2019-11-30 06:29:27.391 145061 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge, noop
2019-11-30 06:29:28.560 145061 INFO nova.virt.driver [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
2019-11-30 06:29:29.228 145061 WARNING os_brick.initiator.connectors.remotefs [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Connection details not present. RemoteFsClient may not initialize properly.
2019-11-30 06:29:29.244 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "dhcpbridge" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
).  Its value may be silently ignored in the future.
2019-11-30 06:29:29.244 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "dhcpbridge_flagfile" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
).  Its value may be silently ignored in the future.
2019-11-30 06:29:29.246 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "force_dhcp_release" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
).  Its value may be silently ignored in the future.
2019-11-30 06:29:29.250 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "force_snat_range" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
).  Its value may be silently ignored in the future.
2019-11-30 06:29:29.272 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "url" from group "neutron" is deprecated for removal (Endpoint lookup uses the service catalog via common keystoneauth1 Adapter configuration options. In the current release, "url" will override this behavior, but will be ignored and/or removed in a future release. To achieve the same result, use the endpoint_override option instead.).  Its value may be silently ignored in the future.
2019-11-30 06:29:29.274 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "vncserver_listen" from group "vnc" is deprecated. Use option "server_listen" from group "vnc".
2019-11-30 06:29:29.275 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "vncserver_proxyclient_address" from group "vnc" is deprecated. Use option "server_proxyclient_address" from group "vnc".
2019-11-30 06:29:29.279 145061 WARNING oslo_config.cfg [req-0631d103-d97e-48c7-974d-7fc399a0e0ee - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
live_migration_uri is deprecated for removal in favor of two other options that
allow to change live migration scheme and target URI: ``live_migration_scheme``
and ``live_migration_inbound_addr`` respectively.
).  Its value may be silently ignored in the future.
2019-11-30 06:29:29.296 145061 INFO nova.service [-] Starting compute node (version 19.0.3-1.el7)
2019-11-30 06:29:29.373 145061 INFO nova.virt.libvirt.driver [-] Connection event '1' reason 'None'
2019-11-30 06:29:29.398 145061 INFO nova.virt.libvirt.host [-] Libvirt host capabilities <capabilities>

  <host>
    <uuid>98761abc-dd6f-450a-8f2f-13db228bd2ba</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Westmere-IBRS</model>
      <vendor>Intel</vendor>
      <microcode version='1'/>
      <topology sockets='2' cores='1' threads='1'/>
      <feature name='vme'/>
      <feature name='ss'/>
      <feature name='pclmuldq'/>
      <feature name='pcid'/>
      <feature name='x2apic'/>
      <feature name='movbe'/>
      <feature name='tsc-deadline'/>
      <feature name='f16c'/>
      <feature name='rdrand'/>
      <feature name='hypervisor'/>
      <feature name='arat'/>
      <feature name='fsgsbase'/>
      <feature name='tsc_adjust'/>
      <feature name='bmi1'/>
      <feature name='smep'/>
      <feature name='bmi2'/>
      <feature name='invpcid'/>
      <feature name='stibp'/>
      <feature name='ssbd'/>
      <feature name='rdtscp'/>
      <feature name='abm'/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
    </cpu>
    <power_management>
      <suspend_mem/>
    </power_management>
    <iommu support='no'/>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
        <uri_transport>rdma</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>5242332</memory>
          <pages unit='KiB' size='4'>1310583</pages>
          <pages unit='KiB' size='2048'>0</pages>
          <distances>
            <sibling id='0' value='10'/>
          </distances>
          <cpus num='2'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
            <cpu id='1' socket_id='1' core_id='0' siblings='1'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <cache>
      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
    </cache>
    <secmodel>
      <model>none</model>
      <doi>0</doi>
    </secmodel>
    <secmodel>
      <model>dac</model>
      <doi>0</doi>
      <baselabel type='kvm'>+107:+107</baselabel>
      <baselabel type='qemu'>+107:+107</baselabel>
    </secmodel>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine>
      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.6.0</machine>
      <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine>
      <machine maxCpus='240'>rhel6.3.0</machine>
      <machine maxCpus='240'>rhel6.4.0</machine>
      <machine maxCpus='240'>rhel6.0.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine>
      <machine maxCpus='255'>pc-q35-rhel7.3.0</machine>
      <machine maxCpus='240'>rhel6.5.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.4.0</machine>
      <machine maxCpus='240'>rhel6.6.0</machine>
      <machine maxCpus='240'>rhel6.1.0</machine>
      <machine maxCpus='240'>rhel6.2.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.5.0</machine>
      <domain type='qemu'/>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
      <pae/>
      <nonpae/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine>
      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.6.0</machine>
      <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine>
      <machine maxCpus='240'>rhel6.3.0</machine>
      <machine maxCpus='240'>rhel6.4.0</machine>
      <machine maxCpus='240'>rhel6.0.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine>
      <machine maxCpus='255'>pc-q35-rhel7.3.0</machine>
      <machine maxCpus='240'>rhel6.5.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.4.0</machine>
      <machine maxCpus='240'>rhel6.6.0</machine>
      <machine maxCpus='240'>rhel6.1.0</machine>
      <machine maxCpus='240'>rhel6.2.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.5.0</machine>
      <domain type='qemu'/>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

</capabilities>

2019-11-30 06:29:30.324 145061 ERROR nova.compute.resource_tracker [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 27a39914-a509-4261-90f5-8135ad471843: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>500 Internal Server Error</title>
</head><body>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error or
misconfiguration and was unable to complete
your request.</p>
<p>Please contact the server administrator at
 [no address given] to inform them of the time this error occurred,
 and the actions you performed just before this error.</p>
<p>More information about this error may be available
in the server error log.</p>
</body></html>
: ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 27a39914-a509-4261-90f5-8135ad471843: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
2019-11-30 06:29:30.388 145061 ERROR nova.scheduler.client.report [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] [None] Failed to retrieve resource provider tree from placement API for UUID 27a39914-a509-4261-90f5-8135ad471843. Got 500: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>500 Internal Server Error</title>
</head><body>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error or
misconfiguration and was unable to complete
your request.</p>
<p>Please contact the server administrator at
 [no address given] to inform them of the time this error occurred,
 and the actions you performed just before this error.</p>
<p>More information about this error may be available
in the server error log.</p>
</body></html>
.
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager [req-6a4d7b4f-9ab1-4669-8c7e-99e3adb5cb18 - - - - -] Error updating resources for node openstack.bipul.com.: ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 27a39914-a509-4261-90f5-8135ad471843
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager Traceback (most recent call last):
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 8148, in _update_available_resource_for_node
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     startup=startup)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 748, in update_available_resource
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     self._update_available_resource(context, resources, startup=startup)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 328, in inner
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     return f(*args, **kwargs)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 829, in _update_available_resource
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     self._update(context, cn, startup=startup)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 1036, in _update
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     self._update_to_placement(context, compute_node, startup)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 68, in wrapped_f
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     return Retrying(*dargs, **dkw).call(f, *args, **kw)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 223, in call
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     return attempt.get(self._wrap_exception)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     six.reraise(self.value[0], self.value[1], self.value[2])
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 962, in _update_to_placement
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     context, compute_node.uuid, name=compute_node.hypervisor_hostname)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 873, in get_provider_tree_and_ensure_root
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 655, in _ensure_resource_provider
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     rps_to_refresh = self._get_providers_in_tree(context, uuid)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 71, in wrapper
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     return f(self, *a, **k)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 522, in _get_providers_in_tree
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 27a39914-a509-4261-90f5-8135ad471843
2019-11-30 06:29:30.388 145061 ERROR nova.compute.manager

<< End of Nova log >>


Nova service status:

[root@openstack ~(keystone_admin)]# systemctl status openstack-nova-compute.service
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-12-01 03:05:02 EST; 4h 46min left
 Main PID: 1632 (nova-compute)
    Tasks: 22
   CGroup: /system.slice/openstack-nova-compute.service
           └─1632 /usr/bin/python2 /usr/bin/nova-compute

Dec 01 03:04:23 openstack.bipul.com systemd[1]: Starting OpenStack Nova Compute Server...
Dec 01 03:05:02 openstack.bipul.com systemd[1]: Started OpenStack Nova Compute Server.


[root@openstack ~(keystone_admin)]# systemctl status openstack-nova-conductor.service
● openstack-nova-conductor.service - OpenStack Nova Conductor Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-conductor.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-12-01 03:04:50 EST; 4h 46min left
 Main PID: 1224 (nova-conductor)
    Tasks: 3
   CGroup: /system.slice/openstack-nova-conductor.service
           ├─1224 /usr/bin/python2 /usr/bin/nova-conductor
           ├─2306 /usr/bin/python2 /usr/bin/nova-conductor
           └─2307 /usr/bin/python2 /usr/bin/nova-conductor

Dec 01 03:04:20 openstack.bipul.com systemd[1]: Starting OpenStack Nova Conductor Server...
Dec 01 03:04:50 openstack.bipul.com systemd[1]: Started OpenStack Nova Conductor Server.


[root@openstack ~(keystone_admin)]# systemctl status openstack-nova-consoleauth.service
● openstack-nova-consoleauth.service - OpenStack Nova VNC console auth Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-consoleauth.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-12-01 03:04:46 EST; 4h 46min left
 Main PID: 1233 (nova-consoleaut)
    Tasks: 1
   CGroup: /system.slice/openstack-nova-consoleauth.service
           └─1233 /usr/bin/python2 /usr/bin/nova-consoleauth

Dec 01 03:04:20 openstack.bipul.com systemd[1]: Starting OpenStack Nova VNC console auth Server...
Dec 01 03:04:46 openstack.bipul.com systemd[1]: Started OpenStack Nova VNC console auth Server.


[root@openstack ~(keystone_admin)]# systemctl status openstack-nova-scheduler.service
● openstack-nova-scheduler.service - OpenStack Nova Scheduler Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-12-01 03:04:52 EST; 4h 45min left
 Main PID: 1215 (nova-scheduler)
    Tasks: 3
   CGroup: /system.slice/openstack-nova-scheduler.service
           ├─1215 /usr/bin/python2 /usr/bin/nova-scheduler
           ├─2321 /usr/bin/python2 /usr/bin/nova-scheduler
           └─2322 /usr/bin/python2 /usr/bin/nova-scheduler

Dec 01 03:04:20 openstack.bipul.com systemd[1]: Starting OpenStack Nova Scheduler Server...
Dec 01 03:04:52 openstack.bipul.com systemd[1]: Started OpenStack Nova Scheduler Server.
[root@openstack ~(keystone_admin)]#



