[rdo-dev] RDO | OpenStack VMs | XFS Metadata Corruption

Pradeep Antil pradeepantil at gmail.com
Tue Apr 21 08:25:59 UTC 2020


Hi Ruslanas / OpenStack gurus,

Please find my responses inline below:

*is it the same image all the time?* -- Yes, we are using the same image
every time; it is around 6 GB in size. We have recently observed that VMs
spawn successfully on some compute nodes but fail randomly on certain
other compute hosts. We also see in the nova-compute logs that the image
is being resized; please refer to the snippet below:

2020-04-20 19:03:27.067 150243 DEBUG oslo_concurrency.processutils
[req-1caea4a2-7cf0-4ba5-9dda-2bb90bb746d8 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Running cmd
(subprocess): qemu-img resize
/var/lib/nova/instances/616b1a27-8b8c-486b-b8db-57c7b91a7402/disk
64424509440 execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-20 19:03:27.124 150243 DEBUG oslo_concurrency.processutils
[req-1caea4a2-7cf0-4ba5-9dda-2bb90bb746d8 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] CMD "qemu-img resize
/var/lib/nova/instances/616b1a27-8b8c-486b-b8db-57c7b91a7402/disk
64424509440" returned: 0 in 0.056s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-20 19:03:27.160 150243 DEBUG nova.virt.disk.api
[req-1caea4a2-7cf0-4ba5-9dda-2bb90bb746d8 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Checking if we can
resize image
/var/lib/nova/instances/616b1a27-8b8c-486b-b8db-57c7b91a7402/disk.
size=64424509440 can_resize_image
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/disk/api.py:216
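(For reference, the resize target of 64424509440 bytes in the log above is
exactly 60 GiB, which matches the root_gb=60 in the flavors shown further
down this thread, so the resize step itself looks expected. A quick
sanity check:)

```python
# The qemu-img resize target from the nova-compute log above,
# expressed in GiB (1 GiB = 1024**3 bytes).
resize_bytes = 64424509440
gib = resize_bytes / 1024**3
print(gib)  # 60.0, i.e. the flavor's 60 GB root disk
```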

*try to create that instance using horizon or cli, whichever you favor
more.  does it boot good?* -- Yes, we tried creating an instance with the
same image; sometimes the VMs spawn properly without any errors. But if we
specify a VM count of, say, 6, the VMs fail to spawn properly on some
compute nodes, and on the console we get XFS metadata corruption errors.

*I would also, do cleanup of instances (remove all), and remove all
dependent base files from here.  rm -rf /var/lib/nova/instances/_base/* -- We
always clear the image cache on all compute nodes before initiating
stack creation. Yes, we used the same rm command to clear the cache.
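As a side note for anyone mapping cached files back to images: the file
names under /var/lib/nova/instances/_base/ are the SHA-1 hex digest of the
Glance image ID (this is the "fingerprint" that the imagecache debug lines
in the logs further down report), so you can check which image a cached
base file belongs to. A minimal sketch in Python 3:

```python
import hashlib

def base_cache_name(image_id: str) -> str:
    """Return the _base file name Nova's imagecache derives from a Glance image ID."""
    return hashlib.sha1(image_id.encode("utf-8")).hexdigest()

# Matches the "Image id ... yields fingerprint ..." lines in the logs below:
print(base_cache_name("b8783f95-138b-4265-a09d-55ec9d9ad35d"))
# b40b27e04896d063bc591b19642da8910da3eb1f
```

(The log line showing an empty image id yielding
da39a3ee5e6b4b0d3255bfef95601890afd80709 is consistent with this: that is
the SHA-1 of the empty string.)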

One more thing I want to let you know about my setup: the Glance file
systems on the controllers are mounted on an external NFS share with the
following parameters:

[two inline screenshots showing the NFS mount parameters were attached;
not preserved in the archive]

Any pointers or suggestions to resolve this issue would be appreciated.
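One thing worth ruling out with Glance on NFS is silent corruption of the
image data in transit. In this release Glance's checksum field is an MD5 of
the uploaded image contents, so a downloaded copy of the image can be
verified against it (note the _base file will not match, since Nova
converts it to raw). This is just a sketch, not an established procedure;
the file name below is hypothetical:

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file and return its MD5 hex digest (what Glance stores as 'checksum')."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: first download the image, e.g.
#   openstack image save --file vnf-scef-18.5.qcow2 c255bbbc-c8c3-462e-b827-1d35db08d283
# then compare md5_of_file("vnf-scef-18.5.qcow2") against the checksum
# Glance reports for that image ('3fe0e06194e0b5327ba38bb2367f760d' in the
# logs below). A mismatch would point at corruption between Glance/NFS and
# the compute node.
```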



On Tue, Apr 21, 2020 at 11:37 AM Ruslanas Gžibovskis <ruslanas at lpic.lt>
wrote:

> is it the same image all the time?
>
> try to create that instance using horizon or cli, whichever you favor
> more.  does it boot good?
>
> I would also, do cleanup of instances (remove all), and remove all
> dependent base files from here.  rm -rf /var/lib/nova/instances/_base/
>
>
>
>
> On Thu, 16 Apr 2020 at 19:08, Pradeep Antil <pradeepantil at gmail.com>
> wrote:
>
>> Hi Techies,
>>
>> I have the below RDO setup:
>>
>>    - RDO 13
>>    - Base OS for Controllers & Compute is Ubuntu
>>    - Neutron with vxlan + VLAN (for provider N/W)
>>    - Cinder backend is Ceph
>>    - HugePages and CPU pinning for VNF VMs
>>
>> I am trying to deploy a stack which is supposed to create 18 VMs across
>> the internal disks of 11 compute nodes, but every time 3 to 4 of the 18
>> VMs do not spawn properly. On the console of these VMs I am getting the
>> errors below.
>>
>> Any ideas or suggestions on how to troubleshoot and resolve this issue?
>>
>> [  100.681552] ffff8b37f8f86020: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00  ................
>> [  100.681553] ffff8b37f8f86030: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00  ................
>> [  100.681560] XFS (vda1): Metadata corruption detected at
>> xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
>> [  100.681561] XFS (vda1): Unmount and run xfs_repair
>> [  100.681561] XFS (vda1): First 64 bytes of corrupted metadata buffer:
>> [  100.681562] ffff8b37f8f86000: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00  ................
>> [  100.681562] ffff8b37f8f86010: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00  ................
>> [  100.681563] ffff8b37f8f86020: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00  ................
>> [  100.681564] ffff8b37f8f86030: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00  ................
>> [  100.681596] XFS (vda1): metadata I/O error: block 0x179b800
>> ("xfs_trans_read_buf_map") error 117 numblks 32
>> [  100.681599] XFS (vda1): xfs_imap_to_bp: xfs_trans_read_buf() returned
>> error -117.
>> [   99.585766] cloud-init[2530]: Cloud-init v. 18.2 running
>> 'init-local' at Thu, 16 Apr 2020 10:44:21 +0000. Up 99.55 seconds.
>> [  OK  ] Started oVirt Guest Agent.
>>
>> [  101.086566] XFS (vda1): Metadata corruption detected at
>> xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
>> [  101.092093] XFS (vda1): Unmount and run xfs_repair
>> [  101.094660] XFS (vda1): First 64 bytes of corrupted metadata buffer:
>> [  101.097787] ffff8b37fef07000: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00  ................
>> [  101.105959] ffff8b37fef07010: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00  ................
>> [  101.110718] ffff8b37fef07020: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00  ................
>> [  101.115412] ffff8b37fef07030: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00  ................
>> [ ... the same "Metadata corruption detected ... Unmount and run
>> xfs_repair" block, with all-zero buffer dumps for block 0x179b800,
>> repeats several more times ... ]
>>
>> Below are the nova-compute logs from the hypervisor where the instance
>> was scheduled to spawn:
>>
>> 3T06:04:55Z,direct_url=<?>,disk_format='qcow2',id=c255bbbc-c8c3-462e-b827-1d35db08d283,min_disk=0,min_ram=0,name='vnf-scef-18.5',owner='36c70ae400e74fc2859f44815d0c9afb',properties=ImageMetaProps,protected=<?>,size=7143292928,status='active',tags=<?>,updated_at=2020-03-03T06:05:49Z,virtual_size=<?>,visibility=<?>)
>> rescue=None block_device_info={'swap': None, 'root_device_name':
>> u'/dev/vda', 'ephemerals': [], 'block_device_mapping': []} _get_guest_xml
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5419
>> 2020-04-16 16:12:28.310 219284 DEBUG oslo_concurrency.processutils
>> [req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> --force-share" returned: 0 in 0.031s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:12:28.310 219284 DEBUG oslo_concurrency.processutils
>> [req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Running cmd
>> (subprocess): qemu-img resize
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> 64424509440 execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:12:28.322 219284 DEBUG oslo_concurrency.processutils
>> [req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] CMD "qemu-img resize
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> 64424509440" returned: 0 in 0.012s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:12:28.323 219284 DEBUG oslo_concurrency.lockutils
>> [req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Lock
>> "86692cd1e738b8df7cf1f951967c61e92222fc4c" released by
>> "nova.virt.libvirt.imagebackend.copy_qcow2_image" :: held 0.092s inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>> 2020-04-16 16:12:28.323 219284 DEBUG oslo_concurrency.processutils
>> [req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:12:28.338 219284 DEBUG nova.virt.libvirt.driver
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] CPU mode 'host-model'
>> model '' was chosen, with extra flags: '' _get_guest_cpu_model_config
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3909
>> 2020-04-16 16:12:28.338 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Getting desirable
>> topologies for flavor
>> Flavor(created_at=2020-03-23T11:20:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw:cpu_policy='dedicated',hw:mem_page_size='1048576'},flavorid='03e45d45-f4f4-4c24-8b70-678c3703402f',id=102,is_public=False,memory_mb=49152,name='dmdc-traffic-flavor',projects=<?>,root_gb=60,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=16)
>> and image_meta
>> ImageMeta(checksum='69f8c18e59db9669d669d04824507a82',container_format='bare',created_at=2020-03-03T06:07:18Z,direct_url=<?>,disk_format='qcow2',id=d31e39bc-c2b7-42ad-968f-7e782dd72943,min_disk=0,min_ram=0,name='vnf-dmdc-18.5.0',owner='36c70ae400e74fc2859f44815d0c9afb',properties=ImageMetaProps,protected=<?>,size=5569380352,status='active',tags=<?>,updated_at=2020-03-03T06:08:03Z,virtual_size=<?>,visibility=<?>),
>> allow threads: True _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:551
>> 2020-04-16 16:12:28.339 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:297
>> 2020-04-16 16:12:28.339 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Image limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:308
>> 2020-04-16 16:12:28.339 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor pref -1:-1:-1
>> _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:331
>> 2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Image pref -1:-1:-1
>> _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:350
>> 2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Chosen -1:-1:-1 limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:379
>> 2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Topology preferred
>> VirtCPUTopology(cores=-1,sockets=-1,threads=-1), maximum
>> VirtCPUTopology(cores=65536,sockets=65536,threads=65536)
>> _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:555
>> 2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Build topologies for 16
>> vcpu(s) 16:16:16 _get_possible_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:418
>> 2020-04-16 16:12:28.341 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Got 15 possible
>> topologies _get_possible_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:445
>> 2020-04-16 16:12:28.341 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Possible topologies
>> [VirtCPUTopology(cores=1,sockets=16,threads=1),
>> VirtCPUTopology(cores=2,sockets=8,threads=1),
>> VirtCPUTopology(cores=4,sockets=4,threads=1),
>> VirtCPUTopology(cores=8,sockets=2,threads=1),
>> VirtCPUTopology(cores=16,sockets=1,threads=1),
>> VirtCPUTopology(cores=1,sockets=8,threads=2),
>> VirtCPUTopology(cores=2,sockets=4,threads=2),
>> VirtCPUTopology(cores=4,sockets=2,threads=2),
>> VirtCPUTopology(cores=8,sockets=1,threads=2),
>> VirtCPUTopology(cores=1,sockets=4,threads=4),
>> VirtCPUTopology(cores=2,sockets=2,threads=4),
>> VirtCPUTopology(cores=4,sockets=1,threads=4),
>> VirtCPUTopology(cores=1,sockets=2,threads=8),
>> VirtCPUTopology(cores=2,sockets=1,threads=8),
>> VirtCPUTopology(cores=1,sockets=1,threads=16)]
>> _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:560
>> 2020-04-16 16:12:28.341 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Filtering topologies
>> best for 2 threads _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:578
>> 2020-04-16 16:12:28.342 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Remaining possible
>> topologies [VirtCPUTopology(cores=1,sockets=8,threads=2),
>> VirtCPUTopology(cores=2,sockets=4,threads=2),
>> VirtCPUTopology(cores=4,sockets=2,threads=2),
>> VirtCPUTopology(cores=8,sockets=1,threads=2)] _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:583
>> 2020-04-16 16:12:28.342 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Sorted desired
>> topologies [VirtCPUTopology(cores=1,sockets=8,threads=2),
>> VirtCPUTopology(cores=2,sockets=4,threads=2),
>> VirtCPUTopology(cores=4,sockets=2,threads=2),
>> VirtCPUTopology(cores=8,sockets=1,threads=2)] _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:586
>> 2020-04-16 16:12:28.344 219284 DEBUG nova.virt.libvirt.driver
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] CPU mode 'host-model'
>> model '' was chosen, with extra flags: '' _get_guest_cpu_model_config
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3909
>> 2020-04-16 16:12:28.345 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Getting desirable
>> topologies for flavor
>> Flavor(created_at=2020-03-23T11:20:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw:cpu_policy='dedicated',hw:mem_page_size='1048576'},flavorid='d60b66d4-c0e0-4292-9113-1df2d94d57a5',id=90,is_public=False,memory_mb=57344,name='scef-traffic-flavor',projects=<?>,root_gb=60,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=16)
>> and image_meta
>> ImageMeta(checksum='3fe0e06194e0b5327ba38bb2367f760d',container_format='bare',created_at=2020-03-03T06:04:55Z,direct_url=<?>,disk_format='qcow2',id=c255bbbc-c8c3-462e-b827-1d35db08d283,min_disk=0,min_ram=0,name='vnf-scef-18.5',owner='36c70ae400e74fc2859f44815d0c9afb',properties=ImageMetaProps,protected=<?>,size=7143292928,status='active',tags=<?>,updated_at=2020-03-03T06:05:49Z,virtual_size=<?>,visibility=<?>),
>> allow threads: True _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:551
>> 2020-04-16 16:12:28.345 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:297
>> 2020-04-16 16:12:28.345 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Image limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:308
>> 2020-04-16 16:12:28.346 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor pref -1:-1:-1
>> _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:331
>> 2020-04-16 16:12:28.346 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Image pref -1:-1:-1
>> _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:350
>> 2020-04-16 16:12:28.346 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Chosen -1:-1:-1 limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:379
>> packages/nova/network/base_api.py:48
>> 2020-04-16 16:34:48.580 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Releasing semaphore
>> "refresh_cache-f33b2602-ac5f-491e-bdb8-7e7f9376bcad" lock
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
>> 2020-04-16 16:34:48.580 219284 DEBUG nova.compute.manager
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] [instance:
>> f33b2602-ac5f-491e-bdb8-7e7f9376bcad] Updated the network info_cache for
>> instance _heal_instance_info_cache
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/manager.py:6827
>> 2020-04-16 16:34:50.580 219284 DEBUG oslo_service.periodic_task
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
>> ComputeManager._run_image_cache_manager_pass run_periodic_tasks
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>> 2020-04-16 16:34:50.581 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "storage-registry-lock" acquired by
>> "nova.virt.storage_users.do_register_storage_use" :: waited 0.000s inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>> 2020-04-16 16:34:50.581 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "storage-registry-lock" released by
>> "nova.virt.storage_users.do_register_storage_use" :: held 0.000s inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>> 2020-04-16 16:34:50.581 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "storage-registry-lock" acquired by
>> "nova.virt.storage_users.do_get_storage_users" :: waited 0.000s inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>> 2020-04-16 16:34:50.582 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "storage-registry-lock" released by
>> "nova.virt.storage_users.do_get_storage_users" :: held 0.000s inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>> 2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Verify base images
>> _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:348
>> 2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id  yields
>> fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709
>> _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>> 2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
>> b8783f95-138b-4265-a09d-55ec9d9ad35d yields fingerprint
>> b40b27e04896d063bc591b19642da8910da3eb1f _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>> 2020-04-16 16:34:50.628 219284 INFO nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> b8783f95-138b-4265-a09d-55ec9d9ad35d at
>> (/var/lib/nova/instances/_base/b40b27e04896d063bc591b19642da8910da3eb1f):
>> checking
>> 2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> b8783f95-138b-4265-a09d-55ec9d9ad35d at
>> (/var/lib/nova/instances/_base/b40b27e04896d063bc591b19642da8910da3eb1f):
>> image is in use _mark_in_use
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
>> 2020-04-16 16:34:50.629 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
>> c255bbbc-c8c3-462e-b827-1d35db08d283 yields fingerprint
>> 86692cd1e738b8df7cf1f951967c61e92222fc4c _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>> 2020-04-16 16:34:50.630 219284 INFO nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> c255bbbc-c8c3-462e-b827-1d35db08d283 at
>> (/var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c):
>> checking
>> 2020-04-16 16:34:50.630 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> c255bbbc-c8c3-462e-b827-1d35db08d283 at
>> (/var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c):
>> image is in use _mark_in_use
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
>> 2020-04-16 16:34:50.630 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
>> d31e39bc-c2b7-42ad-968f-7e782dd72943 yields fingerprint
>> 5c538ead16d8375e4890e8b9bb1aa080edc75f33 _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>> 2020-04-16 16:34:50.630 219284 INFO nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> d31e39bc-c2b7-42ad-968f-7e782dd72943 at
>> (/var/lib/nova/instances/_base/5c538ead16d8375e4890e8b9bb1aa080edc75f33):
>> checking
>> 2020-04-16 16:34:50.630 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> d31e39bc-c2b7-42ad-968f-7e782dd72943 at
>> (/var/lib/nova/instances/_base/5c538ead16d8375e4890e8b9bb1aa080edc75f33):
>> image is in use _mark_in_use
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
>> 2020-04-16 16:34:50.631 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
>> b3af2bf0-055b-48fb-aedc-4683468a3f74 yields fingerprint
>> 7af98c4d49b766d82eec8169a5c87be4eb56e5eb _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>> 2020-04-16 16:34:50.631 219284 INFO nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> b3af2bf0-055b-48fb-aedc-4683468a3f74 at
>> (/var/lib/nova/instances/_base/7af98c4d49b766d82eec8169a5c87be4eb56e5eb):
>> checking
>> 2020-04-16 16:34:50.631 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> b3af2bf0-055b-48fb-aedc-4683468a3f74 at
>> (/var/lib/nova/instances/_base/7af98c4d49b766d82eec8169a5c87be4eb56e5eb):
>> image is in use _mark_in_use
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
>> 2020-04-16 16:34:50.632 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> f33b2602-ac5f-491e-bdb8-7e7f9376bcad is a valid instance name
>> _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
>> 2020-04-16 16:34:50.632 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> f33b2602-ac5f-491e-bdb8-7e7f9376bcad has a disk file _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
>> 2020-04-16 16:34:50.632 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:34:50.663 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
>> --force-share" returned: 0 in 0.031s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:34:50.664 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> f33b2602-ac5f-491e-bdb8-7e7f9376bcad is backed by
>> b40b27e04896d063bc591b19642da8910da3eb1f _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
>> 2020-04-16 16:34:50.665 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> f117eb96-06a9-4c91-9c5c-111228e24d66 is a valid instance name
>> _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
>> 2020-04-16 16:34:50.665 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> f117eb96-06a9-4c91-9c5c-111228e24d66 has a disk file _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
>> 2020-04-16 16:34:50.665 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:34:50.694 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
>> --force-share" returned: 0 in 0.029s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:34:50.695 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> f117eb96-06a9-4c91-9c5c-111228e24d66 is backed by
>> 7af98c4d49b766d82eec8169a5c87be4eb56e5eb _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
>> 2020-04-16 16:34:50.695 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> 5ba39de3-f5f8-46a2-908d-c43b901e1696 is a valid instance name
>> _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
>> 2020-04-16 16:34:50.695 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> 5ba39de3-f5f8-46a2-908d-c43b901e1696 has a disk file _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
>> 2020-04-16 16:34:50.695 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:34:50.723 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
>> --force-share" returned: 0 in 0.028s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:34:50.724 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> 5ba39de3-f5f8-46a2-908d-c43b901e1696 is backed by
>> 86692cd1e738b8df7cf1f951967c61e92222fc4c _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
>> 2020-04-16 16:34:50.724 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> d3d2837b-49c3-4822-b26b-4b3c03d344ae is a valid instance name
>> _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
>> 2020-04-16 16:34:50.724 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> d3d2837b-49c3-4822-b26b-4b3c03d344ae has a disk file _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
>> 2020-04-16 16:34:50.725 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:34:50.752 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
>> --force-share" returned: 0 in 0.028s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:34:50.753 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> d3d2837b-49c3-4822-b26b-4b3c03d344ae is backed by
>> 5c538ead16d8375e4890e8b9bb1aa080edc75f33 _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
>> 2020-04-16 16:34:50.753 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> dfa80e78-ee02-46e5-ba7a-0874fa37da56 is a valid instance name
>> _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
>> 2020-04-16 16:34:50.754 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> dfa80e78-ee02-46e5-ba7a-0874fa37da56 has a disk file _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
>> 2020-04-16 16:34:50.754 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:34:50.781 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> --force-share" returned: 0 in 0.027s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:34:50.782 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> dfa80e78-ee02-46e5-ba7a-0874fa37da56 is backed by
>> 86692cd1e738b8df7cf1f951967c61e92222fc4c _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
>> 2020-04-16 16:34:50.782 219284 INFO nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Active base files:
>> /var/lib/nova/instances/_base/b40b27e04896d063bc591b19642da8910da3eb1f
>> /var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c
>> /var/lib/nova/instances/_base/5c538ead16d8375e4890e8b9bb1aa080edc75f33
>> /var/lib/nova/instances/_base/7af98c4d49b766d82eec8169a5c87be4eb56e5eb
>> 2020-04-16 16:34:50.783 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Verification complete
>> _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:384
>> 2020-04-16 16:34:50.783 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Verify swap images
>> _age_and_verify_swap_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:333
>> 2020-04-16 16:35:01.887 219284 DEBUG oslo_service.periodic_task
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
>> ComputeManager.update_available_resource run_periodic_tasks
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>> 2020-04-16 16:35:01.910 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Auditing locally
>> available compute resources for KO1A3D02O131106CM07 (node:
>> KO1A3D02O131106CM07.openstack.local) update_available_resource
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:689
>> 2020-04-16 16:35:02.009 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:35:02.040 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
>> --force-share" returned: 0 in 0.031s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:35:02.041 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:35:02.070 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
>> --force-share" returned: 0 in 0.029s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:35:02.070 219284 DEBUG nova.virt.libvirt.driver
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] skipping disk for
>> instance-00000636 as it does not have a path
>> _get_instance_disk_info_from_config
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7840
>> 2020-04-16 16:35:02.071 219284 DEBUG nova.virt.libvirt.driver
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] skipping disk for
>> instance-00000636 as it does not have a path
>> _get_instance_disk_info_from_config
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7840
>> 2020-04-16 16:35:02.073 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:35:02.101 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
>> --force-share" returned: 0 in 0.028s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:35:02.101 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:35:02.129 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
>> --force-share" returned: 0 in 0.028s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:35:02.132 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:35:02.159 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> --force-share" returned: 0 in 0.028s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:35:02.160 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:35:02.187 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> --force-share" returned: 0 in 0.027s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:35:02.190 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:35:02.217 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
>> --force-share" returned: 0 in 0.027s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:35:02.218 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:35:02.245 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
>> --force-share" returned: 0 in 0.027s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:35:02.247 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:35:02.274 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
>> --force-share" returned: 0 in 0.027s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:35:02.275 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:35:02.302 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
>> --force-share" returned: 0 in 0.027s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:35:02.669 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Hypervisor/Node
>> resource view: name=KO1A3D02O131106CM07.openstack.local free_ram=72406MB
>> free_disk=523GB free_vcpus=10 pci_devices=[{"dev_id": "pci_0000_3a_0a_7",
>> "product_id": "2047",
>>  /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:828
>> 2020-04-16 16:35:02.670 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "compute_resources" acquired by
>> "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s
>> inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>> 2020-04-16 16:35:02.729 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Compute driver doesn't
>> require allocation refresh and we're on a compute host in a deployment that
>> only has compute hosts with Nova versions >=16 (Pike). Skipping
>> auto-correction of allocations.  _update_usage_from_instances
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1247
>> 2020-04-16 16:35:02.784 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> 5ba39de3-f5f8-46a2-908d-c43b901e1696 actively managed on this compute host
>> and has allocations in placement: {u'resources': {u'VCPU': 16,
>> u'MEMORY_MB': 57344, u'DISK_GB': 60}}.
>> _remove_deleted_instances_allocations
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
>> 2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> dfa80e78-ee02-46e5-ba7a-0874fa37da56 actively managed on this compute host
>> and has allocations in placement: {u'resources': {u'VCPU': 12,
>> u'MEMORY_MB': 24576, u'DISK_GB': 60}}.
>> _remove_deleted_instances_allocations
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
>> 2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> f33b2602-ac5f-491e-bdb8-7e7f9376bcad actively managed on this compute host
>> and has allocations in placement: {u'resources': {u'VCPU': 16,
>> u'MEMORY_MB': 49152, u'DISK_GB': 40}}.
>> _remove_deleted_instances_allocations
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
>> 2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> d3d2837b-49c3-4822-b26b-4b3c03d344ae actively managed on this compute host
>> and has allocations in placement: {u'resources': {u'VCPU': 16,
>> u'MEMORY_MB': 49152, u'DISK_GB': 60}}.
>> _remove_deleted_instances_allocations
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
>> 2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> f117eb96-06a9-4c91-9c5c-111228e24d66 actively managed on this compute host
>> and has allocations in placement: {u'resources': {u'VCPU': 2, u'MEMORY_MB':
>> 4096, u'DISK_GB': 20}}. _remove_deleted_instances_allocations
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
>> 2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Total usable vcpus:
>> 72, total allocated vcpus: 62 _report_final_resource_view
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:844
>> 2020-04-16 16:35:02.786 219284 INFO nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Final resource view:
>> name=KO1A3D02O131106CM07.openstack.local phys_ram=385391MB
>> used_ram=192512MB phys_disk=548GB used_disk=250GB total_vcpus=72
>> used_vcpus=62 pci_stats=[]
>> 2020-04-16 16:35:02.814 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Compute_service record
>> updated for KO1A3D02O131106CM07:KO1A3D02O131106CM07.openstack.local
>> _update_available_resource
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:784
>> 2020-04-16 16:35:02.814 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "compute_resources" released by
>> "nova.compute.resource_tracker._update_available_resource" :: held 0.144s
>> inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>> 2020-04-16 16:35:37.612 219284 DEBUG oslo_service.periodic_task
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
>> ComputeManager._reclaim_queued_deletes run_periodic_tasks
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>> 2020-04-16 16:35:37.613 219284 DEBUG nova.compute.manager
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/manager.py:7438
>> 2020-04-16 16:35:38.685 219284 DEBUG oslo_service.periodic_task
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
>> ComputeManager._poll_rebooting_instances run_periodic_tasks
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>> 2020-04-16 16:35:39.685 219284 DEBUG oslo_service.periodic_task
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
>> ComputeManager._instance_usage_audit run_periodic_tasks
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>>
>> --
>> Best Regards
>> Pradeep Kumar
>>
>
>
> --
> Ruslanas Gžibovskis
> +370 6030 7030
>
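
The fingerprint pairing at the top of this log (image id b3af2bf0-… yielding base file 7af98c4d…) can be sanity-checked offline. A minimal sketch, assuming nova derives `_base` filenames as the SHA-1 hex digest of the Glance image id (as `nova.virt.libvirt.utils.get_cache_fname` does in this release):

```python
import hashlib

def cache_fname(image_id: str) -> str:
    # Assumption: nova names cached base images by the SHA-1 digest of the
    # image id, so the log's "Image id X yields fingerprint Y" pairs are
    # reproducible from the id alone.
    return hashlib.sha1(image_id.encode("utf-8")).hexdigest()

print(cache_fname("b3af2bf0-055b-48fb-aedc-4683468a3f74"))
```

If the printed digest matches the `_base` filename in the log, the cache entry really belongs to the image being debugged; a mismatch would point at a stale or foreign base file.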

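The repeated `prlimit --as=1073741824 --cpu=30 -- qemu-img info --force-share …` invocations above are nova protecting the host: each disk probe runs with a 1 GiB address-space cap and a 30 s CPU cap, so a damaged or pathological qcow2 cannot wedge the compute node. A rough, hypothetical stand-in for that wrapper (not nova's or oslo's actual code), useful for experimenting on a compute host:

```python
import resource
import subprocess

def run_limited(cmd, as_bytes=1 << 30, cpu_seconds=30):
    """Run cmd in a child whose address space and CPU time are capped,
    mirroring the oslo_concurrency.prlimit wrapper seen in the log."""
    def _cap():
        # Applied in the child just before exec, like prlimit does.
        resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    return subprocess.run(cmd, preexec_fn=_cap, capture_output=True, text=True)

# Illustrative probe of one of the disks from the log (path assumed present):
# run_limited(["qemu-img", "info", "--force-share",
#              "/var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk"])
```

Note `preexec_fn` is POSIX-only; on a compute node this is fine, and the limits apply only to the child, not the calling service.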

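The placement allocations in the resource-tracker section are internally consistent with the reported totals: summing the per-instance `VCPU` values reproduces `total allocated vcpus: 62` (the `used_ram`/`used_disk` figures in the final view are higher than the raw sums, presumably because host reservations are folded in). A quick cross-check:

```python
# Per-instance placement allocations copied from the log above
# (instance uuids abbreviated to their first segment).
allocations = {
    "5ba39de3": {"VCPU": 16, "MEMORY_MB": 57344, "DISK_GB": 60},
    "dfa80e78": {"VCPU": 12, "MEMORY_MB": 24576, "DISK_GB": 60},
    "f33b2602": {"VCPU": 16, "MEMORY_MB": 49152, "DISK_GB": 40},
    "d3d2837b": {"VCPU": 16, "MEMORY_MB": 49152, "DISK_GB": 60},
    "f117eb96": {"VCPU": 2,  "MEMORY_MB": 4096,  "DISK_GB": 20},
}
used_vcpus = sum(a["VCPU"] for a in allocations.values())
print(used_vcpus)  # 62, matching "total allocated vcpus: 62" in the log
```

Since the host reports 72 usable vcpus, the scheduler still sees 10 free, which is why new instances keep landing here despite the failures.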
-- 
Best Regards
Pradeep Kumar
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 8053 bytes
Desc: not available
URL: <http://lists.rdoproject.org/pipermail/dev/attachments/20200421/b1067944/attachment-0002.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 12691 bytes
Desc: not available
URL: <http://lists.rdoproject.org/pipermail/dev/attachments/20200421/b1067944/attachment-0003.png>

