Package Review: ansible-role-lunasa-hsm
by Douglas Mendizabal
Hi RDO Friends,
This email is to notify you of a Package Review request for
ansible-role-lunasa-hsm. [1]
This Ansible role can be used by TripleO to deploy Barbican using
SafeNet Luna SA HSM devices as the backend.
Source repository: https://opendev.org/openstack/ansible-role-lunasa-hsm
WIP distgit repository:
https://github.com/dmend/ansible-role-lunasa-hsm-distgit
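If you want to exercise the role locally before reviewing, a rough
smoke test could look like the sketch below (untested; the playbook
wiring is illustrative, and the role's real variables live in its
defaults/main.yml):

    git clone https://opendev.org/openstack/ansible-role-lunasa-hsm
    cat > test-lunasa.yml <<'EOF'
    # Illustrative wiring only; consult the role's defaults for the
    # actual variables it expects.
    - hosts: localhost
      connection: local
      roles:
        - role: ansible-role-lunasa-hsm
    EOF
    ANSIBLE_ROLES_PATH=. ansible-playbook -i localhost, test-lunasa.yml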
Thanks,
Douglas
[rdo-users] [Meeting] RDO meeting (2020-04-29) minutes
by YATIN KAREL
==============================
#rdo: RDO meeting - 2020-04-29
==============================
Meeting started by ykarel at 14:00:38 UTC. The full logs are available
at http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_04_29/2020/r...
Meeting summary
---------------
* roll call (ykarel, 14:01:34)
* Ussuri Updates (ykarel, 14:08:42)
* clients and non client libraries are branched and built in CBS
(ykarel, 14:08:54)
* most of the core projects are branched, CBS builds in progress
(ykarel, 14:09:08)
* LINK: https://review.rdoproject.org/r/#/q/topic:ussuri-branching
(ykarel, 14:09:19)
* LINK: https://review.rdoproject.org/r/#/c/26940/ (ykarel, 14:12:50)
* LINK:
https://github.com/openstack/vitrage-dashboard/compare/stable/train...sta...
(jcapitao, 14:13:06)
* ACTION: jcapitao to check with horizon team wrt new xstatic
packages in vitrage-ui (ykarel, 14:22:26)
* LINK:
https://github.com/openstack/vitrage-dashboard/commit/d181623f2285b7d745f...
(ykarel, 14:22:34)
* Mock downgrade to 1.4.21 (ykarel, 14:22:51)
* mock 1.4.21 is added to both train and ussuri centos8 build-deps
repo (ykarel, 14:23:00)
* LINK:
https://review.rdoproject.org/r/#/q/topic:rdo-centos8-remove-epel
(ykarel, 14:23:12)
* codesearch.rdoproject.org going to be removed in favor of
https://review.rdoproject.org/codesearch (ykarel, 14:25:56)
* LINK:
https://lists.rdoproject.org/pipermail/dev/2020-April/009364.html
(ykarel, 14:26:12)
* LINK:
https://codesearch.rdoproject.org/?q=Recommends%3A%7CSupplements%3A%7CSug...
(ykarel, 14:28:05)
* chair for next meeting (ykarel, 14:37:10)
* ACTION: jcapitao to chair next week (ykarel, 14:38:34)
* open floor (ykarel, 14:38:43)
* LINK:
https://logserver.rdoproject.org/openstack-periodic-master/opendev.org/op...
(ykarel, 14:42:16)
* LINK:
https://codesearch.rdoproject.org/?q=Recommends%3A|Supplements%3A|Suggest...
(ykarel, 14:42:46)
* LINK:
https://opendev.org/openstack/kolla/src/branch/master/docker/base/dnf.con...
(ykarel, 14:43:28)
Meeting ended at 15:00:15 UTC.
Action items, by person
-----------------------
* jcapitao
* jcapitao to check with horizon team wrt new xstatic packages in
vitrage-ui
* jcapitao to chair next week
People present (lines said)
---------------------------
* ykarel (70)
* jcapitao (19)
* amoralej (11)
* openstack (5)
* rdogerrit (3)
* mjturek (3)
Generated by `MeetBot`_ 0.1.4
rdopkg in EPEL 8
by Ken Dreyer
Hi folks,
rdopkg does not install on EPEL 8 due to missing dependencies:
https://bugzilla.redhat.com/1824313 [build python-distroinfo for EPEL 8]
https://bugzilla.redhat.com/1824314 [build python-pymod2pkg for EPEL 8]
For pymod2pkg, I'm having trouble understanding why this RPM has
certain BuildRequires. As one example, I cannot find any references to
the testresources library in the upstream pymod2pkg Git repository at
all (beyond the test-requirements.txt file).
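For reference, this is roughly how I checked (assuming a plain
checkout of the upstream repo):

    git clone https://opendev.org/openstack/pymod2pkg
    cd pymod2pkg
    # expect the only hit to be test-requirements.txt
    grep -rn testresources . --exclude-dir=.git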
- Ken
[Meeting] RDO meeting (2020-04-22) minutes
by Joel Capitao
==============================
#rdo: RDO meeting - 2020-04-22
==============================
Meeting started by jcapitao at 14:00:25 UTC. The full logs are
available at
http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_04_22/2020/rdo...
Meeting summary
---------------
* roll call (jcapitao, 14:00:47)
* Ussuri Updates (jcapitao, 14:07:33)
* CentOS8 Ussuri builder is up and running
https://trunk.rdoproject.org/centos8-ussuri/status_report.html
(jcapitao, 14:07:58)
* Reqcheck/pyver macro cleanup is being done
https://review.rdoproject.org/r/#/q/topic:ussuri-branching
(jcapitao, 14:08:17)
* LINK:
https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-ce...
(ykarel, 14:16:37)
* ACTION: jcapitao pin non-openstack puppet modules after promotion
(jcapitao, 14:21:29)
* Mock Downgrade to 1.4.21 in Ussuri CentOS8 build deps (jcapitao,
14:22:51)
* LINK:
https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2020-5a84e15907
(jcapitao, 14:23:28)
* LINK: https://review.rdoproject.org/r/#/c/26690/ (jcapitao,
14:23:42)
* LINK: https://bugs.launchpad.net/tripleo/+bug/1872881/comments/6 is
/dev/loop one (ykarel, 14:28:00)
* downgrade of mock 2.2 -> 1.4.x (while working on support of 2.2)
(jcapitao, 14:32:35)
* chair for next week (jcapitao, 14:35:12)
* ACTION: ykarel to chair next week (jcapitao, 14:36:41)
* Open floor (jcapitao, 14:36:50)
Meeting ended at 14:52:11 UTC.
Action items, by person
-----------------------
* jcapitao
* jcapitao pin non-openstack puppet modules after promotion
* ykarel
* ykarel to chair next week
People present (lines said)
---------------------------
* ykarel (41)
* jcapitao (35)
* weshay|ruck (10)
* openstack (7)
* rfolco (7)
* jpena (5)
* amoralej (2)
* rdogerrit (2)
Generated by `MeetBot`_ 0.1.4
Re: [rdo-dev] RDO | OpenStack VMs | XFS Metadata Corruption
by Pradeep Antil
Hi Ruslanas / Openstack Gurus,
Please find the response inline below:
*is it the same image all the time?* -- Yes, we are using the same
image, but the image size is around 6 GB, and recently we have observed
that VMs are successfully spawned on some compute nodes but randomly
fail on certain compute hosts. It is also observed in the nova-compute
logs that the image is resized; please refer to the snippet below:
2020-04-20 19:03:27.067 150243 DEBUG oslo_concurrency.processutils
[req-1caea4a2-7cf0-4ba5-9dda-2bb90bb746d8 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Running cmd
(subprocess): qemu-img resize
/var/lib/nova/instances/616b1a27-8b8c-486b-b8db-57c7b91a7402/disk
64424509440 execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-20 19:03:27.124 150243 DEBUG oslo_concurrency.processutils
[req-1caea4a2-7cf0-4ba5-9dda-2bb90bb746d8 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] CMD "qemu-img resize
/var/lib/nova/instances/616b1a27-8b8c-486b-b8db-57c7b91a7402/disk
64424509440" returned: 0 in 0.056s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-20 19:03:27.160 150243 DEBUG nova.virt.disk.api
[req-1caea4a2-7cf0-4ba5-9dda-2bb90bb746d8 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Checking if we can
resize image
/var/lib/nova/instances/616b1a27-8b8c-486b-b8db-57c7b91a7402/disk.
size=64424509440 can_resize_image
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/disk/api.py:216
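As a next step on our side, something like the following might tell us
whether the qcow2 layer itself is intact after the resize, so the
corruption can be pinned on the guest filesystem rather than the image
chain (a sketch; paths taken from the log above, to be run on the
compute node with the instance stopped):

    qemu-img check \
        /var/lib/nova/instances/616b1a27-8b8c-486b-b8db-57c7b91a7402/disk
    # also confirm which base file the disk is layered on:
    qemu-img info --backing-chain \
        /var/lib/nova/instances/616b1a27-8b8c-486b-b8db-57c7b91a7402/disk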
*try to create that instance using horizon or cli, whichever you favor
more. does it boot good?* -- Yes, we did try to create an instance using
the same image, and sometimes the VMs spawn properly without any errors.
But if we specify a VM count of, say, 6, some compute nodes fail to
spawn the VMs properly, and on the console we get XFS metadata
corruption errors.
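When a VM does land in this state, one possible host-side recovery path
(a sketch, assuming the nbd kernel module is available on the compute
node) is to attach the disk and run xfs_repair, as the guest kernel
suggests:

    # the instance must be shut off first
    modprobe nbd max_part=8
    qemu-nbd --connect=/dev/nbd0 \
        /var/lib/nova/instances/<instance-uuid>/disk
    xfs_repair /dev/nbd0p1    # vda1 inside the guest
    qemu-nbd --disconnect /dev/nbd0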
*I would also, do cleanup of instances (remove all), and remove all
dependent base files from here. rm -rf /var/lib/nova/instances/_base/*
-- We clear the image cache from all compute nodes before initiating
stack creation. Yes, we used the same rm command to clear the cache;
the per-node steps are sketched below.
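Roughly, the cleanup we run on each compute node looks like this
(deleting only the directory's contents keeps _base itself in place
for nova):

    virsh list --all    # confirm no instances are left on the node
    rm -rf /var/lib/nova/instances/_base/*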
One more thing about my setup: the Glance file systems on the
controllers are mounted on an external NFS share with the following
parameters,
[two attached screenshots (image.png) showing the NFS mount parameters]
Any pointers or suggestions to resolve this issue would be appreciated.
On Tue, Apr 21, 2020 at 11:37 AM Ruslanas Gžibovskis <ruslanas@lpic.lt>
wrote:
> is it the same image all the time?
>
> try to create that instance using horizon or cli, whichever you favor
> more. does it boot good?
>
> I would also, do cleanup of instances (remove all), and remove all
> dependent base files from here. rm -rf /var/lib/nova/instances/_base/
>
>
>
>
> On Thu, 16 Apr 2020 at 19:08, Pradeep Antil <pradeepantil@gmail.com>
> wrote:
>
>> Hi Techies,
>>
>> I have below RDO setup,
>>
>> - RDO 13
>> - Base OS for Controllers & Compute is Ubuntu
>> - Neutron with vxlan + VLAN (for provider N/W)
>> - Cinder backend is Ceph
>> - HugePages and CPU Pinning for VNF's VMs
>>
>> I am trying to deploy a stack which is supposed to create 18 VMs across
>> 11 compute nodes' internal disks, but every time 3 to 4 VMs out of 18
>> fail to spawn properly. On the console of these VMs I am getting the
>> errors below:
>>
>> Any ideas or suggestions on how to troubleshoot and resolve this issue?
>>
>> [ 100.681552] ffff8b37f8f86020: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00 ................
>> [ 100.681553] ffff8b37f8f86030: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00 ................
>> [ 100.681560] XFS (vda1): Metadata corruption detected at
>> xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
>> [ 100.681561] XFS (vda1): Unmount and run xfs_repair
>> [ 100.681561] XFS (vda1): First 64 bytes of corrupted metadata buffer:
>> [ 100.681562] ffff8b37f8f86000: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00 ................
>> [ 100.681562] ffff8b37f8f86010: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00 ................
>> [ 100.681563] ffff8b37f8f86020: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00 ................
>> [ 100.681564] ffff8b37f8f86030: 00 00 00 00 00 00 00 00 00 00 00 00 00
>> 00 00 00 ................
>> [ 100.681596] XFS (vda1): metadata I/O error: block 0x179b800
>> ("xfs_trans_read_buf_map") error 117 numblks 32
>> [ 100.681599] XFS (vda1): xfs_imap_to_bp: xfs_trans_read_buf() returned
>> error -117.
>> [   99.585766] cloud-init[2530]: Cloud-init v. 18.2 running
>> 'init-local' at Thu, 16 Apr 2020 10:44:21 +0000. Up 99.55 seconds.
>> [  OK  ] Started oVirt Guest Agent.
>>
>> [the same XFS metadata corruption report for xfs_inode block
>> 0x179b800, with identical all-zero buffer dumps, repeats several
>> more times on the console]
>>
>> Below are the nova-compute logs of the hypervisor where the VM was
>> scheduled to spawn:
>>
>> 3T06:04:55Z,direct_url=<?>,disk_format='qcow2',id=c255bbbc-c8c3-462e-b827-1d35db08d283,min_disk=0,min_ram=0,name='vnf-scef-18.5',owner='36c70ae400e74fc2859f44815d0c9afb',properties=ImageMetaProps,protected=<?>,size=7143292928,status='active',tags=<?>,updated_at=2020-03-03T06:05:49Z,virtual_size=<?>,visibility=<?>)
>> rescue=None block_device_info={'swap': None, 'root_device_name':
>> u'/dev/vda', 'ephemerals': [], 'block_device_mapping': []} _get_guest_xml
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5419
>> 2020-04-16 16:12:28.310 219284 DEBUG oslo_concurrency.processutils
>> [req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> --force-share" returned: 0 in 0.031s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:12:28.310 219284 DEBUG oslo_concurrency.processutils
>> [req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Running cmd
>> (subprocess): qemu-img resize
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> 64424509440 execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:12:28.322 219284 DEBUG oslo_concurrency.processutils
>> [req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] CMD "qemu-img resize
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> 64424509440" returned: 0 in 0.012s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:12:28.323 219284 DEBUG oslo_concurrency.lockutils
>> [req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Lock
>> "86692cd1e738b8df7cf1f951967c61e92222fc4c" released by
>> "nova.virt.libvirt.imagebackend.copy_qcow2_image" :: held 0.092s inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>> 2020-04-16 16:12:28.323 219284 DEBUG oslo_concurrency.processutils
>> [req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:12:28.338 219284 DEBUG nova.virt.libvirt.driver
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] CPU mode 'host-model'
>> model '' was chosen, with extra flags: '' _get_guest_cpu_model_config
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3909
>> 2020-04-16 16:12:28.338 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Getting desirable
>> topologies for flavor
>> Flavor(created_at=2020-03-23T11:20:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw:cpu_policy='dedicated',hw:mem_page_size='1048576'},flavorid='03e45d45-f4f4-4c24-8b70-678c3703402f',id=102,is_public=False,memory_mb=49152,name='dmdc-traffic-flavor',projects=<?>,root_gb=60,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=16)
>> and image_meta
>> ImageMeta(checksum='69f8c18e59db9669d669d04824507a82',container_format='bare',created_at=2020-03-03T06:07:18Z,direct_url=<?>,disk_format='qcow2',id=d31e39bc-c2b7-42ad-968f-7e782dd72943,min_disk=0,min_ram=0,name='vnf-dmdc-18.5.0',owner='36c70ae400e74fc2859f44815d0c9afb',properties=ImageMetaProps,protected=<?>,size=5569380352,status='active',tags=<?>,updated_at=2020-03-03T06:08:03Z,virtual_size=<?>,visibility=<?>),
>> allow threads: True _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:551
>> 2020-04-16 16:12:28.339 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:297
>> 2020-04-16 16:12:28.339 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Image limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:308
>> 2020-04-16 16:12:28.339 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor pref -1:-1:-1
>> _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:331
>> 2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Image pref -1:-1:-1
>> _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:350
>> 2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Chosen -1:-1:-1 limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:379
>> 2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Topology preferred
>> VirtCPUTopology(cores=-1,sockets=-1,threads=-1), maximum
>> VirtCPUTopology(cores=65536,sockets=65536,threads=65536)
>> _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:555
>> 2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Build topologies for 16
>> vcpu(s) 16:16:16 _get_possible_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:418
>> 2020-04-16 16:12:28.341 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Got 15 possible
>> topologies _get_possible_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:445
>> 2020-04-16 16:12:28.341 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Possible topologies
>> [VirtCPUTopology(cores=1,sockets=16,threads=1),
>> VirtCPUTopology(cores=2,sockets=8,threads=1),
>> VirtCPUTopology(cores=4,sockets=4,threads=1),
>> VirtCPUTopology(cores=8,sockets=2,threads=1),
>> VirtCPUTopology(cores=16,sockets=1,threads=1),
>> VirtCPUTopology(cores=1,sockets=8,threads=2),
>> VirtCPUTopology(cores=2,sockets=4,threads=2),
>> VirtCPUTopology(cores=4,sockets=2,threads=2),
>> VirtCPUTopology(cores=8,sockets=1,threads=2),
>> VirtCPUTopology(cores=1,sockets=4,threads=4),
>> VirtCPUTopology(cores=2,sockets=2,threads=4),
>> VirtCPUTopology(cores=4,sockets=1,threads=4),
>> VirtCPUTopology(cores=1,sockets=2,threads=8),
>> VirtCPUTopology(cores=2,sockets=1,threads=8),
>> VirtCPUTopology(cores=1,sockets=1,threads=16)]
>> _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:560
>> 2020-04-16 16:12:28.341 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Filtering topologies
>> best for 2 threads _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:578
>> 2020-04-16 16:12:28.342 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Remaining possible
>> topologies [VirtCPUTopology(cores=1,sockets=8,threads=2),
>> VirtCPUTopology(cores=2,sockets=4,threads=2),
>> VirtCPUTopology(cores=4,sockets=2,threads=2),
>> VirtCPUTopology(cores=8,sockets=1,threads=2)] _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:583
>> 2020-04-16 16:12:28.342 219284 DEBUG nova.virt.hardware
>> [req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Sorted desired
>> topologies [VirtCPUTopology(cores=1,sockets=8,threads=2),
>> VirtCPUTopology(cores=2,sockets=4,threads=2),
>> VirtCPUTopology(cores=4,sockets=2,threads=2),
>> VirtCPUTopology(cores=8,sockets=1,threads=2)] _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:586
>> 2020-04-16 16:12:28.344 219284 DEBUG nova.virt.libvirt.driver
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] CPU mode 'host-model'
>> model '' was chosen, with extra flags: '' _get_guest_cpu_model_config
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3909
>> 2020-04-16 16:12:28.345 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Getting desirable
>> topologies for flavor
>> Flavor(created_at=2020-03-23T11:20:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw:cpu_policy='dedicated',hw:mem_page_size='1048576'},flavorid='d60b66d4-c0e0-4292-9113-1df2d94d57a5',id=90,is_public=False,memory_mb=57344,name='scef-traffic-flavor',projects=<?>,root_gb=60,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=16)
>> and image_meta
>> ImageMeta(checksum='3fe0e06194e0b5327ba38bb2367f760d',container_format='bare',created_at=2020-03-03T06:04:55Z,direct_url=<?>,disk_format='qcow2',id=c255bbbc-c8c3-462e-b827-1d35db08d283,min_disk=0,min_ram=0,name='vnf-scef-18.5',owner='36c70ae400e74fc2859f44815d0c9afb',properties=ImageMetaProps,protected=<?>,size=7143292928,status='active',tags=<?>,updated_at=2020-03-03T06:05:49Z,virtual_size=<?>,visibility=<?>),
>> allow threads: True _get_desirable_cpu_topologies
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:551
>> 2020-04-16 16:12:28.345 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:297
>> 2020-04-16 16:12:28.345 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Image limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:308
>> 2020-04-16 16:12:28.346 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor pref -1:-1:-1
>> _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:331
>> 2020-04-16 16:12:28.346 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Image pref -1:-1:-1
>> _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:350
>> 2020-04-16 16:12:28.346 219284 DEBUG nova.virt.hardware
>> [req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
>> 975a7d3840a141b0a20a9dc60e3da6cd - default default] Chosen -1:-1:-1 limits
>> 65536:65536:65536 _get_cpu_topology_constraints
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:379
>> packages/nova/network/base_api.py:48
>> 2020-04-16 16:34:48.580 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Releasing semaphore
>> "refresh_cache-f33b2602-ac5f-491e-bdb8-7e7f9376bcad" lock
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
>> 2020-04-16 16:34:48.580 219284 DEBUG nova.compute.manager
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] [instance:
>> f33b2602-ac5f-491e-bdb8-7e7f9376bcad] Updated the network info_cache for
>> instance _heal_instance_info_cache
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/manager.py:6827
>> 2020-04-16 16:34:50.580 219284 DEBUG oslo_service.periodic_task
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
>> ComputeManager._run_image_cache_manager_pass run_periodic_tasks
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>> 2020-04-16 16:34:50.581 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "storage-registry-lock" acquired by
>> "nova.virt.storage_users.do_register_storage_use" :: waited 0.000s inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>> 2020-04-16 16:34:50.581 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "storage-registry-lock" released by
>> "nova.virt.storage_users.do_register_storage_use" :: held 0.000s inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>> 2020-04-16 16:34:50.581 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "storage-registry-lock" acquired by
>> "nova.virt.storage_users.do_get_storage_users" :: waited 0.000s inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>> 2020-04-16 16:34:50.582 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "storage-registry-lock" released by
>> "nova.virt.storage_users.do_get_storage_users" :: held 0.000s inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>> 2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Verify base images
>> _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:348
>> 2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id yields
>> fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709
>> _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>> 2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
>> b8783f95-138b-4265-a09d-55ec9d9ad35d yields fingerprint
>> b40b27e04896d063bc591b19642da8910da3eb1f _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>> 2020-04-16 16:34:50.628 219284 INFO nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> b8783f95-138b-4265-a09d-55ec9d9ad35d at
>> (/var/lib/nova/instances/_base/b40b27e04896d063bc591b19642da8910da3eb1f):
>> checking
>> 2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> b8783f95-138b-4265-a09d-55ec9d9ad35d at
>> (/var/lib/nova/instances/_base/b40b27e04896d063bc591b19642da8910da3eb1f):
>> image is in use _mark_in_use
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
>> 2020-04-16 16:34:50.629 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
>> c255bbbc-c8c3-462e-b827-1d35db08d283 yields fingerprint
>> 86692cd1e738b8df7cf1f951967c61e92222fc4c _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>> 2020-04-16 16:34:50.630 219284 INFO nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> c255bbbc-c8c3-462e-b827-1d35db08d283 at
>> (/var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c):
>> checking
>> 2020-04-16 16:34:50.630 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> c255bbbc-c8c3-462e-b827-1d35db08d283 at
>> (/var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c):
>> image is in use _mark_in_use
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
>> 2020-04-16 16:34:50.630 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
>> d31e39bc-c2b7-42ad-968f-7e782dd72943 yields fingerprint
>> 5c538ead16d8375e4890e8b9bb1aa080edc75f33 _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>> 2020-04-16 16:34:50.630 219284 INFO nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> d31e39bc-c2b7-42ad-968f-7e782dd72943 at
>> (/var/lib/nova/instances/_base/5c538ead16d8375e4890e8b9bb1aa080edc75f33):
>> checking
>> 2020-04-16 16:34:50.630 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> d31e39bc-c2b7-42ad-968f-7e782dd72943 at
>> (/var/lib/nova/instances/_base/5c538ead16d8375e4890e8b9bb1aa080edc75f33):
>> image is in use _mark_in_use
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
>> 2020-04-16 16:34:50.631 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
>> b3af2bf0-055b-48fb-aedc-4683468a3f74 yields fingerprint
>> 7af98c4d49b766d82eec8169a5c87be4eb56e5eb _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
>> 2020-04-16 16:34:50.631 219284 INFO nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> b3af2bf0-055b-48fb-aedc-4683468a3f74 at
>> (/var/lib/nova/instances/_base/7af98c4d49b766d82eec8169a5c87be4eb56e5eb):
>> checking
>> 2020-04-16 16:34:50.631 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
>> b3af2bf0-055b-48fb-aedc-4683468a3f74 at
>> (/var/lib/nova/instances/_base/7af98c4d49b766d82eec8169a5c87be4eb56e5eb):
>> image is in use _mark_in_use
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
>> 2020-04-16 16:34:50.632 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> f33b2602-ac5f-491e-bdb8-7e7f9376bcad is a valid instance name
>> _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
>> 2020-04-16 16:34:50.632 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> f33b2602-ac5f-491e-bdb8-7e7f9376bcad has a disk file _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
>> 2020-04-16 16:34:50.632 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:34:50.663 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
>> --force-share" returned: 0 in 0.031s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:34:50.664 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> f33b2602-ac5f-491e-bdb8-7e7f9376bcad is backed by
>> b40b27e04896d063bc591b19642da8910da3eb1f _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
>> 2020-04-16 16:34:50.665 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> f117eb96-06a9-4c91-9c5c-111228e24d66 is a valid instance name
>> _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
>> 2020-04-16 16:34:50.665 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> f117eb96-06a9-4c91-9c5c-111228e24d66 has a disk file _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
>> 2020-04-16 16:34:50.665 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:34:50.694 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
>> --force-share" returned: 0 in 0.029s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:34:50.695 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> f117eb96-06a9-4c91-9c5c-111228e24d66 is backed by
>> 7af98c4d49b766d82eec8169a5c87be4eb56e5eb _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
>> 2020-04-16 16:34:50.695 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> 5ba39de3-f5f8-46a2-908d-c43b901e1696 is a valid instance name
>> _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
>> 2020-04-16 16:34:50.695 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> 5ba39de3-f5f8-46a2-908d-c43b901e1696 has a disk file _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
>> 2020-04-16 16:34:50.695 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:34:50.723 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
>> --force-share" returned: 0 in 0.028s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:34:50.724 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> 5ba39de3-f5f8-46a2-908d-c43b901e1696 is backed by
>> 86692cd1e738b8df7cf1f951967c61e92222fc4c _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
>> 2020-04-16 16:34:50.724 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> d3d2837b-49c3-4822-b26b-4b3c03d344ae is a valid instance name
>> _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
>> 2020-04-16 16:34:50.724 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> d3d2837b-49c3-4822-b26b-4b3c03d344ae has a disk file _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
>> 2020-04-16 16:34:50.725 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:34:50.752 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
>> --force-share" returned: 0 in 0.028s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:34:50.753 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> d3d2837b-49c3-4822-b26b-4b3c03d344ae is backed by
>> 5c538ead16d8375e4890e8b9bb1aa080edc75f33 _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
>> 2020-04-16 16:34:50.753 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> dfa80e78-ee02-46e5-ba7a-0874fa37da56 is a valid instance name
>> _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
>> 2020-04-16 16:34:50.754 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> dfa80e78-ee02-46e5-ba7a-0874fa37da56 has a disk file _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
>> 2020-04-16 16:34:50.754 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:34:50.781 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
>> --force-share" returned: 0 in 0.027s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> 2020-04-16 16:34:50.782 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> dfa80e78-ee02-46e5-ba7a-0874fa37da56 is backed by
>> 86692cd1e738b8df7cf1f951967c61e92222fc4c _list_backing_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
>> 2020-04-16 16:34:50.782 219284 INFO nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Active base files:
>> /var/lib/nova/instances/_base/b40b27e04896d063bc591b19642da8910da3eb1f
>> /var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c
>> /var/lib/nova/instances/_base/5c538ead16d8375e4890e8b9bb1aa080edc75f33
>> /var/lib/nova/instances/_base/7af98c4d49b766d82eec8169a5c87be4eb56e5eb
>> 2020-04-16 16:34:50.783 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Verification complete
>> _age_and_verify_cached_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:384
>> 2020-04-16 16:34:50.783 219284 DEBUG nova.virt.libvirt.imagecache
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Verify swap images
>> _age_and_verify_swap_images
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:333
>> 2020-04-16 16:35:01.887 219284 DEBUG oslo_service.periodic_task
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
>> ComputeManager.update_available_resource run_periodic_tasks
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>> 2020-04-16 16:35:01.910 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Auditing locally
>> available compute resources for KO1A3D02O131106CM07 (node:
>> KO1A3D02O131106CM07.openstack.local) update_available_resource
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:689
>> 2020-04-16 16:35:02.009 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
>> (subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
>> oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
>> --force-share execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
>> 2020-04-16 16:35:02.040 219284 DEBUG oslo_concurrency.processutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
>> "/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
>> --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
>> --force-share" returned: 0 in 0.031s execute
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
>> [the same paired qemu-img info invocations repeat, twice per disk,
>> for the remaining instances (f33b2602..., d3d2837b..., dfa80e78...,
>> f117eb96..., 5ba39de3...), each returning 0; nova also logs
>> "skipping disk for instance-00000636 as it does not have a path"
>> twice]
>> 2020-04-16 16:35:02.669 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Hypervisor/Node
>> resource view: name=KO1A3D02O131106CM07.openstack.local free_ram=72406MB
>> free_disk=523GB free_vcpus=10 pci_devices=[{"dev_id": "pci_0000_3a_0a_7",
>> "product_id": "2047",
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:828
>> 2020-04-16 16:35:02.670 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "compute_resources" acquired by
>> "nova.compute.resource_tracker._update_available_resource" :: waited 0.000s
>> inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
>> 2020-04-16 16:35:02.729 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Compute driver doesn't
>> require allocation refresh and we're on a compute host in a deployment that
>> only has compute hosts with Nova versions >=16 (Pike). Skipping
>> auto-correction of allocations. _update_usage_from_instances
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1247
>> 2020-04-16 16:35:02.784 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> 5ba39de3-f5f8-46a2-908d-c43b901e1696 actively managed on this compute host
>> and has allocations in placement: {u'resources': {u'VCPU': 16,
>> u'MEMORY_MB': 57344, u'DISK_GB': 60}}.
>> _remove_deleted_instances_allocations
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
>> 2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> dfa80e78-ee02-46e5-ba7a-0874fa37da56 actively managed on this compute host
>> and has allocations in placement: {u'resources': {u'VCPU': 12,
>> u'MEMORY_MB': 24576, u'DISK_GB': 60}}.
>> _remove_deleted_instances_allocations
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
>> 2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> f33b2602-ac5f-491e-bdb8-7e7f9376bcad actively managed on this compute host
>> and has allocations in placement: {u'resources': {u'VCPU': 16,
>> u'MEMORY_MB': 49152, u'DISK_GB': 40}}.
>> _remove_deleted_instances_allocations
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
>> 2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> d3d2837b-49c3-4822-b26b-4b3c03d344ae actively managed on this compute host
>> and has allocations in placement: {u'resources': {u'VCPU': 16,
>> u'MEMORY_MB': 49152, u'DISK_GB': 60}}.
>> _remove_deleted_instances_allocations
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
>> 2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
>> f117eb96-06a9-4c91-9c5c-111228e24d66 actively managed on this compute host
>> and has allocations in placement: {u'resources': {u'VCPU': 2, u'MEMORY_MB':
>> 4096, u'DISK_GB': 20}}. _remove_deleted_instances_allocations
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
>> 2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Total usable vcpus:
>> 72, total allocated vcpus: 62 _report_final_resource_view
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:844
>> 2020-04-16 16:35:02.786 219284 INFO nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Final resource view:
>> name=KO1A3D02O131106CM07.openstack.local phys_ram=385391MB
>> used_ram=192512MB phys_disk=548GB used_disk=250GB total_vcpus=72
>> used_vcpus=62 pci_stats=[]
>> 2020-04-16 16:35:02.814 219284 DEBUG nova.compute.resource_tracker
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Compute_service record
>> updated for KO1A3D02O131106CM07:KO1A3D02O131106CM07.openstack.local
>> _update_available_resource
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:784
>> 2020-04-16 16:35:02.814 219284 DEBUG oslo_concurrency.lockutils
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
>> "compute_resources" released by
>> "nova.compute.resource_tracker._update_available_resource" :: held 0.144s
>> inner
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
>> 2020-04-16 16:35:37.612 219284 DEBUG oslo_service.periodic_task
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
>> ComputeManager._reclaim_queued_deletes run_periodic_tasks
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>> 2020-04-16 16:35:37.613 219284 DEBUG nova.compute.manager
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
>> CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/manager.py:7438
>> 2020-04-16 16:35:38.685 219284 DEBUG oslo_service.periodic_task
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
>> ComputeManager._poll_rebooting_instances run_periodic_tasks
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>> 2020-04-16 16:35:39.685 219284 DEBUG oslo_service.periodic_task
>> [req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
>> ComputeManager._instance_usage_audit run_periodic_tasks
>> /openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
>>
>> --
>> Best Regards
>> Pradeep Kumar
>>
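
The block of DEBUG lines above is oslo.concurrency's subprocess guard at
work: for each local disk, nova-compute re-executes qemu-img under
"python -m oslo_concurrency.prlimit" so that a malformed image cannot
consume unbounded CPU or memory while being inspected. A minimal sketch
of that call follows; the probe_disk helper is invented for
illustration, and the limit values are simply the --as/--cpu numbers
visible in the log, not copied from Nova's source.

    from oslo_concurrency import processutils

    # Matches the wrapper seen in the log:
    #   prlimit --as=1073741824 --cpu=30
    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        cpu_time=30,                          # seconds of CPU time
        address_space=1024 * 1024 * 1024)     # 1 GiB of address space

    def probe_disk(path):
        # --force-share lets qemu-img read an image that a running
        # guest still holds open read/write.
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C',
            'qemu-img', 'info', path, '--force-share',
            prlimit=QEMU_IMG_LIMITS)
        return out

Every probe above returned 0 in roughly 0.03s, so this part of the
update cycle looks healthy.
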
>
>
> --
> Ruslanas Gžibovskis
> +370 6030 7030
>
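
Everything from "Hypervisor/Node resource view" down to
"Compute_service record updated" runs under the "compute_resources"
semaphore: the lockutils lines show it was acquired with no waiting and
held for 0.144s, during which the resource tracker compared each
instance's placement allocations (VCPU, MEMORY_MB, DISK_GB) with what
it manages locally. A minimal sketch of that locking pattern, assuming
only that oslo.concurrency is available; the empty function body is a
placeholder, not Nova's code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # Everything logged between "acquired" and "released" above
        # (hypervisor view, allocation audit, final resource view)
        # happens while this semaphore is held: 0.144s in the log.
        pass

Note the final view: used_vcpus=62 of total_vcpus=72, i.e. only 10
vCPUs remain free on this node.
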
--
Best Regards
Pradeep Kumar
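
The trailing entries are oslo.service's periodic task loop firing the
ComputeManager housekeeping tasks; _reclaim_queued_deletes fires but
exits immediately because reclaim_instance_interval is <= 0, meaning
soft-delete reclaim is disabled. A minimal sketch of that pattern,
assuming the config option is registered elsewhere; the Manager class
name is illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super(Manager, self).__init__(CONF)

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            # Mirrors the guard in the log: the task runs every cycle
            # but skips the actual reclaim work while the interval is
            # non-positive.
            if CONF.reclaim_instance_interval <= 0:
                return
            # ... reclaim soft-deleted instances here ...

Taken together, the DEBUG output above is the regular resource-update
and periodic-task cycle of nova-compute rather than an error in itself.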