RDO | OpenStack VMs | XFS Metadata Corruption
by Pradeep Antil
Hi Techies,
I have the following RDO setup:
- RDO 13
- Base OS for controllers & compute nodes is Ubuntu
- Neutron with VXLAN + VLAN (for provider networks)
- Cinder backend is Ceph
- HugePages and CPU pinning for VNF VMs
I am trying to deploy a stack that is supposed to create 18 VMs across 11
compute nodes, using the compute nodes' internal disks. Every time, 3 to 4
of the 18 VMs do not spawn properly. On the console of these VMs I am
getting the errors below. Any ideas or suggestions on how to troubleshoot
and resolve this issue?
[ 100.681552] ffff8b37f8f86020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 100.681553] ffff8b37f8f86030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 100.681560] XFS (vda1): Metadata corruption detected at
xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
[ 100.681561] XFS (vda1): Unmount and run xfs_repair
[ 100.681561] XFS (vda1): First 64 bytes of corrupted metadata buffer:
[ 100.681562] ffff8b37f8f86000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 100.681562] ffff8b37f8f86010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 100.681563] ffff8b37f8f86020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 100.681564] ffff8b37f8f86030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 100.681596] XFS (vda1): metadata I/O error: block 0x179b800
("xfs_trans_read_buf_map") error 117 numblks 32
[ 100.681599] XFS (vda1): xfs_imap_to_bp: xfs_trans_read_buf() returned
error -117.
[ 99.585766] cloud-init[2530]: Cloud-init v. 18.2 running 'init-local' at
Thu, 16 Apr 2020 10:44:21 +0000. Up 99.55 seconds.
[  OK  ] Started oVirt Guest Agent.
[ 101.086566] XFS (vda1): Metadata corruption detected at
xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
[ 101.092093] XFS (vda1): Unmount and run xfs_repair
[ 101.094660] XFS (vda1): First 64 bytes of corrupted metadata buffer:
[ 101.097787] ffff8b37fef07000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.105959] ffff8b37fef07010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.110718] ffff8b37fef07020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.115412] ffff8b37fef07030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.120166] XFS (vda1): Metadata corruption detected at
xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
[ 101.125644] XFS (vda1): Unmount and run xfs_repair
[ 101.128229] XFS (vda1): First 64 bytes of corrupted metadata buffer:
[ 101.131370] ffff8b37fef07000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.138671] ffff8b37fef07010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.143427] ffff8b37fef07020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.148235] ffff8b37fef07030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.152999] XFS (vda1): Metadata corruption detected at
xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
[ 101.158479] XFS (vda1): Unmount and run xfs_repair
[ 101.161068] XFS (vda1): First 64 bytes of corrupted metadata buffer:
[ 101.169883] ffff8b37fef07000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.174751] ffff8b37fef07010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.179639] ffff8b37fef07020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.184285] ffff8b37fef07030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.189104] XFS (vda1): Metadata corruption detected at
xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
[ 101.194619] XFS (vda1): Unmount and run xfs_repair
[ 101.197228] XFS (vda1): First 64 bytes of corrupted metadata buffer:
[ 101.201109] ffff8b37fef07000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.205976] ffff8b37fef07010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.210709] ffff8b37fef07020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.215442] ffff8b37fef07030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.220196] XFS (vda1): Metadata corruption detected at
xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
[ 101.225708] XFS (vda1): Unmount and run xfs_repair
[ 101.228296] XFS (vda1): First 64 bytes of corrupted metadata buffer:
[ 101.232058] ffff8b37fef07000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.236803] ffff8b37fef07010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.241538] ffff8b37fef07020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.246252] ffff8b37fef07030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.250997] XFS (vda1): Metadata corruption detected at
xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
[ 101.256518] XFS (vda1): Unmount and run xfs_repair
[ 101.259105] XFS (vda1): First 64 bytes of corrupted metadata buffer:
[ 101.262912] ffff8b37fef07000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.267649] ffff8b37fef07010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.272360] ffff8b37fef07020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.277088] ffff8b37fef07030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.281831] XFS (vda1): Metadata corruption detected at
xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
[ 101.287322] XFS (vda1): Unmount and run xfs_repair
[ 101.295401] XFS (vda1): First 64 bytes of corrupted metadata buffer:
[ 101.298546] ffff8b37fef07000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.303283] ffff8b37fef07010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.308009] ffff8b37fef07020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.312747] ffff8b37fef07030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.317460] XFS (vda1): Metadata corruption detected at
xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
[ 101.322960] XFS (vda1): Unmount and run xfs_repair
[ 101.326233] XFS (vda1): First 64 bytes of corrupted metadata buffer:
[ 101.329383] ffff8b37fef07000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.334100] ffff8b37fef07010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.338822] ffff8b37fef07020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.343549] ffff8b37fef07030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 ................
[ 101.348297] XFS (vda1): Metadata corruption detected at
xfs_inode_buf_verify+0x79/0x100 [xfs], xfs_inode block 0x179b800
[ 101.353793] XFS (vda1): Unmount and run xfs_repair
[ 101.357102] XFS (vda1): First 64 bytes of corrupted metadata buffer:
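The guest kernel is reading inode metadata that is all zeroes and returning
error -117 (EUCLEAN, "structure needs cleaning"), and it explicitly asks for
an offline repair. As a first hedged step, assuming one of the affected VMs
can be put into rescue mode (for example with "openstack server rescue") or
booted from a rescue ISO so that the original root disk is not mounted, the
following can be run from the rescue shell; /dev/vda1 is the device named in
the messages above and may show up under a different name in rescue:

  # confirm which block device holds the guest's XFS root filesystem
  lsblk -f
  # dry run: report problems without changing anything
  xfs_repair -n /dev/vda1
  # only if the dry-run output looks sane and the risk is acceptable:
  # xfs_repair /dev/vda1

If the same deployment keeps producing 3 to 4 corrupted VMs on every run, the
repair mainly confirms the damage; the more interesting question is where it
comes from (see the image-chain checks after the nova-compute log below).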
Below are the nova-compute logs from the hypervisor where one of these VMs
was scheduled to spawn:
3T06:04:55Z,direct_url=<?>,disk_format='qcow2',id=c255bbbc-c8c3-462e-b827-1d35db08d283,min_disk=0,min_ram=0,name='vnf-scef-18.5',owner='36c70ae400e74fc2859f44815d0c9afb',properties=ImageMetaProps,protected=<?>,size=7143292928,status='active',tags=<?>,updated_at=2020-03-03T06:05:49Z,virtual_size=<?>,visibility=<?>)
rescue=None block_device_info={'swap': None, 'root_device_name':
u'/dev/vda', 'ephemerals': [], 'block_device_mapping': []} _get_guest_xml
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5419
2020-04-16 16:12:28.310 219284 DEBUG oslo_concurrency.processutils
[req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
--force-share" returned: 0 in 0.031s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:12:28.310 219284 DEBUG oslo_concurrency.processutils
[req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Running cmd
(subprocess): qemu-img resize
/var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
64424509440 execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:12:28.322 219284 DEBUG oslo_concurrency.processutils
[req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] CMD "qemu-img resize
/var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
64424509440" returned: 0 in 0.012s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:12:28.323 219284 DEBUG oslo_concurrency.lockutils
[req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Lock
"86692cd1e738b8df7cf1f951967c61e92222fc4c" released by
"nova.virt.libvirt.imagebackend.copy_qcow2_image" :: held 0.092s inner
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2020-04-16 16:12:28.323 219284 DEBUG oslo_concurrency.processutils
[req-5a53263c-928c-4a0c-a03c-8b698339efca cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:12:28.338 219284 DEBUG nova.virt.libvirt.driver
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] CPU mode 'host-model'
model '' was chosen, with extra flags: '' _get_guest_cpu_model_config
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3909
2020-04-16 16:12:28.338 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Getting desirable
topologies for flavor
Flavor(created_at=2020-03-23T11:20:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw:cpu_policy='dedicated',hw:mem_page_size='1048576'},flavorid='03e45d45-f4f4-4c24-8b70-678c3703402f',id=102,is_public=False,memory_mb=49152,name='dmdc-traffic-flavor',projects=<?>,root_gb=60,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=16)
and image_meta
ImageMeta(checksum='69f8c18e59db9669d669d04824507a82',container_format='bare',created_at=2020-03-03T06:07:18Z,direct_url=<?>,disk_format='qcow2',id=d31e39bc-c2b7-42ad-968f-7e782dd72943,min_disk=0,min_ram=0,name='vnf-dmdc-18.5.0',owner='36c70ae400e74fc2859f44815d0c9afb',properties=ImageMetaProps,protected=<?>,size=5569380352,status='active',tags=<?>,updated_at=2020-03-03T06:08:03Z,virtual_size=<?>,visibility=<?>),
allow threads: True _get_desirable_cpu_topologies
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:551
2020-04-16 16:12:28.339 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor limits
65536:65536:65536 _get_cpu_topology_constraints
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:297
2020-04-16 16:12:28.339 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Image limits
65536:65536:65536 _get_cpu_topology_constraints
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:308
2020-04-16 16:12:28.339 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor pref -1:-1:-1
_get_cpu_topology_constraints
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:331
2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Image pref -1:-1:-1
_get_cpu_topology_constraints
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:350
2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Chosen -1:-1:-1 limits
65536:65536:65536 _get_cpu_topology_constraints
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:379
2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Topology preferred
VirtCPUTopology(cores=-1,sockets=-1,threads=-1), maximum
VirtCPUTopology(cores=65536,sockets=65536,threads=65536)
_get_desirable_cpu_topologies
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:555
2020-04-16 16:12:28.340 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Build topologies for 16
vcpu(s) 16:16:16 _get_possible_cpu_topologies
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:418
2020-04-16 16:12:28.341 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Got 15 possible
topologies _get_possible_cpu_topologies
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:445
2020-04-16 16:12:28.341 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Possible topologies
[VirtCPUTopology(cores=1,sockets=16,threads=1),
VirtCPUTopology(cores=2,sockets=8,threads=1),
VirtCPUTopology(cores=4,sockets=4,threads=1),
VirtCPUTopology(cores=8,sockets=2,threads=1),
VirtCPUTopology(cores=16,sockets=1,threads=1),
VirtCPUTopology(cores=1,sockets=8,threads=2),
VirtCPUTopology(cores=2,sockets=4,threads=2),
VirtCPUTopology(cores=4,sockets=2,threads=2),
VirtCPUTopology(cores=8,sockets=1,threads=2),
VirtCPUTopology(cores=1,sockets=4,threads=4),
VirtCPUTopology(cores=2,sockets=2,threads=4),
VirtCPUTopology(cores=4,sockets=1,threads=4),
VirtCPUTopology(cores=1,sockets=2,threads=8),
VirtCPUTopology(cores=2,sockets=1,threads=8),
VirtCPUTopology(cores=1,sockets=1,threads=16)]
_get_desirable_cpu_topologies
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:560
2020-04-16 16:12:28.341 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Filtering topologies
best for 2 threads _get_desirable_cpu_topologies
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:578
2020-04-16 16:12:28.342 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Remaining possible
topologies [VirtCPUTopology(cores=1,sockets=8,threads=2),
VirtCPUTopology(cores=2,sockets=4,threads=2),
VirtCPUTopology(cores=4,sockets=2,threads=2),
VirtCPUTopology(cores=8,sockets=1,threads=2)] _get_desirable_cpu_topologies
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:583
2020-04-16 16:12:28.342 219284 DEBUG nova.virt.hardware
[req-a7ee4c3e-ea3a-4237-ba75-4c85411c9889 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Sorted desired
topologies [VirtCPUTopology(cores=1,sockets=8,threads=2),
VirtCPUTopology(cores=2,sockets=4,threads=2),
VirtCPUTopology(cores=4,sockets=2,threads=2),
VirtCPUTopology(cores=8,sockets=1,threads=2)] _get_desirable_cpu_topologies
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:586
2020-04-16 16:12:28.344 219284 DEBUG nova.virt.libvirt.driver
[req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] CPU mode 'host-model'
model '' was chosen, with extra flags: '' _get_guest_cpu_model_config
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:3909
2020-04-16 16:12:28.345 219284 DEBUG nova.virt.hardware
[req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Getting desirable
topologies for flavor
Flavor(created_at=2020-03-23T11:20:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw:cpu_policy='dedicated',hw:mem_page_size='1048576'},flavorid='d60b66d4-c0e0-4292-9113-1df2d94d57a5',id=90,is_public=False,memory_mb=57344,name='scef-traffic-flavor',projects=<?>,root_gb=60,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=16)
and image_meta
ImageMeta(checksum='3fe0e06194e0b5327ba38bb2367f760d',container_format='bare',created_at=2020-03-03T06:04:55Z,direct_url=<?>,disk_format='qcow2',id=c255bbbc-c8c3-462e-b827-1d35db08d283,min_disk=0,min_ram=0,name='vnf-scef-18.5',owner='36c70ae400e74fc2859f44815d0c9afb',properties=ImageMetaProps,protected=<?>,size=7143292928,status='active',tags=<?>,updated_at=2020-03-03T06:05:49Z,virtual_size=<?>,visibility=<?>),
allow threads: True _get_desirable_cpu_topologies
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:551
2020-04-16 16:12:28.345 219284 DEBUG nova.virt.hardware
[req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor limits
65536:65536:65536 _get_cpu_topology_constraints
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:297
2020-04-16 16:12:28.345 219284 DEBUG nova.virt.hardware
[req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Image limits
65536:65536:65536 _get_cpu_topology_constraints
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:308
2020-04-16 16:12:28.346 219284 DEBUG nova.virt.hardware
[req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Flavor pref -1:-1:-1
_get_cpu_topology_constraints
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:331
2020-04-16 16:12:28.346 219284 DEBUG nova.virt.hardware
[req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Image pref -1:-1:-1
_get_cpu_topology_constraints
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:350
2020-04-16 16:12:28.346 219284 DEBUG nova.virt.hardware
[req-48d96e5d-f071-44e7-94d2-e9fcb2a13087 cbabd9368dc24fea84fd2e43935fddfa
975a7d3840a141b0a20a9dc60e3da6cd - default default] Chosen -1:-1:-1 limits
65536:65536:65536 _get_cpu_topology_constraints
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/hardware.py:379
packages/nova/network/base_api.py:48
2020-04-16 16:34:48.580 219284 DEBUG oslo_concurrency.lockutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Releasing semaphore
"refresh_cache-f33b2602-ac5f-491e-bdb8-7e7f9376bcad" lock
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
2020-04-16 16:34:48.580 219284 DEBUG nova.compute.manager
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] [instance:
f33b2602-ac5f-491e-bdb8-7e7f9376bcad] Updated the network info_cache for
instance _heal_instance_info_cache
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/manager.py:6827
2020-04-16 16:34:50.580 219284 DEBUG oslo_service.periodic_task
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
ComputeManager._run_image_cache_manager_pass run_periodic_tasks
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2020-04-16 16:34:50.581 219284 DEBUG oslo_concurrency.lockutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
"storage-registry-lock" acquired by
"nova.virt.storage_users.do_register_storage_use" :: waited 0.000s inner
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2020-04-16 16:34:50.581 219284 DEBUG oslo_concurrency.lockutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
"storage-registry-lock" released by
"nova.virt.storage_users.do_register_storage_use" :: held 0.000s inner
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2020-04-16 16:34:50.581 219284 DEBUG oslo_concurrency.lockutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
"storage-registry-lock" acquired by
"nova.virt.storage_users.do_get_storage_users" :: waited 0.000s inner
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2020-04-16 16:34:50.582 219284 DEBUG oslo_concurrency.lockutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
"storage-registry-lock" released by
"nova.virt.storage_users.do_get_storage_users" :: held 0.000s inner
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Verify base images
_age_and_verify_cached_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:348
2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id yields
fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709
_age_and_verify_cached_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
b8783f95-138b-4265-a09d-55ec9d9ad35d yields fingerprint
b40b27e04896d063bc591b19642da8910da3eb1f _age_and_verify_cached_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
2020-04-16 16:34:50.628 219284 INFO nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
b8783f95-138b-4265-a09d-55ec9d9ad35d at
(/var/lib/nova/instances/_base/b40b27e04896d063bc591b19642da8910da3eb1f):
checking
2020-04-16 16:34:50.628 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
b8783f95-138b-4265-a09d-55ec9d9ad35d at
(/var/lib/nova/instances/_base/b40b27e04896d063bc591b19642da8910da3eb1f):
image is in use _mark_in_use
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
2020-04-16 16:34:50.629 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
c255bbbc-c8c3-462e-b827-1d35db08d283 yields fingerprint
86692cd1e738b8df7cf1f951967c61e92222fc4c _age_and_verify_cached_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
2020-04-16 16:34:50.630 219284 INFO nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
c255bbbc-c8c3-462e-b827-1d35db08d283 at
(/var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c):
checking
2020-04-16 16:34:50.630 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
c255bbbc-c8c3-462e-b827-1d35db08d283 at
(/var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c):
image is in use _mark_in_use
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
2020-04-16 16:34:50.630 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
d31e39bc-c2b7-42ad-968f-7e782dd72943 yields fingerprint
5c538ead16d8375e4890e8b9bb1aa080edc75f33 _age_and_verify_cached_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
2020-04-16 16:34:50.630 219284 INFO nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
d31e39bc-c2b7-42ad-968f-7e782dd72943 at
(/var/lib/nova/instances/_base/5c538ead16d8375e4890e8b9bb1aa080edc75f33):
checking
2020-04-16 16:34:50.630 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
d31e39bc-c2b7-42ad-968f-7e782dd72943 at
(/var/lib/nova/instances/_base/5c538ead16d8375e4890e8b9bb1aa080edc75f33):
image is in use _mark_in_use
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
2020-04-16 16:34:50.631 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Image id
b3af2bf0-055b-48fb-aedc-4683468a3f74 yields fingerprint
7af98c4d49b766d82eec8169a5c87be4eb56e5eb _age_and_verify_cached_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:355
2020-04-16 16:34:50.631 219284 INFO nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
b3af2bf0-055b-48fb-aedc-4683468a3f74 at
(/var/lib/nova/instances/_base/7af98c4d49b766d82eec8169a5c87be4eb56e5eb):
checking
2020-04-16 16:34:50.631 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] image
b3af2bf0-055b-48fb-aedc-4683468a3f74 at
(/var/lib/nova/instances/_base/7af98c4d49b766d82eec8169a5c87be4eb56e5eb):
image is in use _mark_in_use
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:329
2020-04-16 16:34:50.632 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
f33b2602-ac5f-491e-bdb8-7e7f9376bcad is a valid instance name
_list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
2020-04-16 16:34:50.632 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
f33b2602-ac5f-491e-bdb8-7e7f9376bcad has a disk file _list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
2020-04-16 16:34:50.632 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:34:50.663 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
--force-share" returned: 0 in 0.031s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:34:50.664 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
f33b2602-ac5f-491e-bdb8-7e7f9376bcad is backed by
b40b27e04896d063bc591b19642da8910da3eb1f _list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
2020-04-16 16:34:50.665 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
f117eb96-06a9-4c91-9c5c-111228e24d66 is a valid instance name
_list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
2020-04-16 16:34:50.665 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
f117eb96-06a9-4c91-9c5c-111228e24d66 has a disk file _list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
2020-04-16 16:34:50.665 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:34:50.694 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
--force-share" returned: 0 in 0.029s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:34:50.695 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
f117eb96-06a9-4c91-9c5c-111228e24d66 is backed by
7af98c4d49b766d82eec8169a5c87be4eb56e5eb _list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
2020-04-16 16:34:50.695 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
5ba39de3-f5f8-46a2-908d-c43b901e1696 is a valid instance name
_list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
2020-04-16 16:34:50.695 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
5ba39de3-f5f8-46a2-908d-c43b901e1696 has a disk file _list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
2020-04-16 16:34:50.695 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:34:50.723 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
--force-share" returned: 0 in 0.028s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:34:50.724 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
5ba39de3-f5f8-46a2-908d-c43b901e1696 is backed by
86692cd1e738b8df7cf1f951967c61e92222fc4c _list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
2020-04-16 16:34:50.724 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
d3d2837b-49c3-4822-b26b-4b3c03d344ae is a valid instance name
_list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
2020-04-16 16:34:50.724 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
d3d2837b-49c3-4822-b26b-4b3c03d344ae has a disk file _list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
2020-04-16 16:34:50.725 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:34:50.752 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
--force-share" returned: 0 in 0.028s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:34:50.753 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
d3d2837b-49c3-4822-b26b-4b3c03d344ae is backed by
5c538ead16d8375e4890e8b9bb1aa080edc75f33 _list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
2020-04-16 16:34:50.753 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
dfa80e78-ee02-46e5-ba7a-0874fa37da56 is a valid instance name
_list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:169
2020-04-16 16:34:50.754 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
dfa80e78-ee02-46e5-ba7a-0874fa37da56 has a disk file _list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:172
2020-04-16 16:34:50.754 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:34:50.781 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
--force-share" returned: 0 in 0.027s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:34:50.782 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
dfa80e78-ee02-46e5-ba7a-0874fa37da56 is backed by
86692cd1e738b8df7cf1f951967c61e92222fc4c _list_backing_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:187
2020-04-16 16:34:50.782 219284 INFO nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Active base files:
/var/lib/nova/instances/_base/b40b27e04896d063bc591b19642da8910da3eb1f
/var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c
/var/lib/nova/instances/_base/5c538ead16d8375e4890e8b9bb1aa080edc75f33
/var/lib/nova/instances/_base/7af98c4d49b766d82eec8169a5c87be4eb56e5eb
2020-04-16 16:34:50.783 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Verification complete
_age_and_verify_cached_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:384
2020-04-16 16:34:50.783 219284 DEBUG nova.virt.libvirt.imagecache
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Verify swap images
_age_and_verify_swap_images
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/imagecache.py:333
2020-04-16 16:35:01.887 219284 DEBUG oslo_service.periodic_task
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
ComputeManager.update_available_resource run_periodic_tasks
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2020-04-16 16:35:01.910 219284 DEBUG nova.compute.resource_tracker
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Auditing locally
available compute resources for KO1A3D02O131106CM07 (node:
KO1A3D02O131106CM07.openstack.local) update_available_resource
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:689
2020-04-16 16:35:02.009 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:35:02.040 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
--force-share" returned: 0 in 0.031s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:35:02.041 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:35:02.070 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/f33b2602-ac5f-491e-bdb8-7e7f9376bcad/disk
--force-share" returned: 0 in 0.029s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:35:02.070 219284 DEBUG nova.virt.libvirt.driver
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] skipping disk for
instance-00000636 as it does not have a path
_get_instance_disk_info_from_config
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7840
2020-04-16 16:35:02.071 219284 DEBUG nova.virt.libvirt.driver
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] skipping disk for
instance-00000636 as it does not have a path
_get_instance_disk_info_from_config
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:7840
2020-04-16 16:35:02.073 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:35:02.101 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
--force-share" returned: 0 in 0.028s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:35:02.101 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:35:02.129 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/d3d2837b-49c3-4822-b26b-4b3c03d344ae/disk
--force-share" returned: 0 in 0.028s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:35:02.132 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:35:02.159 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
--force-share" returned: 0 in 0.028s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:35:02.160 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:35:02.187 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
--force-share" returned: 0 in 0.027s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:35:02.190 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:35:02.217 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
--force-share" returned: 0 in 0.027s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:35:02.218 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:35:02.245 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/f117eb96-06a9-4c91-9c5c-111228e24d66/disk
--force-share" returned: 0 in 0.027s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:35:02.247 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:35:02.274 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
--force-share" returned: 0 in 0.027s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:35:02.275 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running cmd
(subprocess): /openstack/venvs/nova-17.1.12/bin/python -m
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
qemu-img info
/var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
--force-share execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2020-04-16 16:35:02.302 219284 DEBUG oslo_concurrency.processutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] CMD
"/openstack/venvs/nova-17.1.12/bin/python -m oslo_concurrency.prlimit
--as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
/var/lib/nova/instances/5ba39de3-f5f8-46a2-908d-c43b901e1696/disk
--force-share" returned: 0 in 0.027s execute
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2020-04-16 16:35:02.669 219284 DEBUG nova.compute.resource_tracker
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Hypervisor/Node
resource view: name=KO1A3D02O131106CM07.openstack.local free_ram=72406MB
free_disk=523GB free_vcpus=10 pci_devices=[{"dev_id": "pci_0000_3a_0a_7",
"product_id": "2047",
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:828
2020-04-16 16:35:02.670 219284 DEBUG oslo_concurrency.lockutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
"compute_resources" acquired by
"nova.compute.resource_tracker._update_available_resource" :: waited 0.000s
inner
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2020-04-16 16:35:02.729 219284 DEBUG nova.compute.resource_tracker
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Compute driver doesn't
require allocation refresh and we're on a compute host in a deployment that
only has compute hosts with Nova versions >=16 (Pike). Skipping
auto-correction of allocations. _update_usage_from_instances
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1247
2020-04-16 16:35:02.784 219284 DEBUG nova.compute.resource_tracker
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
5ba39de3-f5f8-46a2-908d-c43b901e1696 actively managed on this compute host
and has allocations in placement: {u'resources': {u'VCPU': 16,
u'MEMORY_MB': 57344, u'DISK_GB': 60}}.
_remove_deleted_instances_allocations
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
dfa80e78-ee02-46e5-ba7a-0874fa37da56 actively managed on this compute host
and has allocations in placement: {u'resources': {u'VCPU': 12,
u'MEMORY_MB': 24576, u'DISK_GB': 60}}.
_remove_deleted_instances_allocations
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
f33b2602-ac5f-491e-bdb8-7e7f9376bcad actively managed on this compute host
and has allocations in placement: {u'resources': {u'VCPU': 16,
u'MEMORY_MB': 49152, u'DISK_GB': 40}}.
_remove_deleted_instances_allocations
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
d3d2837b-49c3-4822-b26b-4b3c03d344ae actively managed on this compute host
and has allocations in placement: {u'resources': {u'VCPU': 16,
u'MEMORY_MB': 49152, u'DISK_GB': 60}}.
_remove_deleted_instances_allocations
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Instance
f117eb96-06a9-4c91-9c5c-111228e24d66 actively managed on this compute host
and has allocations in placement: {u'resources': {u'VCPU': 2, u'MEMORY_MB':
4096, u'DISK_GB': 20}}. _remove_deleted_instances_allocations
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1269
2020-04-16 16:35:02.785 219284 DEBUG nova.compute.resource_tracker
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Total usable vcpus:
72, total allocated vcpus: 62 _report_final_resource_view
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:844
2020-04-16 16:35:02.786 219284 INFO nova.compute.resource_tracker
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Final resource view:
name=KO1A3D02O131106CM07.openstack.local phys_ram=385391MB
used_ram=192512MB phys_disk=548GB used_disk=250GB total_vcpus=72
used_vcpus=62 pci_stats=[]
2020-04-16 16:35:02.814 219284 DEBUG nova.compute.resource_tracker
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Compute_service record
updated for KO1A3D02O131106CM07:KO1A3D02O131106CM07.openstack.local
_update_available_resource
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/resource_tracker.py:784
2020-04-16 16:35:02.814 219284 DEBUG oslo_concurrency.lockutils
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Lock
"compute_resources" released by
"nova.compute.resource_tracker._update_available_resource" :: held 0.144s
inner
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2020-04-16 16:35:37.612 219284 DEBUG oslo_service.periodic_task
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
ComputeManager._reclaim_queued_deletes run_periodic_tasks
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2020-04-16 16:35:37.613 219284 DEBUG nova.compute.manager
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -]
CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/nova/compute/manager.py:7438
2020-04-16 16:35:38.685 219284 DEBUG oslo_service.periodic_task
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
ComputeManager._poll_rebooting_instances run_periodic_tasks
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
2020-04-16 16:35:39.685 219284 DEBUG oslo_service.periodic_task
[req-dd4a2032-bbbd-4c0b-87ac-11605ffbf6c2 - - - - -] Running periodic task
ComputeManager._instance_usage_audit run_periodic_tasks
/openstack/venvs/nova-17.1.12/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
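On the compute side, the nova-compute log above shows that each instance disk
is a qcow2 overlay at /var/lib/nova/instances/<uuid>/disk, backed by a cached
base file under /var/lib/nova/instances/_base/. One thing worth ruling out is
a damaged overlay or cached base image, since every VM built from the same
base could then hit similar read errors. A hedged sketch of that check,
reusing an instance UUID, base fingerprint and image ID taken from the log
above (substitute the ones belonging to a VM that actually failed), run while
the instance is shut off so qemu-img does not hit image locks:

  INST=/var/lib/nova/instances/dfa80e78-ee02-46e5-ba7a-0874fa37da56/disk
  # which _base file backs this overlay, and is the overlay internally consistent?
  qemu-img info --backing-chain "$INST"
  qemu-img check "$INST"
  # basic sanity of the cached base file this instance is backed by
  qemu-img info /var/lib/nova/instances/_base/86692cd1e738b8df7cf1f951967c61e92222fc4c
  # re-download the source image and compare with the checksum recorded in the
  # log above (Glance's checksum field is an MD5 of the uploaded image):
  # 3fe0e06194e0b5327ba38bb2367f760d for image c255bbbc-c8c3-462e-b827-1d35db08d283
  openstack image save --file /tmp/vnf-scef-18.5.qcow2 c255bbbc-c8c3-462e-b827-1d35db08d283
  md5sum /tmp/vnf-scef-18.5.qcow2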
--
Best Regards
Pradeep Kumar
[Meeting] RDO meeting (2020-04-15) minutes
by Alfredo Moralejo Alonso
==============================
#rdo: RDO meeting - 2020-04-15
==============================
Meeting started by amoralej at 14:00:34 UTC. The full logs are available at
http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_04_15/2020/r...
Meeting summary
---------------
* roll call (amoralej, 14:00:41)
* ppc64le package versions in delorean repos (amoralej, 14:06:45)
* list of problematic packages http://paste.openstack.org/show/792163/
(amoralej, 14:09:57)
* LINK:
http://mirror.centos.org/centos/8/messaging/ppc64le/rabbitmq-38/
(amoralej, 14:13:53)
* LINK: https://cbs.centos.org/koji/buildinfo?buildID=28443
(amoralej, 14:18:00)
* LINK: http://paste.openstack.org/show/792165/ current list of repos
that get automatically configured for ppc, I can get contents of
them if needed but the delorean one is pretty long =) (baha,
14:18:46)
* golang-github-vbatts-tar-split-0.11.1-1.el8 cbs build missed ppc64
build (amoralej, 14:18:47)
* LINK: http://paste.openstack.org/show/792166/ here's a proper dnf
repolist and http://paste.openstack.org/show/792167/ is the
master-testing repo (baha, 14:21:13)
* Ussuri preparation update (amoralej, 14:29:27)
* reqcheck and python2/3 compatibility cleanup is in progress for
libraries and clients (amoralej, 14:30:09)
* topic for ussuri preparation is
https://review.rdoproject.org/r/#/q/topic:ussuri-branching+(status:open+O...
(amoralej, 14:30:34)
* RDO Trunk centos8-ussuri is being bootstrapped
https://trunk.rdoproject.org/centos8-ussuri-bootstrap/report.html
(amoralej, 14:33:00)
* rfolco will work in promotion pipeline for centos8-ussuri repo on
next sprint (amoralej, 14:35:16)
* CentOS7-master DLRN builder has been stopped (amoralej, 14:38:31)
* Update about CentOS8 (amoralej, 14:39:17)
* RDO Phase 1 for CentOS8 master is being set up (amoralej, 14:39:32)
* puppet is upgraded to puppet 6 in CentOS8 Ussuri (ykarel, 14:40:20)
* ACTION: jcapitao to chair next RDO meeting (amoralej, 14:44:02)
* open floor (amoralej, 14:44:57)
Meeting ended at 14:50:08 UTC.
Action items, by person
-----------------------
* jcapitao
* jcapitao to chair next RDO meeting
People present (lines said)
---------------------------
* amoralej (104)
* baha (18)
* ykarel (11)
* openstack (8)
* rdogerrit (7)
* jcapitao (5)
* pojadhav (2)
* rfolco (2)
* mjturek (2)
* chandankumar (1)
* rh-jelabarre (1)
[rdo-users] [Meeting] RDO meeting (2020-04-08) minutes
by YATIN KAREL
==============================
#rdo: RDO meeting - 2020-04-08
==============================
Meeting started by ykarel at 14:00:18 UTC. The full logs are available at
http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_04_08/2020/r...
Meeting summary
---------------
* roll call (ykarel, 14:00:41)
* Ussuri Updates (ykarel, 14:06:33)
* Ussuri release is coming; RDO has started release preparation for it
(ykarel, 14:06:56)
* LINK:
https://review.rdoproject.org/etherpad/p/ussuri-release-preparation
(ykarel, 14:07:09)
* LINK: https://trello.com/c/Fk3CzYKJ/736-ussuri-release-preparation
(ykarel, 14:07:17)
* LINK: https://review.rdoproject.org/r/#/q/topic:ussuri-branching
(ykarel, 14:07:33)
* Dropping CentOS7 for Ussuri (ykarel, 14:11:02)
* Ussuri is no longer supported on CentOS7; the Ussuri release will be
CentOS8 only (ykarel, 14:11:12)
* CentOS7 jobs are removed from kolla, packstack, puppet-openstack and
TripleO in Ussuri (ykarel, 14:11:28)
* if a project is still running CentOS7 jobs, it should not use those
in Ussuri (ykarel, 14:11:57)
* CentOS7 DLRN will be stopped in upcoming weeks. (ykarel, 14:12:14)
* Adding some epel packages in RDO CentOS8 repo for Ussuri (ykarel,
14:13:26)
* LINK:
https://review.rdoproject.org/etherpad/p/mock-el8-non-epel-repo
(ykarel, 14:13:47)
* LINK: https://trello.com/c/OmeqSzC8/734-use-mock-from-non-epel-repo
(ykarel, 14:13:51)
* chair for next week (ykarel, 14:19:04)
* ACTION: amoralej to chair next week (ykarel, 14:20:48)
* Open floor (ykarel, 14:21:24)
Meeting ended at 14:45:16 UTC.
Action items, by person
-----------------------
* amoralej
* amoralej to chair next week
People present (lines said)
---------------------------
* ykarel (70)
* weshay_ (13)
* amoralej (7)
* rdogerrit (6)
* openstack (5)
* jcapitao (3)
* chandankumar (1)
Generated by `MeetBot`_ 0.1.4
[RDO] Weekly Status for 2020-04-03
by Alfredo Moralejo Alonso
RDO Updates
Promotions
* Latest promotions (TripleO CI):
* Master CentOS8: 1st April
* Train: 2nd April
* Stein: 1st April
Packages
* ansible-tripleo-ipa is added to Ussuri and being added in Train
* python-ovn-octavia-provider is being added to Ussuri
* Ansible is being updated to 2.9.6 in Ussuri
* python-etcd3gw is being updated to 0.2.5 in Ussuri and Train
* libsodium is updated to 1.0.18 in Train
* octavia-tempest-plugin is being updated to 0.3.0 in Train
* subunit is updated to 1.4.0 in Ussuri and Train
* python-XStatic-mdi has been updated in Ussuri
* ndisc6 is being added to Ussuri on CentOS8 deps
* puppet is being updated to 6.9.0 in CentOS 8
Vexxhost
* ci.centos weirdo jobs are migrated to vexxhost, Other RDO jobs are also
being evaluated on vexxhost
* https://review.rdoproject.org/r/#/q/topic:vexxhost
* Review.rdoproject.org and softwarefactory-project.io have been
successfully migrated from rdo-cloud to vexxhost.
Other
* CentOS8 Preparation:
* Good progress in adding CentOS8 jobs by TripleO CI team:
* https://hackmd.io/HrQd03c9SxOMtFPFrq50tg?view
* Advanced Virtualization packages have been built by the Virt SIG in CBS and
it's expected that we will be able to start using them soon.
* Some epel packages are being added to RDO CentOS8 repos
* https://trello.com/c/OmeqSzC8/734-use-mock-from-non-epel-repo
* RabbitMQ has been built for CentOS 8 by the Messaging SIG. RDO will install
it from their repos:
* https://cbs.centos.org/koji/builds?tagID=2036
On behalf of RDO
[Meeting] RDO Meeting (2020-04-01) minutes
by Joel Capitao
==============================
#rdo: RDO meeting - 2020-04-01
==============================
Meeting started by jcapitao at 14:00:35 UTC. The full logs are available at
http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_04_01/2020/r...
Meeting summary
---------------
* roll call (jcapitao, 14:01:12)
* Rocky moving to Extended Maintenance (jcapitao, 14:05:51)
* most projects are already in Extended Maintenance in Rocky upstream
branch (amoralej, 14:06:57)
* LINK: https://review.opendev.org/#/q/topic:rocky-em (amoralej,
14:07:05)
* CloudSIG Rocky will be EOL soon (amoralej, 14:08:13)
* RDO Trunk rocky will be alive and following stable/rocky branches
(amoralej, 14:08:58)
* rdopkg reqcheck (jcapitao, 14:11:15)
* LINK: https://review.rdoproject.org/r/#/c/26165/ (jcapitao,
14:13:55)
* Ussuri GA is coming (jcapitao, 14:40:45)
* next week is deps freeze (amoralej, 14:43:42)
* RC1 is on April 20th week (amoralej, 14:43:51)
* ACTION: jcapitao to create new trello card for Ussuri release
(amoralej, 14:47:48)
* chair for next week (jcapitao, 14:52:10)
* ACTION: ykarel to chair next week (jcapitao, 14:52:47)
* open floor (jcapitao, 14:52:58)
* LINK:
https://review.rdoproject.org/r/#/q/owner:%22Tobias+Urdin+%253Ctobias.urd...
(tobias-urdin, 14:53:34)
Meeting ended at 15:01:27 UTC.
Action items, by person
-----------------------
* jcapitao
* jcapitao to create new trello card for Ussuri release
* ykarel
* ykarel to chair next week
People present (lines said)
---------------------------
* amoralej (78)
* jcapitao (50)
* ykarel (27)
* rdogerrit (9)
* openstack (6)
* tobias-urdin (2)
* PagliaccisCloud (1)
* jpena (1)
Generated by `MeetBot`_ 0.1.4