[rdo-list] Overprovisioning of local disk storage possible?

Alvaro Aleman alv2412 at googlemail.com
Wed Mar 8 14:29:26 UTC 2017


Hello people,

After setting up an Ocata cloud and spinning up a few instances in it, all
of them using local storage, I noticed I wasn't able to create more than a
handful of instances, although the cloud had much more capacity.

After some debugging I noticed that the compute node reports more disk space
as used than is actually available, which causes an exception in the
placement service running under nova-api:


Controller:

2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation [req-ea35f8ee-b5fc-4502-8b5e-8400e7a275b7 - - - - -] Bad inventory
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation Traceback (most recent call last):
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation   File "/usr/lib/python2.7/site-packages/nova/api/openstack/placement/handlers/allocation.py", line 254, in set_allocations
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation     allocations.create_all()
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation   File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", line 1184, in create_all
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation     self._set_allocations(self._context, self.objects)
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 894, in wrapper
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation     return fn(*args, **kwargs)
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation   File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", line 1146, in _set_allocations
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation     before_gens = _check_capacity_exceeded(conn, allocs)
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation   File "/usr/lib/python2.7/site-packages/nova/objects/resource_provider.py", line 1074, in _check_capacity_exceeded
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation     resource_provider=rp_uuid)
2017-03-08 14:48:53.271 26649 ERROR nova.api.openstack.placement.handlers.allocation InvalidAllocationCapacityExceeded: Unable to create allocation for 'DISK_GB' on resource provider '19315d4c-3834-4835-90b1-a70639f44f9b'. The requested amount would exceed the capacity.
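
If I read the traceback right, the check in _check_capacity_exceeded boils
down to something like the following -- my own paraphrase to illustrate the
failure, not the actual Nova code; the numbers plugged in are from my node:

def capacity_exceeded(total, reserved, allocation_ratio, used, requested):
    # Usable capacity as placement sees it: raw inventory minus the
    # reserved amount, scaled by the allocation ratio.
    capacity = (total - reserved) * allocation_ratio
    # An allocation is rejected when existing usage plus the requested
    # amount would exceed that capacity.
    return used + requested > capacity

# With 48GB of physical disk, a ratio of 1.0 and 60GB already accounted
# as used, any further DISK_GB request trips the check and raises
# InvalidAllocationCapacityExceeded:
print(capacity_exceeded(total=48, reserved=0, allocation_ratio=1.0,
                        used=60, requested=20))  # True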


Compute:
2017-03-08 14:43:47.209 3386 INFO nova.compute.resource_tracker [req-1486e04b-7548-4b17-b095-b5b08768ba67 - - - - -] Final resource view: name=redacted_hostname phys_ram=12287MB used_ram=6656MB phys_disk=48GB used_disk=60GB total_vcpus=4 used_vcpus=3 pci_stats=[]
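
To illustrate where the 60GB comes from, this is what I believe the resource
tracker is doing; the per-instance disk sizes below are made up for the
example, only the sum matches my node:

# Root disk sizes (GB) of the three instances on the node -- hypothetical
# values, chosen so that they add up to the reported 60GB.
instance_root_disks_gb = [20, 20, 20]

# The resource tracker seems to report the sum of the allocated virtual
# disk sizes as used_disk, regardless of how much the (thin-provisioned)
# images actually occupy on the filesystem.
used_disk_gb = sum(instance_root_disks_gb)
phys_disk_gb = 48

print(used_disk_gb > phys_disk_gb)  # True -> "used" exceeds the physical disk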

The underlying issue (more disk space reported as used than is available)
does not seem to be new; people already asked about it back in 2014 [1],
which is why I am asking here instead of filing a Bugzilla report.

I assume this is caused by Nova counting the full virtual disk size of all
instances on the host as 'used_disk', which in turn means it is not
possible to overprovision local storage. Is there a configuration setting
or a workaround for this?
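
For reference, a minimal sketch of what I would try, assuming the
disk_allocation_ratio option on the compute nodes is still honoured by the
placement inventory in Ocata (unverified, so please correct me if this is
the wrong knob):

# /etc/nova/nova.conf on the compute node (sketch, unverified)
[DEFAULT]
# Advertise twice the physical disk capacity to the scheduler/placement,
# i.e. allow 2:1 overcommit of local storage.
disk_allocation_ratio = 2.0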


BR
Alvaro Aleman


[1] https://ask.openstack.org/en/question/32919/hypervisor-summary-confused/