[rdo-users] [rdo-dev] Cinder NFS backend issue

Cody codeology.lab at gmail.com
Tue Oct 9 13:31:43 UTC 2018


For the migration test, I used instances (cirros) without any volume
attached, since Cinder was not available.

On Tue, Oct 9, 2018 at 9:28 AM Cody <codeology.lab at gmail.com> wrote:

> Hi Tzach,
>
> Thank you very much for verifying and reporting the bug.
>
> I have since moved on to deploy with Ceph, but the Cinder volume service is
> still unavailable. Perhaps the issue involves more than just NFS?
>
> With Ceph, the pools for Nova (vms) and Glance (images) are working
> fine; only the Cinder (volumes) pool has problems.
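>
> A quick way to confirm which piece is down, assuming admin credentials are
> sourced on the undercloud or a controller:
>
> $ openstack volume service list
>
> In a healthy deployment the cinder-volume entry shows state "up".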
>
> [heat-admin at overcloud-controller-2 ~]$ ceph status
>   cluster:
>     id:     7900258e-cb68-11e8-b7cf-002590a2d123
>     health: HEALTH_WARN
>             application not enabled on 1 pool(s)
>
>   services:
>     mon: 3 daemons, quorum
> overcloud-controller-2,overcloud-controller-0,overcloud-controller-1
>     mgr: overcloud-controller-2(active), standbys: overcloud-controller-0,
> overcloud-controller-1
>     osd: 6 osds: 6 up, 6 in
>
>   data:
>     pools:   4 pools, 240 pgs
>     objects: 8 objects, 12.1MiB
>     usage:   687MiB used, 10.9TiB / 10.9TiB avail
>     pgs:     240 active+clean
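>
> The HEALTH_WARN above ("application not enabled on 1 pool(s)") is usually
> just informational and can be cleared once the untagged pool is identified;
> a sketch, assuming it turns out to be the Cinder volumes pool:
>
> $ ceph health detail
> $ ceph osd pool application enable volumes rbd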
>
> My testing environment is as follows:
>
> 3 controller nodes
> 3 ceph storage nodes (non-collocated, 1 SSD for journal + 2 HDDs for OSD
> on each node)
> 2 compute nodes
>
> The deployment is meant to test an HA cluster (both controller HA and
> instance HA) with DVR. Cold and live migration will work only after I
> address this issue [1]. Other than that, the Cinder volume service is the
> only major issue for now.
>
> [1] https://lists.rdoproject.org/pipermail/dev/2018-October/008934.html
>
>
> Thank you,
> Cody
>
>
>
>
>
>
>
> On Mon, Oct 8, 2018 at 3:44 PM Tzach Shefi <tshefi at redhat.com> wrote:
>
>> Hey Cody,
>>
>> The bad news: after our email exchange I figured I'd check the status on
>> Rocky, and it is still not working.
>> I've thus opened two new bugs; I'll clone these back to Queens and Pike as
>> well.
>> https://bugzilla.redhat.com/show_bug.cgi?id=1637014
>> https://bugzilla.redhat.com/show_bug.cgi?id=1637030
>>
>> Anyway, regarding debugging: as you rightly mentioned, Docker is involved.
>> As of Queens, Cinder is containerized, meaning:
>> Log location: /var/log/containers/cinder/cinder-volume.log
>> Config file path:
>> /var/lib/config-data/puppet-generated/cinder/etc/cinder/
>> So nfs_shares should reside under this path ^.
>>
>> However, if you log in to Cinder's volume container:
>> # docker ps | grep cinder     -> something like openstack-cinder-volume-docker-0
>> # docker exec -it openstack-cinder-volume-docker-0 /bin/bash
>> you should see cinder.conf plus the shares file under /etc/cinder/.
>> Inside the container the /etc/cinder/ path is valid; outside the container
>> the mapping goes to /var/lib/config-data..
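>>
>> For example, a quick sanity check from the controller host itself (a rough
>> sketch; the paths follow the default Queens container layout noted above):
>>
>> # ls /var/lib/config-data/puppet-generated/cinder/etc/cinder/
>> # grep -E 'nfs|volume_driver' /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf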
>>
>> I'd be more than happy to take a look at your volume log,
>> should you be willing to share it in public or private with me.
>>
>> Tzach
>>
>>
>>
>> On Sun, Oct 7, 2018 at 7:38 PM Cody <codeology.lab at gmail.com> wrote:
>>
>>> Hi Tzach,
>>>
>>> Thank you for getting back! I tested it again with
>>> CinderNfsMountOptions: 'rw,sync,nosharecache' in Queens, but still to
>>> no avail.
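>>>
>>> One hedged way to confirm the new option actually reached the service is
>>> to check the generated config on a controller (assuming the TripleO
>>> parameter maps to the driver's nfs_mount_options setting):
>>>
>>> $ sudo grep nfs_mount_options /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf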
>>>
>>> I also noticed that the file /etc/cinder/nfs_shares does not exist on
>>> any controller, although cinder.conf has "#nfs_shares_config =
>>> /etc/cinder/nfs_shares". I am not sure whether this is normal when using
>>> NFS with the containerized Cinder service.
>>>
>>> Thank you,
>>> Cody
>>>
>>>
>>> On Sun, Oct 7, 2018 at 5:03 AM Tzach Shefi <tshefi at redhat.com> wrote:
>>> >
>>> >
>>> > Hey Cody,
>>> >
>>> > I recall hitting a related problem when both Glance and Cinder use the
>>> > same NFS server: even though each service uses its own share, if both
>>> > shares reside on the same NFS server you may hit an SELinux issue.
>>> >
>>> > The original bug I hit/reported, was closed EOL.
>>> > https://bugzilla.redhat.com/show_bug.cgi?id=1491597
>>> > Notice the 4th comment; adding the nosharecache mount option helped me.
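>>> >
>>> > Whether the option actually took effect can be checked on a controller
>>> > with something like the following (a rough check; exact mount points vary):
>>> >
>>> > # mount | grep -E 'glance|cinder'
>>> >
>>> > The Glance and Cinder shares should then show nosharecache in their options.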
>>> >
>>> > I'll re-check this on Queens as well; it may need a new bug.
>>> > Thanks
>>> >
>>> > Tzach
>>> >
>>> >
>>> > On Sun, Oct 7, 2018 at 6:32 AM Cody <codeology.lab at gmail.com> wrote:
>>> >>
>>> >> Hi everyone,
>>> >>
>>> >> I have an issue with using TripleO (Queens) to set up an NFS backend
>>> >> for Cinder.
>>> >>
>>> >> My storage.yaml is as follows:
>>> >>
>>> >> parameter_defaults:
>>> >>   CinderEnableIscsiBackend: false
>>> >>   CinderEnableRbdBackend: false
>>> >>   CinderEnableNfsBackend: true
>>> >>   NovaEnableRbdBackend: false
>>> >>   GlanceBackend: 'file'
>>> >>
>>> >>   CinderNfsMountOptions: 'rw,sync'
>>> >>   CinderNfsServers: '192.168.24.1:/export/cinder'
>>> >>
>>> >>   GlanceNfsEnabled: true
>>> >>   GlanceNfsShare: '192.168.24.1:/export/glance'
>>> >>   GlanceNfsOptions: 'rw,sync,context=system_u:object_r:glance_var_lib_t:s0'
>>> >>
>>> >>   NovaNfsEnabled: true
>>> >>   NovaNfsShare: '192.168.24.1:/export/nova'
>>> >>   NovaNfsOptions: 'rw,sync,context=system_u:object_r:nfs_t:s0'
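>>> >>
>>> >> (For reference, this file is then included in the deployment as an extra
>>> >> environment file, roughly like the following; the other environment files
>>> >> are left out here:)
>>> >>
>>> >> $ openstack overcloud deploy --templates -e storage.yaml ...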
>>> >>
>>> >> I used the undercloud node as an NFS server for testing purposes.
>>> >> Iptables is set accordingly. The /etc/exports file on the NFS server is
>>> >> as follows:
>>> >>
>>> >> /export/nova 192.168.24.0/24(rw,no_root_squash)
>>> >> /export/glance 192.168.24.0/24(rw,no_root_squash)
>>> >> /export/cinder 192.168.24.0/24(rw,no_root_squash)
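>>> >>
>>> >> The exports can be re-read on the server and double-checked from any
>>> >> overcloud node with the usual NFS tooling, e.g.:
>>> >>
>>> >> # exportfs -ra                 (on the NFS server)
>>> >> # showmount -e 192.168.24.1    (from an overcloud node)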
>>> >>
>>> >> All three folders are set to chmod 777. Nova and Glance work as
>>> >> expected; only Cinder remains problematic. I can try to upload volumes
>>> >> from the overcloud, but nothing shows up in the cinder folder, and
>>> >> Horizon gives errors such as "Unable to retrieve volume" and "Unable to
>>> >> retrieve volume snapshots". Did I miss something here? I do plan to use
>>> >> Ceph later, but I wish to use NFS for now to test migration and
>>> >> failover. Any help would be appreciated. Thank you!
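>>> >>
>>> >> A minimal way to reproduce this from the CLI, as a rough sketch (the
>>> >> volume name is arbitrary):
>>> >>
>>> >> $ openstack volume create --size 1 testvol
>>> >> $ openstack volume show testvol -f value -c status
>>> >>
>>> >> With a broken backend the volume typically sticks in "creating" or goes to
>>> >> "error", and nothing appears under /export/cinder on the NFS server.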
>>> >>
>>> >> Best regards,
>>> >> Cody
>>> >>
>>> >>
>>> >>
>>> >> _______________________________________________
>>> >> dev mailing list
>>> >> dev at lists.rdoproject.org
>>> >> http://lists.rdoproject.org/mailman/listinfo/dev
>>> >>
>>> >> To unsubscribe: dev-unsubscribe at lists.rdoproject.org
>>> >
>>> >
>>> >
>>> > --
>>> >
>>> > Tzach Shefi
>>> >
>>> > Senior Quality Engineer, RHCSA
>>> >
>>> > Red Hat
>>> >
>>> > tshefi at redhat.com    M: +972-54-4701080     IM: tshefi
>>>
>>
>>
>> --
>>
>> Tzach Shefi
>>
>> Senior Quality Engineer, RHCSA
>>
>> Red Hat
>>
>> <https://www.redhat.com>
>>
>> tshefi at redhat.com    M: +972-54-4701080     IM: tshefi
>> <https://red.ht/sig>
>>
>