[Rdo-list] Puppet glusterers unite

Jiří Stránský jistr at redhat.com
Wed Apr 9 08:54:38 UTC 2014


On 9.4.2014 09:29, Crag Wolfe wrote:
> On 04/08/2014 02:42 PM, John Eckersberg wrote:
>> Greetings,
>>
>> For those of you in the To: line, I believe you are all doing something
>> with gluster and puppet at the moment.  For anyone else on rdo-list that
>> might be interested, jump in :)
>>
>> Primarily I want to get everyone talking to make sure we don't step on
>> each other's toes.  I know James has done some great work with the
>> puppet-gluster module, and Gilles is currently working to switch from
>> the now-deprecated puppet-openstack-storage module to puppet-gluster.
>> Crag, Jiří, and I are working on gluster-related bugs.  So let's keep
>> in touch.
>>
>> I'm working to configure libgfapi support on nova compute nodes.  In the
>> old gluster module, there was a gluster::client class that just
>> installed the required glusterfs-fuse package.  This class is used by
>> astapor in a few places (compute/cinder/glance).  However, there's no
>> gluster::client class in the new module, so we'll need to remedy that
>> somehow.
>>
>> There is a class, gluster::mount::base, that ensures the packages are
>> installed, and that class is used by each instance of gluster::mount.
>> I'd like to reuse some of this, but I don't think we need all of it on
>> the compute nodes (really we just need to install glusterfs-api).  The
>> simple way would be to create a new class glusterfs::apiclient that just
>> installs the package, and include that for the nova compute case.
>> However, I'm concerned about the other places where we were previously
>> using gluster::client.  Can we use the new gluster::mount define to
>> replace all of these instances?  Or are we going to need to refactor in
>> those places as well?  I'd like to have some idea of where this is all
>> going before I start ripping it apart.
>>
>> Thoughts?
>>
>> -John
>>
> [Also CC'ing Steve and Jacob, who have worked a bit with gluster /
> foreman recently]
>
> In the context of the HA-all-in-one-controller host group, I believe we
> would just need to include the gluster::mount::base class so that we can
> mount glusterfs volumes.  Pacemaker would be responsible for mounting the
> shared storage, and would do that the way Steve illustrated here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1064050#c4
>
> Not that that helps clarify any of your above questions.  :-)
>
> --Crag
>
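
As for John's glusterfs::apiclient idea -- just to make sure we're
imagining the same thing, I'd expect it to be about as small as the
following. This is a rough, untested sketch (the old gluster::client did
essentially the same thing, just with glusterfs-fuse):

# Hypothetical minimal class along the lines John proposed -- it only
# makes sure the libgfapi package is present on nova compute nodes.
class glusterfs::apiclient (
  $package_ensure = 'installed',
) {
  package { 'glusterfs-api':
    ensure => $package_ensure,
  }
}

That alone obviously doesn't answer whether gluster::mount can replace
the remaining gluster::client usages, but it should be enough for the
libgfapi-on-compute case.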

It seems we wouldn't use Pacemaker for mounting GlusterFS when it's used
with Cinder. (The BZ above is about Glance, which might behave
differently.) The info I was able to dig up suggests that the GlusterFS
driver for Cinder expects to do the mounting by itself [1,2]. But I guess
we'd still need gluster::mount::base.
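
So for a Cinder volume node I'd picture roughly the following:
gluster::mount::base just to get the client packages in place, and the
GlusterFS driver configured via puppet-cinder, which then mounts the
shares on its own. Treat this as a sketch rather than something tested --
the parameter name is from memory and the host/volume names are made up:

# Rough sketch: install the GlusterFS client bits and let the Cinder
# GlusterFS driver handle the actual mounting of the shares.
node 'cinder-volume.example.com' {
  include gluster::mount::base

  class { 'cinder::volume::glusterfs':
    glusterfs_shares => ['gluster-host.example.com:/cinder-vol'],
  }
}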

Jirka

[1] http://www.gluster.org/community/documentation/index.php/GlusterFS_Cinder
[2] https://github.com/stackforge/puppet-cinder/blob/164163a7a267ae4139e2d97bab1a385a6da2ac5f/manifests/volume/glusterfs.pp#L31-L33



