[Rdo-list] Puppet glusterers unite

James Shubin jshubin at redhat.com
Tue Apr 8 22:14:39 UTC 2014


On Tue, 2014-04-08 at 17:42 -0400, John Eckersberg wrote:
> Greetings,
> 
> For those of you in the To: line, I believe you are all doing something
> with gluster and puppet at the moment.  For anyone else on rdo-list that
> might be interested, jump in :)
I'm not on rdo-list, so be sure to cc me if you want me to see the
messages :)

> 
> Primarily I want to get everyone talking to make sure we don't step on
> each other's toes.  I know James has done some great work with the
> puppet-gluster module,
Thanks! For reference, the upstream project is:
https://github.com/purpleidea/puppet-gluster
I've got a lot of articles and background here:
https://ttboj.wordpress.com/


>  and Gilles is currently working to switch off of
> the now-deprecated puppet-openstack-storage module and onto
> puppet-gluster.  Crag, Jiří, and myself are working gluster-related
> bugs.  So let's keep in touch.
> 
> I'm working to configure libgfapi support on nova compute nodes.  In the
> old gluster module, there was a gluster::client class that just
> installed the required glusterfs-fuse package.  This class is used by
> astapor in a few places (compute/cinder/glance).  However there's no
> gluster::client class in the new module, so we'll need to remedy that
> somehow.
I think you've got the right idea in the next paragraph. I'll add some
more info...

> 
> There is a class, gluster::mount::base, that ensures the packages are
> installed, and that class is used by each instance of gluster::mount.
> I'd like to reuse some of this, but I don't think we need all of it on
> the compute nodes (really we just need to install glusterfs-api).  The
> simple way would be to create a new class glusterfs::apiclient that just
> installs the package, and include that for the nova compute case.
Can you detail what functionality is missing and needed? Please be sure
to state whether this is relevant upstream (GlusterFS) or only
downstream (RHS/RHEL OSP, etc.). Is it just installing one package, or
is there anything else?
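If it really is just the one package, the class you describe would be tiny. Something like this (the class name and package name are taken from your paragraph above; this is an untested sketch, not something in the module today):

```puppet
# Hypothetical minimal class for nova compute nodes, as proposed above.
# Installs only the libgfapi client library, not the FUSE client.
class glusterfs::apiclient {
  package { 'glusterfs-api':
    ensure => installed,
  }
}
```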


> However I'm concerned with the other places we were previously using
> gluster::client.  Can we use the new gluster::mount define to replace
> all of these instances?  Or are we going to need to refactor in those
> places as well?  I'd like to have some idea where this is all going
> before I start ripping it apart.
Okay here's the info you're probably looking for:

gluster::mount is the thing you probably want. It is probably
equivalent to what you might call gluster::client (although I don't know
what the old gluster::client does). It pulls in gluster::mount::base
(which you mentioned above), which handles the dependencies.
gluster::mount is a type (see the DOCUMENTATION file). If it's missing
any features found in your gluster::client, please let me know!
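For reference, using it looks roughly like this (an illustrative sketch; check the DOCUMENTATION file for the exact parameter names and defaults in your version of the module):

```puppet
# Hypothetical usage of the gluster::mount type: mount the 'puppet'
# volume from a gluster server at /mnt/gluster/puppet, read-write.
gluster::mount { '/mnt/gluster/puppet':
  server  => 'annex1.example.com:/puppet',
  rw      => true,
  mounted => true,
}
```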

As for gluster::client (where is that, anyways?): gluster::client
doesn't officially exist upstream yet. It's something that I was
mid-hack on when Red Hat hired me. Basically it does "advanced client
mounting magic". I doubt you need this for anything yet, but it will be
a cool feature when it comes out. The reason it's "advanced" is that
puppet-gluster (besides being a fully working, awesome way to do
glusterfs) is also a bit of a research module for me, which I use to
demonstrate some new and advanced puppet concepts.

> 
> Thoughts?
If you have any feature requests, bugs, or complaints, please let me
know!

One HUGE caveat: Red Hat doesn't currently seem to have a build of
PuppetDB. See: https://bugzilla.redhat.com/show_bug.cgi?id=1068867
MANY fancy current and future features of puppet-gluster (and many other
puppet modules in the world) need some way to do "exported resources".
If you have any resources or magic spells to help solve this problem,
please help out! Basically, the problem is the dependency hell of
getting everything needed to build puppetdb into Fedora. (This is a
repeat of previously discussed information for some people in the cc.
Please put any discussion of this into the BZ and not into email, so we
keep track of all comments.)
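For anyone unfamiliar with why PuppetDB matters here: exported resources let one node declare a resource that other nodes can then collect, and PuppetDB is the backend that stores them between runs. The standard Puppet pattern looks like this (a generic illustration, not puppet-gluster code):

```puppet
# Each node exports a host entry describing itself (the @@ prefix
# marks it as exported, i.e. stored in PuppetDB rather than applied).
@@host { $::fqdn:
  ip => $::ipaddress,
}

# Every node then collects all exported host entries, so each machine
# ends up with /etc/hosts entries for its peers. Without PuppetDB,
# this collection step has nothing to collect from.
Host <<| |>>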

> 
> -John
> 
HTH,
James




