[Rdo-list] [Softwarefactory-dev] Gitnetics integration, or how can we automate upstream changes polling

Gabriele Cerami gcerami at redhat.com
Fri Feb 5 14:06:26 UTC 2016


Hi, 

Apologies for cross-posting.
I don't like reply branching, so I'm answering all replies in one
big email.

On Thu, 2016-02-04 at 11:30 +0100, Fabien Boucher wrote:

> In RPM Factory patches are handled in form of Gerrit review (never
> merged) and

If the patches are never merged, how do you know they don't create
conflicts? What is the merge base of the patches? Are the patches never
rebased on each other, only onto a common base?
Since SF is package-centered, I know this is less relevant, because all
the patches end up in the package at some point, and that is what you test.

> About the master version of RDO, AFAIK yes Delorean does not use any
> patches when
> trying to use rpm-master distgit to test the packaging against each
> upstream changes
> so there is not need to run unit test in that case IMO.

Unit tests, no. Acceptance tests, yes. Upstream CI will never be
enough, because the projects always download bleeding-edge versions of
their dependencies, which may not be present in the distro packages, or
may be present at a different version.

> After, in the case we want to sync be doing a cherry-pick of each
> upstream changes
> on top of a modified version of a "mirror" repo/branch, yes I agree
> it is
> safer to run unit test during the sync and I understand Gitnetics
> will
> help in that way. 

Not on top, and not unit tests. Gitnetics, in one of its modes, merges
the clean upstream branch with the local patches branch, and creates a
third branch that is the merge of the two. In the other mode, it only
proposes (without merging) backports to a -patches branch.
Currently OPM-CI runs the upstream acceptance tests, but forces the
modules to use the dependencies available from distro packages, followed
by something called stability tests, which check that the modules deploy
things the way we expect them to (these usually catch cases where a
variable changes its default).
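
To give an idea of the first mode in plain git, it boils down to roughly
this (branch names are only illustrative, not the actual Gitnetics code):

  # hypothetical branch names, just to show the merge mode
  git fetch upstream
  git checkout -b liberty-merged origin/liberty-patches
  git merge upstream/stable/liberty   # clean upstream + local patches
  # the result lives on a third branch; neither the upstream mirror nor
  # the -patches branch is ever rewritten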

> Alright do you think it possible to have such a workflow : ?
> - Gitnetics is triggered on a slave by a periodic job to sync (by
> cherry-picking)
>   each "mirror" repositories (configured to be managed by Gitnetics).
> - Each upstream changes are one by one cherry-picked on top of the
> liberty-patches
>   branch and triggering a unit test job attached to that
> project/branch (already
>   the case with RPM Factory). If unit test pass then the Gerrit
> review is merged.
> - When a cherry-pick cannot apply a patch on top of the branch then
> Gitnetics create
>   the notification for warning the maintainer about the sync issue
> and let him take
>   an action

This is exactly what happens, except that Gitnetics can run both from
periodic jobs and triggered by Gerrit events, and in its "lock and
backports" mode there is no automatic merging: everything is tested on a
temporary branch, and only if the unit tests pass is the change proposed
as a backport to the branch. If something fails, a developer may just
add DISCARD as a comment to the paused review, and everything is
cancelled.
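
Per upstream change, that "lock and backports" flow looks roughly like
this (branch and command names are only illustrative, not the actual
Gitnetics implementation):

  # test the upstream change on a throwaway branch on top of -patches
  git checkout -b tmp-backport-test origin/liberty-patches
  git cherry-pick <upstream-commit>   # conflict -> notify the maintainer
  tox                                 # or whatever unit test job applies
  # only if the tests pass, propose the cherry-pick as a Gerrit review
  # against the -patches branch, where it waits for approval or DISCARD
  git push origin HEAD:refs/for/liberty-patches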

On Thu, 2016-02-04 at 12:31 +0100, Haïkel wrote:

> I do not have the whole context, so this is just few comments:
> 1. we're trying to split the OPM behemoth and goal is to converge to
> the point where we don't need any downstream patches.

Not having downstream patches is a great goal to have; not having a
backup plan for the one time a patch is really needed, and finding out
you don't have the infrastructure to handle it, is a bad move.

> 2. until then, we can just use our own mirrors instead of upstream
> 3. depending how much time, it'll take, it would be worth considering
> having vanilla puppet modules packages available for testing
> Moreover associated w/ delorean, it will help us to catch glitches as
> soon as puppet modules gets broken.

One of the big things I'm trying to understand here is how much testing
we want, on OPM and in general.
OPM-CI currently tries to run tests after each upstream change is merged
upstream, but before the same change is merged downstream. After this, a
package may be created and tested with the rest of the puppet modules,
but testing before the downstream merge catches errors very early in the
chain, and we immediately know which package caused the havoc, because
we are testing only one, not the entire set of modules at once.

> 4. I don't like cherry-pick upstream commits on top of own branches,
> it only encourage us to widen the gap with upstream.

I think everyone here is talking about cherry-picking into -patches
branches. Main branches downstream are always an exact copy of the
equivalent upstream branch.
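
For example, in a downstream "mirror" repo the relation is something
like this (branch names assumed, just for illustration):

  # liberty          -> exact copy of upstream stable/liberty
  # liberty-patches  -> liberty plus downstream-only cherry-picks
  git log --oneline upstream/stable/liberty..origin/liberty-patches
  # should list only the downstream patches, nothing else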

> If we aim to be the first (as in excellence) vanilla distribution of
> OpenStack, no exception for puppet modules. If upstream model is
> broken, for us, then fix it.

Can you be more specific about what you consider the upstream model,
and in what way it is going to be broken?
Anyway, I expect to be corrected by some of the OPM developers, but I
don't think vanilla puppet modules in RDO is a realistic goal.

> We've been working with Emilien (upstream Puppet Module PTL) to fix
> upstream CI w/ RDO, so it's doable.

Only 22 of the 59 modules present in the OPM package come from upstream
OpenStack. The rest are scattered around the galaxy. Fixing upstream CI
in OpenStack may not be enough.

On Thu, 2016-02-04 at 16:49 +0100, Emilien Macchi wrote:

> While we're pushing "upstream first", we'll probably have custom
> patches, for the long term maintenance process.
> I don't say "I want to have downstream-only patches", I just say "we
> need to be able to backport upstream patches to our downstream
> branches, because not all upstream projects accepts backports to
> stable branches.
> For example in Puppet modules: Upstream accepts backports until
> stable/kilo but not after. What if we have a case where we need to
> do so?

More reasons to have a plan B for when "no downstream patches" is not
an applicable policy.


Fabien, can we begin to transfer the Gitnetics jobs to SF? I'd like to
know what happens when the two meet.




