[rdo-dev] [Octavia] Providing service VM images in RDO

Javier Pena jpena at redhat.com
Wed Jan 31 17:06:06 UTC 2018



----- Original Message -----
> 
> 
> ----- Original Message -----
> > Bumping the thread; the upstream patches are merged now [0]
> > 
> > With current upstream code, I can generate an image from master packages
> > with:
> > $ wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1801-01.qcow2
> > $ virt-customize -a CentOS-7-x86_64-GenericCloud-1801-01.qcow2 --selinux-relabel \
> >     --run-command 'yum-config-manager --add-repo http://trunk.rdoproject.org/centos7/delorean-deps.repo'
> > $ virt-customize -a CentOS-7-x86_64-GenericCloud-1801-01.qcow2 --selinux-relabel \
> >     --run-command 'yum-config-manager --add-repo https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo'
> > $ DIB_LOCAL_IMAGE=/home/stack/CentOS-7-x86_64-GenericCloud-1801-01.qcow2 \
> >     /opt/stack/octavia/diskimage-create/diskimage-create.sh -p -i centos \
> >     -o amphora-x64-haproxy-centos.qcow2
> > 
> > This is with devstack, but it will be mostly the same once the RDO packages
> > are updated (only the script location will change, as it will then come
> > from the openstack-octavia-diskimage-create package)
> > 
> 
> I have run a quick test with the latest openstack-octavia-diskimage-create
> package from RDO Trunk, and it works like a charm.
> 
> > So what are the next steps here? Missing information, a place to track
> > this, an item for the next meeting, action items, … ?
> > 
> 
> Let's add it as an item for the next meeting, so we can define a plan. My
> proposal would be a daily build using a periodic job, storing the images in
> a new path under images.rdoproject.org.
> 

Hi all,

We discussed the topic at today's RDO meeting (see [1] for minutes). Consensus was reached on using a periodic job to build the images, then store them in images.rdoproject.org.

This should be a first iteration of the concept, so we can provide a ready-made image for testing. If/when this image is used in gate jobs, we will have to revisit the approach and make sure we add some tests before publishing an image.
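To make the plan concrete, here is a minimal sketch of what the periodic job could run, stitched together from the steps Bernard posted earlier in the thread. The installed path of diskimage-create.sh and the output file name are assumptions (the real path comes from the openstack-octavia-diskimage-create package), and the script defaults to a dry run that only prints the commands:

```shell
#!/bin/bash
# Sketch of a periodic amphora image build, based on the steps from this
# thread. DRY_RUN=1 (the default here) only records and prints the commands;
# set DRY_RUN=0 to actually execute them.
set -euo pipefail

BASE_IMAGE="CentOS-7-x86_64-GenericCloud-1801-01.qcow2"
BASE_URL="https://cloud.centos.org/centos/7/images/${BASE_IMAGE}"
DEPS_REPO="http://trunk.rdoproject.org/centos7/delorean-deps.repo"
TRUNK_REPO="https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo"
# Hypothetical install path; the real one is provided by the
# openstack-octavia-diskimage-create package.
DIB_SCRIPT="/usr/share/octavia-diskimage-create/diskimage-create.sh"
OUTPUT="amphora-x64-haproxy-centos.qcow2"

CMDS=()
run() {
    CMDS+=("$*")
    if [ "${DRY_RUN:-1}" = 0 ]; then "$@"; else echo "$*"; fi
}

# Fetch the base cloud image and point it at the RDO Trunk repos
run wget -q "$BASE_URL"
run virt-customize -a "$BASE_IMAGE" --selinux-relabel \
    --run-command "yum-config-manager --add-repo $DEPS_REPO"
run virt-customize -a "$BASE_IMAGE" --selinux-relabel \
    --run-command "yum-config-manager --add-repo $TRUNK_REPO"

# Build the amphora image from the customized base image
run env DIB_LOCAL_IMAGE="$PWD/$BASE_IMAGE" \
    "$DIB_SCRIPT" -p -i centos -o "$OUTPUT"
```

The upload to images.rdoproject.org and the retention policy would happen after this step, in whatever form the periodic job infrastructure provides.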

Now it is time to implement it. All help is welcome :).

Regards,
Javier

[1] - https://lists.rdoproject.org/pipermail/dev/2018-January/008521.html

> Regards,
> Javier
> 
> 
> > 
> > [0] https://review.openstack.org/#/c/522626/
> > 
> > On 12 January 2018 at 13:05, Bernard Cafarelli <bcafarel at redhat.com> wrote:
> > > On 11 January 2018 at 11:53, Javier Pena <jpena at redhat.com> wrote:
> > >> ----- Original Message -----
> > >>> On Wed, Jan 10, 2018 at 7:50 PM, Javier Pena <jpena at redhat.com> wrote:
> > >>> > If we want to deliver via RPM and build on each Octavia change, we
> > >>> > could
> > >>> > try to add it to the octavia spec and build it using DLRN. Does the
> > >>> > script
> > >>> > require many external resources besides diskimage-builder?
> > >>> > I'm not sure if that would work on CBS though, if we need to have
> > >>> > network
> > >>> > connectivity during the build process.
> > > I initially looked a bit into building the image directly in the spec;
> > > one problem was how to properly pass the needed RDO packages to
> > > diskimage-builder (as a repo, so that yum pulls them in).
> > > Apart from some configuration tweaks, most of the steps boil down to yum
> > > calls (system update - install haproxy, keepalived, … - install
> > > openstack-octavia-amphora-agent); these need network access, or at
> > > least local mirrors.
> > >>>
> > >>> I would be concerned about the storage required; also, we would need
> > >>> to trigger not only on Octavia distgit or upstream changes, but
> > >>> whenever any included RPM is updated.
> > >>> This could be simulated with dummy commits in distgit to force e.g. a
> > >>> nightly refresh, but due to the storage requirements I'd keep image
> > >>> builds outside the trunk repos.
> > >>>
> > >>
> > >> I have been doing some tests, and it looks like running
> > >> diskimage-builder
> > >> from a chroot is not the best idea (it tries to mount some tmpfs and
> > >> fails), so even if we solved the storage issue it wouldn't work.
> > >> I think our best chance is to create a periodic job to rebuild the
> > >> images
> > >> (daily) then upload them to images.rdoproject.org. This would be a
> > >> similar approach to what we are currently doing with containers.
> > > That would work for "keeping other packages up to date" too
> > >
> > >> The only drawback of this alternative is that we would be distributing
> > >> the
> > >> qcow2 images instead of an RPM package, but we could still apply
> > >> retention policies, and add some CI jobs to test them if needed.
> > > On disk usage and retention policies, the images I build locally (with
> > > CentOS) are roughly 500 MB qcow2 files
> > >
> > > --
> > > Bernard
> > 
> > 
> > 
> > --
> > Bernard Cafarelli
> > _______________________________________________
> > dev mailing list
> > dev at lists.rdoproject.org
> > http://lists.rdoproject.org/mailman/listinfo/dev
> > 
> > To unsubscribe: dev-unsubscribe at lists.rdoproject.org
> > 
> 

