If the directory structure is the same across all directories and releases, is there a reason why we couldn't simply run a cron job on the machine that would regularly delete older images?

The migration from the previous (Fedora) machine on OS1 to the new CentOS server on RDO Cloud was largely manual and there weren't any playbooks involved.
We'd like to run full automation like upstream -infrastructure does. This would allow anyone to submit a change to our playbooks; changes would be reviewed and applied automatically.

Setting up this cron job could be one of the tasks involved in setting up the image server.
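Concretely, the cron job could be a small script along these lines. This is only a sketch: the base path, the 14-day cutoff, and the layout (one directory per promoted hash) are assumptions for illustration, not the real server configuration:

```python
#!/usr/bin/env python3
"""Prune old promotion image directories; meant to be run from cron.

Sketch only: the base path, age cutoff, and per-hash directory layout
are assumptions, not the real server configuration.
"""
import os
import shutil
import time

def older_than(entries, cutoff):
    """entries: (name, mtime) pairs; return the names whose mtime
    falls strictly before the cutoff timestamp."""
    return [name for name, mtime in entries if mtime < cutoff]

def prune(base="/var/www/html/images", max_age_days=14):
    """Delete every subdirectory of `base` older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    entries = [(d, os.path.getmtime(os.path.join(base, d)))
               for d in os.listdir(base)
               if os.path.isdir(os.path.join(base, d))]
    for name in older_than(entries, cutoff):
        shutil.rmtree(os.path.join(base, name))

if __name__ == "__main__":
    # Guarded so the sketch is harmless on machines where the
    # hypothetical path does not exist.
    if os.path.isdir("/var/www/html/images"):
        prune()
```

A crontab entry such as `0 3 * * * /usr/local/bin/prune_images.py` (nightly at 03:00) would complete the picture; the schedule is just an example, not an agreed value.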



David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]

On Tue, Oct 24, 2017 at 10:55 AM, Gabriele Cerami <gcerami@redhat.com> wrote:
Hi,

we'd like to take an active part in cleaning up the images we upload
at each TripleO promotion. We are planning to do the same for the
container images on Docker Hub, so part of this process has to be
built anyway (maybe we should also do it in rdoregistry).
Since our access to the server is limited to sftp, we are thinking of
using the paramiko library in our promoter script to get the list of
uploaded hashes and their mtimes, so we can delete the oldest ones.
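For illustration, the paramiko route could look roughly like the sketch below; the host, user, retention count, and flat per-hash remote layout are all placeholders, not the real server setup:

```python
#!/usr/bin/env python3
"""Sketch of sftp-based cleanup for the promoter script.

Host, user, and retention count are placeholders; key-based auth is
assumed, and the remote layout (one entry per uploaded hash) is a guess.
"""

def beyond_newest(entries, keep=5):
    """entries: (name, mtime) pairs; return the names of everything
    except the `keep` most recently modified entries."""
    ordered = sorted(entries, key=lambda e: e[1], reverse=True)
    return [name for name, _ in ordered[keep:]]

def cleanup(host="images.rdoproject.example", user="uploader", keep=5):
    # Imported here so the selection helper above stays usable
    # even where paramiko is not installed.
    import paramiko
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user)  # key-based auth assumed
    sftp = ssh.open_sftp()
    try:
        attrs = sftp.listdir_attr(".")  # SFTPAttributes with st_mtime
        entries = [(a.filename, a.st_mtime) for a in attrs]
        for name in beyond_newest(entries, keep):
            # remove() only handles files; real per-hash directories
            # would need a recursive delete before rmdir().
            sftp.remove(name)
    finally:
        sftp.close()
        ssh.close()
```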

Is there any better solution?

Thanks
_______________________________________________
dev mailing list
dev@lists.rdoproject.org
http://lists.rdoproject.org/mailman/listinfo/dev

To unsubscribe: rdo-list-unsubscribe@redhat.com