[rdo-dev] [Infra] images server cleanup
Attila Darazs
adarazs at redhat.com
Thu Nov 23 14:27:42 UTC 2017
On 11/23/2017 03:16 PM, David Moreau Simard wrote:
> Since there hasn't been any progress on this, I'll implement a basic
> conservative cron on the image server for the time being in order to
> automatically delete older images.
>
> It won't be any different than the logic I've been using to clean things
> up manually so far.
> It won't have any logic around "keep the last 'N' promotions"; it will be
> based on a number of days -- excluding the symlinked images.
>
> Please let me know once you have something else ready to use and I'll
> disable the cron.
+1. Thanks for the stop-gap solution, David, really appreciated.
Attila
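
A minimal sketch of what such a conservative, cron-driven cleanup could look
like (everything here is illustrative rather than the actual cron David set
up: the 14-day retention, the script path and the cron schedule are
assumptions; the /var/www/html/images layout and the promotion symlink names
are taken from the du output quoted further down):

==
#!/usr/bin/env python
# Hypothetical cleanup script, run from cron, e.g.:
#   0 2 * * * root /usr/local/bin/prune-images.py
# Removes per-hash image directories older than RETENTION_DAYS, but never
# touches the promotion symlinks or the directories they point to.
import os
import shutil
import time

IMAGE_ROOT = '/var/www/html/images'  # layout as shown in the du output
RETENTION_DAYS = 14                  # assumed retention window
cutoff = time.time() - RETENTION_DAYS * 86400

for release in os.listdir(IMAGE_ROOT):
    trunk = os.path.join(IMAGE_ROOT, release, 'rdo_trunk')
    if not os.path.isdir(trunk):
        continue
    # Resolve the symlinks (current-tripleo, current-tripleo-rdo,
    # tripleo-ci-testing, ...) so their targets are never deleted.
    protected = set()
    for name in os.listdir(trunk):
        path = os.path.join(trunk, name)
        if os.path.islink(path):
            protected.add(os.path.realpath(path))
    for name in os.listdir(trunk):
        path = os.path.join(trunk, name)
        if os.path.islink(path) or not os.path.isdir(path):
            continue
        if os.path.realpath(path) in protected:
            continue
        if os.path.getmtime(path) < cutoff:
            shutil.rmtree(path)
==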
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]
>
> On Tue, Oct 31, 2017 at 8:42 AM, David Moreau Simard <dms at redhat.com> wrote:
>
> Has there been any progress on this?
>
> I'm still cleaning up older images manually because the
> server keeps filling up.
> I had planned additional storage for the server, but it might not
> have been necessary because I heard we should only keep a few
> images' worth of backlog.
>
> Pike is always the worst offender:
> ==
> # du -sh /var/www/html/images/* |sort -h
> 8.3G    ./aarch64
> 9.9G    ./ooo-snap
> 66G     ./master
> 74G     ./ocata
> 93G     ./newton
> 193G    ./pike
> ==
> # du -sh /var/www/html/images/pike/rdo_trunk/* |sort -h
> 0       /var/www/html/images/pike/rdo_trunk/current-tripleo
> 0       /var/www/html/images/pike/rdo_trunk/current-tripleo-rdo
> 0       /var/www/html/images/pike/rdo_trunk/tripleo-ci-testing
> 1.6G    /var/www/html/images/pike/rdo_trunk/tripleo-upstream
> 4.3G    /var/www/html/images/pike/rdo_trunk/old-tripleo
> 8.5G    /var/www/html/images/pike/rdo_trunk/0712ed3b8c6193ca4978becf70da62c6c31edabc_90cbfd4f
> 8.5G    /var/www/html/images/pike/rdo_trunk/0de7665e14f222802fbed40fa7df93b4a4082b2d_90cbfd4f
> 8.5G    /var/www/html/images/pike/rdo_trunk/480e79b7a3d2f0f6e6e22b92c6289426352d492c_c2957bbf
> 8.5G    /var/www/html/images/pike/rdo_trunk/60d6e87cac10ff1f95a028c6176e768214ec8b77_9e72cb29
> 8.5G    /var/www/html/images/pike/rdo_trunk/6beba54a71510525d5bbc4956d20d27bffa982e5_75873c3c
> 8.5G    /var/www/html/images/pike/rdo_trunk/6d54c627703522921f41b5a83548380f1961034b_5b18c6af
> 8.5G    /var/www/html/images/pike/rdo_trunk/75baddd6522aa86bd9028258937709c828fa1404_9e324686
> 8.5G    /var/www/html/images/pike/rdo_trunk/8ef8f2bc46ac58385c6f92fbc9812ab6804a7ed2_4b120b84
> 8.5G    /var/www/html/images/pike/rdo_trunk/a2369c6e219fe50fb0100806479009055ada73dc_566fe0ed
> 8.5G    /var/www/html/images/pike/rdo_trunk/cf3665fe1c60d43aa39f1880e427875c9c571058_5b18c6af
> 8.5G    /var/www/html/images/pike/rdo_trunk/d335965eb848cfde5cc06136b5b2fcc6b436a419_7941156c
> 8.5G    /var/www/html/images/pike/rdo_trunk/ec8fc5d5154cab4b5167b917b11056d4bff4ef06_37239c88
> 8.5G    /var/www/html/images/pike/rdo_trunk/f9a2508318c8e6b2c6083f1fd8f7199aba6fe1a4_0a2693a1
> 8.5G    /var/www/html/images/pike/rdo_trunk/fd979d95d613e40be228695c0471c73cf9a5e3f4_9e324686
> 8.6G    /var/www/html/images/pike/rdo_trunk/3c59aa392805d862096ed8ed6d9dbe4ee72f0630_e400a1b4
> 8.6G    /var/www/html/images/pike/rdo_trunk/6bef899ed13e0dcc5ba6a99bc1859fb77682bb4c_566fe0ed
> 8.6G    /var/www/html/images/pike/rdo_trunk/7153e0cbc5b0e6433313a3bc6051b2c0775d3804_7df0efc2
> 8.6G    /var/www/html/images/pike/rdo_trunk/fe2afcb87218af6a3523be5a885d260ec54d24a5_31d058df
> 9.4G    /var/www/html/images/pike/rdo_trunk/9adfb9df52a0c22da87d528da14a32d2c2b516b7_0a2693a1
> ==
>
> On another note, the undercloud.qcow2 image went from 2.8GB in
> Newton to 6.9GB in Pike... is that legitimate? There's no bloat in
> there?
>
>
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]
>
> On Tue, Oct 24, 2017 at 12:25 PM, David Moreau Simard <dms at redhat.com> wrote:
>
> If the directory structure is the same across all directories
> and releases, is there a reason why we couldn't simply run a
> cron on the machine that would regularly delete older images?
>
> The migration from the previous (fedora) machine on OS1 to the
> new CentOS server on RDO Cloud was largely manual and there
> weren't any playbooks involved.
> We'd like to run full automation like upstream -infrastructure
> does. This would allow anyone to submit a change to our
> playbooks, which would be reviewed and applied automatically.
>
> Setting up this cron could be one part of the tasks involved in
> setting up the image server.
>
>
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]
>
> On Tue, Oct 24, 2017 at 10:55 AM, Gabriele Cerami <gcerami at redhat.com> wrote:
>
> Hi,
>
> we'd like to actively participate in cleaning up after ourselves the
> images we upload at each tripleo promotion. We are planning to do the
> same for the container images in dockerhub, so part of this process
> has to be done anyway. (Maybe we should also do it in rdoregistry.)
> Since our access to the server is limited to sftp, we are thinking
> about using the paramiko library in our promoter script to get the
> list of uploaded hashes and their mtimes, so we can delete the
> oldest ones.
>
> Is there any better solution?
>
> Thanks
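
For reference, a rough sketch of the paramiko-based approach Gabriele
describes above; the hostname, user and "keep the newest N hashes" threshold
are made-up placeholders, and key-based sftp access plus the per-hash
directory layout shown earlier are assumed:

==
# prune_remote_images.py -- hypothetical addition to the promoter script
import posixpath
import stat
import paramiko

HOST, USER = 'images.rdoproject.org', 'uploader'  # placeholder values
REMOTE_DIR = '/var/www/html/images/pike/rdo_trunk'
KEEP_NEWEST = 10                                   # hashes to keep

def rmtree(sftp, path):
    # Recursively delete a remote directory over SFTP (SFTP has no rm -rf).
    for entry in sftp.listdir_attr(path):
        child = posixpath.join(path, entry.filename)
        if stat.S_ISDIR(entry.st_mode):
            rmtree(sftp, child)
        else:
            sftp.remove(child)
    sftp.rmdir(path)

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER)
sftp = client.open_sftp()

# Hash directories only: the promotion symlinks show up as links, not dirs.
entries = [e for e in sftp.listdir_attr(REMOTE_DIR)
           if stat.S_ISDIR(e.st_mode)]
entries.sort(key=lambda e: e.st_mtime)  # oldest first
for entry in entries[:-KEEP_NEWEST]:
    rmtree(sftp, posixpath.join(REMOTE_DIR, entry.filename))

sftp.close()
client.close()
==

Note this keeps the newest N hash directories rather than applying an age
cutoff, matching "delete the oldest ones" above; in practice the hashes
currently pointed to by the promotion symlinks would also need to be
protected explicitly.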