[Rdo-list] cinder speed (slow nova?)

Kaul, Yaniv Yaniv.Kaul at emc.com
Tue Dec 23 07:01:07 UTC 2014


> -----Original Message-----
> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On
> Behalf Of Cristian Falcas
> Sent: Tuesday, December 23, 2014 1:30 AM
> To: Dmitry Makovey
> Cc: rdo-list
> Subject: Re: [Rdo-list] cinder speed (slow nova?)
> 
> Nova's default disk cache mode is a very conservative value (I think
> directsync or writethrough). Try changing it to writeback:
> 
> disk_cachemodes="file=writeback"

Better safe than sorry. You risk losing data unless you have battery-backed storage.
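If you decide to trade safety for speed anyway, the knob is the
disk_cachemodes option in nova.conf on the compute node. A minimal sketch,
assuming the Icehouse option layout where the libvirt options live in the
[libvirt] group:

  [libvirt]
  # a list of <source-type>=<cache-mode> pairs; note the libvirt XML below
  # shows the cinder volume attached with disk type="block", so the
  # "block=..." entry is the one that would apply to it, not "file=..."
  disk_cachemodes = file=writeback,block=writeback

Then restart openstack-nova-compute; as far as I remember it only takes
effect for newly started instances.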
Y.

> 
> On Tue, Dec 23, 2014 at 1:13 AM, Dmitry Makovey <dmitry at athabascau.ca>
> wrote:
> > note that all of the below applies with CirrOS used as the guest...
> >
> > On 12/22/2014 04:10 PM, Dmitry Makovey wrote:
> >> Hi everybody,
> >>
> >> using the RDO Icehouse packages I've set up an infrastructure atop
> >> RHEL 6.6, and I'm seeing very unpleasant storage performance.
> >>
> >> I've done some testing, and here's what I get from the same storage
> >> but different access points:
> >>
> >> cinder-volume # dd if=/dev/zero of=baloon bs=1048576 count=200
> >> 200+0 records in
> >> 200+0 records out
> >> 209715200 bytes (210 MB) copied, 0.162997 s, 1.3 GB/s
> >>
> >> nova-compute # dd if=/dev/zero of=baloon bs=1048576 count=200
> >> 200+0 records in
> >> 200+0 records out
> >> 209715200 bytes (210 MB) copied, 0.167905 s, 1.2 GB/s
> >>
> >> instance # dd if=/dev/zero of=baloon bs=1048576 count=200
> >> 200+0 records in
> >> 200+0 records out
> >> 209715200 bytes (210 MB) copied, 10.064 s, 20.8 MB/s
> >>
> >> A bit of explanation: in the above scenario I created an LV on the
> >> cinder node, mounted it locally, and ran the command there (the
> >> "cinder-volume" line). I then created an iSCSI target from it, mounted
> >> it on nova-compute, and ran the command there. Finally, I created a
> >> volume via cinder, booted an OS off it, and ran the test from within
> >> the instance... The results are just miserable: going from 1.2 GB/s
> >> down to 20.8 MB/s is a huge degradation. What should I look for? I
> >> have also run the same command inside one of our plain RHEL KVM guests
> >> and got great performance.
> >>
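> >> Roughly, the sequence was along these lines (from memory; the VG/LV
> >> names and the target IQN are placeholders, not what I actually used):
> >>
> >> # on the cinder node: carve out a test LV, mount it, run dd there
> >> lvcreate -L 10G -n testvol cinder-volumes
> >> mkfs.ext4 /dev/cinder-volumes/testvol
> >> mount /dev/cinder-volumes/testvol /mnt/test
> >>
> >> # export it over iSCSI; on nova-compute, discover and log in, then mount
> >> iscsiadm -m discovery -t sendtargets -p 192.168.46.18:3260
> >> iscsiadm -m node -T <target-iqn> -p 192.168.46.18:3260 --login
> >>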
> >> I have checked under /var/lib/nova/instances/* and libvirt.xml seems
> >> to indicate that virtio is being employed:
> >>
> >>     <disk type="block" device="disk">
> >>       <driver name="qemu" type="raw" cache="none"/>
> >>       <source
> >> dev="/dev/disk/by-path/ip-192.168.46.18:3260-iscsi-iqn.2010-
> 10.org.openstack:volume-955b25eb-bb48-43c3-a14d-222c9e8c7019-lun-1"/>
> >>       <target bus="virtio" dev="vda"/>
> >>       <serial>955b25eb-bb48-43c3-a14d-222c9e8c7019</serial>
> >>     </disk>
> >>
> >> The guest used is rhel-guest-image-6.6-20140926.0.x86_64.qcow2,
> >> downloaded from the RH site.
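> >>
> >> (One caveat on my own numbers: dd from /dev/zero without direct I/O
> >> largely measures the page cache, so the GB/s figures on the host side
> >> are inflated. A fairer baseline would be something like
> >>
> >> dd if=/dev/zero of=baloon bs=1048576 count=200 oflag=direct
> >>
> >> though even with that caveat, ~20 MB/s inside the guest looks wrong.)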
> >
> >
> >
> > --
> > Dmitry Makovey
> > Web Systems Administrator
> > Athabasca University
> > (780) 675-6245
> > ---
> > Confidence is what you have before you understand the problem
> >     Woody Allen
> >
> > When in trouble when in doubt run in circles scream and shout
> >      http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330
> >
> >
> 
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
