<div dir="ltr">Also how are you uploading the images?</div><br><div class="gmail_quote"><div dir="ltr">On Mon, Nov 26, 2018 at 10:54 AM Donny Davis <<a href="mailto:donny@fortnebula.com">donny@fortnebula.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">What kind of images are you using? </div><br><div class="gmail_quote"><div dir="ltr">On Mon, Nov 26, 2018 at 9:14 AM John Fulton <<a href="mailto:johfulto@redhat.com" target="_blank">johfulto@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Sun, Nov 25, 2018 at 11:29 PM Cody <<a href="mailto:codeology.lab@gmail.com" target="_blank">codeology.lab@gmail.com</a>> wrote:<br>
><br>
> Hello,<br>
><br>
> My TripleO cluster is deployed with Ceph. Both Cinder and Nova use<br>
> RBD as the backend. While all essential functions work, services<br>
> involving Ceph are getting very poor performance. E.g., it takes<br>
> several hours to upload an 8 GB image into Cinder and about 20<br>
> minutes to completely boot up an instance (from launch to SSH ready).<br>
><br>
> Running 'ceph -s' shows a top write speed of 600~700 KiB/s during<br>
> image upload and a read speed of 2 MiB/s during instance launch.<br>
><br>
> I used the default scheme for network isolation and a single 1G port<br>
> for all VLAN traffic on each overcloud node. I haven't enabled jumbo<br>
> frames on the storage network VLAN yet, but I think the performance<br>
> should not be this bad with MTU 1500. Something must be wrong. Any<br>
> suggestions for debugging?<br>
<br>
Hi Cody,<br>
<br>
If you're using Queens or Rocky, then Ceph Luminous was deployed in<br>
containers. Though TripleO did the overall deployment, ceph-ansible<br>
would have done the actual Ceph deployment and configuration; you can<br>
determine the ceph-ansible version via 'rpm -q ceph-ansible' on your<br>
undercloud. It probably makes sense to pass along what you mentioned<br>
above, plus some other info I'll note below, to the ceph-users list<br>
(<a href="http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com" rel="noreferrer" target="_blank">http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com</a>), which is<br>
focused on Ceph itself. When you contact them (I'm on the list too),<br>
also let them know the following (the example commands after this<br>
list should help collect most of it):<br>
<br>
1. How many OSD servers you have and how many OSDs per server<br>
2. What type of disks you're using per OSD and how you set up journaling<br>
3. Specs of the servers themselves (CPU and RAM of the OpenStack<br>
controller servers, which host the Ceph monitors, and of the Ceph<br>
Storage servers)<br>
4. Did you override the RAM/CPU for the Mon, Mgr, and OSD containers?<br>
If so, what did you override them to?<br>
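<br>
For example, something along these lines should capture most of it.<br>
Adjust to your environment; the container name assumes the default<br>
ceph-ansible convention of ceph-mon- followed by the short hostname,<br>
so double-check it with 'sudo docker ps':<br>
<br>
# On the undercloud: the ceph-ansible version used for the deployment<br>
rpm -q ceph-ansible<br>
<br>
# On a controller (Ceph monitor) node<br>
sudo docker ps | grep ceph<br>
sudo docker exec ceph-mon-$(hostname -s) ceph -s<br>
sudo docker exec ceph-mon-$(hostname -s) ceph osd tree<br>
<br>
# On each OSD node: disks, CPU, and memory for the hardware summary<br>
lsblk<br>
lscpu<br>
free -h<br>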
<br>
TripleO can pass any parameter you would normally pass to ceph-ansible<br>
as described in the following:<br>
<br>
<a href="https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/ceph_config.html#customizing-ceph-conf-with-ceph-ansible" rel="noreferrer" target="_blank">https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/ceph_config.html#customizing-ceph-conf-with-ceph-ansible</a><br>
<br>
So if you describe your setup to them as a containerized ceph-ansible<br>
Luminous deployment, share your ceph.conf, and they come back with<br>
suggestions, then you can apply those suggestions to ceph-ansible<br>
through TripleO as described above. It would also help if you start<br>
troubleshooting the cluster as per this troubleshooting guide [2] and<br>
share the results.<br>
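<br>
Not a substitute for the guide, but a few basic checks that are easy<br>
to capture and paste into your email (run them through the mon<br>
container if your deployment is containerized, and substitute a pool<br>
that actually exists in your cluster, e.g. vms, volumes, or images,<br>
for 'volumes' below):<br>
<br>
sudo docker exec ceph-mon-$(hostname -s) ceph health detail<br>
sudo docker exec ceph-mon-$(hostname -s) ceph osd df<br>
sudo docker exec ceph-mon-$(hostname -s) ceph osd pool ls detail<br>
<br>
# Raw RADOS write benchmark, to separate Ceph performance from the<br>
# Glance/Cinder/Nova code paths; clean up the test objects afterwards<br>
sudo docker exec ceph-mon-$(hostname -s) rados bench -p volumes 30 write --no-cleanup<br>
sudo docker exec ceph-mon-$(hostname -s) rados -p volumes cleanup<br>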
<br>
I've gotten better performance than you describe on a completely<br>
virtualized deployment on my PC [1], using quickstart with the<br>
defaults that TripleO passes on Queens and Rocky (TripleO tends to<br>
favor the defaults that ceph-ansible itself uses). However, with a<br>
single 1G port for all network traffic I wouldn't expect great<br>
performance.<br>
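<br>
If you want to rule the network in or out before anything else, a<br>
quick throughput test between two overcloud nodes over their<br>
storage-network IPs can help; iperf3 may need to be installed first,<br>
and the address below is just a placeholder for one of your<br>
storage-network IPs:<br>
<br>
# On one node, start the server side<br>
iperf3 -s<br>
<br>
# On another node, point the client at the first node's storage IP<br>
iperf3 -c 172.16.1.10 -t 30<br>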
<br>
Feel free to CC me when you email ceph-users, and please share on<br>
rdo-users a link to the thread you start there in case anyone else on<br>
this list is interested.<br>
<br>
John<br>
<br>
[1] <a href="http://blog.johnlikesopenstack.com/2018/08/pc-for-tripleo-quickstart.html" rel="noreferrer" target="_blank">http://blog.johnlikesopenstack.com/2018/08/pc-for-tripleo-quickstart.html</a><br>
[2] <a href="https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/pdf/troubleshooting_guide/Red_Hat_Ceph_Storage-3-Troubleshooting_Guide-en-US.pdf" rel="noreferrer" target="_blank">https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/pdf/troubleshooting_guide/Red_Hat_Ceph_Storage-3-Troubleshooting_Guide-en-US.pdf</a><br>
<br>
> Thank you very much.<br>
><br>
> Best regards,<br>
> Cody<br>
</blockquote></div>
</blockquote></div>