<div dir="ltr">Hi Steve,<div><br></div><div>thank you for your reply. Concerning your first question, I really don't know if it supports it; I have a Dell PowerEdge R430. Running the command I see this:</div><div><br></div><div><div>[root@compute03 ~]# numactl --hardware</div><div>available: 2 nodes (0-1)</div><div>node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22</div><div>node 0 size: 32543 MB</div><div>node 0 free: 193 MB</div><div>node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23</div><div>node 1 size: 32768 MB</div><div>node 1 free: 238 MB</div><div>node distances:</div><div>node 0 1 </div><div> 0: 10 21 </div><div> 1: 21 10 </div></div><div><br></div><div><br></div><div>Concerning the second question, those 2 nodes shouldn't be used there, as they are configured for the "normal" flavor in nova and don't have vcpu_pin_set configured. I would expect the other node to appear there, but as I said, after I launch a VM with 6 vCPUs it doesn't let me launch any more VMs, so it could be something related to my topology/configuration.</div><div><br></div><div>Thanks,</div><div>Pedro Sousa</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Nov 6, 2015 at 8:52 PM, Steve Gordon <span dir="ltr"><<a href="mailto:sgordon@redhat.com" target="_blank">sgordon@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">----- Original Message -----<br>
> From: "Pedro Sousa" <<a href="mailto:pgsousa@gmail.com">pgsousa@gmail.com</a>><br>
> To: "rdo-list" <<a href="mailto:rdo-list@redhat.com">rdo-list@redhat.com</a>><br>
><br>
> Hi all,<br>
><br>
> I have a rdo kilo deployment, using sr-iov ports to my instances. I'm<br>
> trying to configure NUMA topology and CPU pinning for some telco based<br>
> workloads based on this doc:<br>
> <a href="http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/" rel="noreferrer" target="_blank">http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/</a><br>
><br>
> I have 3 compute nodes, I'm trying to use one of them to use cpu pinning.<br>
><br>
> I've configured it like this:<br>
><br>
</span>> *Compute Node (total 24 cpus)*<br>
> */etc/nova/nova.conf*<br>
<span class="">> vcpu_pin_set=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23<br>
><br>
> Changed grub to isolate my cpus:<br>
> #grubby --update-kernel=ALL<br>
> --args="isolcpus=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23"<br>
><br>
> #grub2-install /dev/sda<br>
><br>
</span>> *Controller Nodes:* */etc/nova/nova.conf*<br>
<span class="">> scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter<br>
> scheduler_available_filters = nova.scheduler.filters.all_filters<br>
> scheduler_available_filters =<br>
</span>> nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter<br>
><br>
> *Created host aggregate performance*<br>
> #nova aggregate-create performance<br>
<span class="">> #nova aggregate-set-metadata 1 pinned=true<br>
><br>
> #nova aggregate-add-host 1 compute03<br>
><br>
</span>> *Created host aggregate normal*<br>
<span class="">> #nova aggregate-create normal<br>
> #nova aggregate-set-metadata 2 pinned=false<br>
><br>
> #nova aggregate-add-host 2 compute01<br>
><br>
> #nova aggregate-add-host 2 compute02<br>
><br>
</span>> *Created the flavor with cpu pinning*<br>
> #nova flavor-create m1.performance 6 2048 20 4<br>
<span class="">> #nova flavor-key 6 set hw:cpu_policy=dedicated<br>
> #nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true<br>
</span>><br>
> *The issue is:* With SR-IOV<br>
<span class="">> ports it only lets me create instances with 6 vcpus in total with the conf<br>
> described above. Without SR-IOV, using OVS, I don't have that limitation.<br>
> Is this a bug or something? I've seen this:<br>
> <a href="https://bugs.launchpad.net/nova/+bug/1441169" rel="noreferrer" target="_blank">https://bugs.launchpad.net/nova/+bug/1441169</a>, however I have the patch, and<br>
> as I said it works for the first 6 vcpus with my configuration.<br>
<br>
</span>Adding Nikola and Brent. Do you happen to know if your motherboard chipset supports NUMA locality of the PCIe devices and if so which NUMA nodes the SR-IOV cards are associated with? I *believe* numactl --hardware will tell you if this is the case (I don't presently have a machine in front of me with support for this). I'm wondering if or how the device locality code copes at the moment if the instance spans two nodes (obviously the device is only local to one of them).<br>
<br>
> *Some relevant logs:*<br>
><br>
> */var/log/nova/nova-scheduler.log*<br>
<div><div class="h5">><br>
> 2015-11-06 11:18:17.955 59494 DEBUG nova.filters<br>
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d<br>
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -<br>
> -] Starting with 3 host(s) get_filtered_objects<br>
> /usr/lib/python2.7/site-packages/nova/filters.py:70<br>
><br>
> 2015-11-06 11:18:17.955 59494 DEBUG nova.filters<br>
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d<br>
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -<br>
> -] Filter RetryFilter returned 3 host(s) get_filtered_objects<br>
> /usr/lib/python2.7/site-packages/nova/filters.py:84<br>
> 2015-11-06 11:18:17.955 59494 DEBUG nova.filters<br>
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d<br>
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -<br>
> -] Filter AvailabilityZoneFilter returned 3 host(s)<br>
> get_filtered_objects<br>
> /usr/lib/python2.7/site-packages/nova/filters.py:84<br>
> 2015-11-06 11:18:17.955 59494 DEBUG nova.filters<br>
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d<br>
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -<br>
> -] Filter RamFilter returned 3 host(s) get_filtered_objects<br>
> /usr/lib/python2.7/site-packages/nova/filters.py:84<br>
> 2015-11-06 11:18:17.956 59494 DEBUG nova.filters<br>
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d<br>
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -<br>
> -] Filter ComputeFilter returned 3 host(s) get_filtered_objects<br>
> /usr/lib/python2.7/site-packages/nova/filters.py:84<br>
> 2015-11-06 11:18:17.956 59494 DEBUG nova.filters<br>
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d<br>
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -<br>
> -] Filter ComputeCapabilitiesFilter returned 3 host(s)<br>
> get_filtered_objects<br>
> /usr/lib/python2.7/site-packages/nova/filters.py:84<br>
> 2015-11-06 11:18:17.956 59494 DEBUG nova.filters<br>
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d<br>
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -<br>
> -] Filter ImagePropertiesFilter returned 3 host(s)<br>
> get_filtered_objects<br>
> /usr/lib/python2.7/site-packages/nova/filters.py:84<br>
> 2015-11-06 11:18:17.956 59494 DEBUG nova.filters<br>
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d<br>
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -<br>
> -] Filter ServerGroupAntiAffinityFilter returned 3 host(s)<br>
> get_filtered_objects<br>
> /usr/lib/python2.7/site-packages/nova/filters.py:84<br>
> 2015-11-06 11:18:17.956 59494 DEBUG nova.filters<br>
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d<br>
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -<br>
> -] Filter ServerGroupAffinityFilter returned 3 host(s)<br>
> get_filtered_objects<br>
> /usr/lib/python2.7/site-packages/nova/filters.py:84<br>
> 2015-11-06 11:18:17.957 59494 DEBUG nova.filters<br>
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d<br>
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -<br>
> -] Filter PciPassthroughFilter returned 3 host(s) get_filtered_objects<br>
</div></div>> /usr/lib/python2.7/site-packages/nova/filters.py:84<br>
<span class="">> *2015-11-06 11:18:17.959 59494 DEBUG nova.filters<br>
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d<br>
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -<br>
> -] Filter NUMATopologyFilter returned 2 host(s) get_filtered_objects<br>
</span>> /usr/lib/python2.7/site-packages/nova/filters.py:84*<br>
<span class="">><br>
> Any help would be appreciated.<br>
<br>
</span>This looks like a successful run (still 2 hosts returned after NUMATopologyFilter)? Or were you expecting the host filtered out by PciPassthroughFilter to still be in scope?<br>
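For what it's worth, a quick way to pull the per-filter host counts for a single request out of the scheduler log (in the actual log file each entry is one line; the mail client wrapped them above). The log path and request id are parameterized; nothing here beyond the grep pattern is specific to this deployment:

```shell
# Sketch: summarize how many hosts each scheduler filter returned for one
# request id, by grepping nova-scheduler.log.
LOG="${LOG:-/var/log/nova/nova-scheduler.log}"

filter_counts() {
    # $1 = request id, e.g. req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
    grep "$1" "$LOG" | grep -o 'Filter [A-Za-z]* returned [0-9]* host'
}
```

Running it against the request id from the excerpt should show the drop from 3 hosts to 2 at NUMATopologyFilter at a glance.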
<br>
Thanks,<br>
<span class="HOEnZb"><font color="#888888"><br>
--<br>
Steve Gordon,<br>
Sr. Technical Product Manager,<br>
Red Hat Enterprise Linux OpenStack Platform<br>
</font></span></blockquote></div><br></div>