Hi Steve,

thank you for your reply. Concerning your first question, I really don't
know if it supports it; I have a Dell PowerEdge R430. Running the command I
see this:
[root@compute03 ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22
node 0 size: 32543 MB
node 0 free: 193 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23
node 1 size: 32768 MB
node 1 free: 238 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
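If it helps, I can also check which node the SR-IOV NIC hangs off via
sysfs (assuming 0000:04:00.0 below is the NIC's PCI address; a value of -1
would mean the platform doesn't report locality):

[root@compute03 ~]# lspci -nn | grep -i ethernet
[root@compute03 ~]# cat /sys/bus/pci/devices/0000:04:00.0/numa_node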
Concerning the second question, those 2 nodes shouldn't be used, as they
are configured for the "normal" flavor in nova and don't have vcpu_pin_set
configured. I would expect the other node to appear there, but as I said,
after I launch a VM with 6 vCPUs it doesn't let me launch any more VMs, so
it could be something related to my topology/configuration.
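To rule out the pinning side, this is how I plan to check where the pinned
guest's vCPUs actually land on compute03 (the domain name below is just an
example):

[root@compute03 ~]# virsh list
[root@compute03 ~]# virsh vcpupin instance-00000001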
Thanks,
Pedro Sousa
On Fri, Nov 6, 2015 at 8:52 PM, Steve Gordon <sgordon(a)redhat.com> wrote:
----- Original Message -----
> From: "Pedro Sousa" <pgsousa(a)gmail.com>
> To: "rdo-list" <rdo-list(a)redhat.com>
>
> Hi all,
>
> I have a rdo kilo deployment, using sr-iov ports to my instances. I'm
> trying to configure NUMA topology and CPU pinning for some telco based
> workloads based on this doc:
>
> http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topolog...
>
> I have 3 compute nodes, I'm trying to use one of them to use cpu pinning.
>
> I've configured it like this:
>
> *Compute Node (total 24 cpus)*
> */etc/nova/nova.conf*
> vcpu_pin_set=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23
>
> Changed grub to isolate my cpus:
> #grubby --update-kernel=ALL
> --args="isolcpus=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23"
>
> #grub2-install /dev/sda
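>
> (After rebooting, the isolation can be sanity-checked; /proc/cmdline
> should show the isolcpus argument, and PID 1 should be confined to the
> remaining cores:)
>
> #cat /proc/cmdline
> #taskset -pc 1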
>
> *Controller Nodes:*
> */etc/nova/nova.conf*
> scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter
> scheduler_available_filters = nova.scheduler.filters.all_filters
> scheduler_available_filters =
> nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
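>
> (I'm assuming the scheduler needs a restart for the filter changes to be
> picked up:)
>
> #systemctl restart openstack-nova-scheduler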
>
> *Created host aggregate performance*
> #nova aggregate-create performance
> #nova aggregate-set-metadata 1 pinned=true
>
> #nova aggregate-add-host 1 compute03
>
> *Created host aggregate normal*
> #nova aggregate-create normal
> #nova aggregate-set-metadata 2 pinned=false
>
> #nova aggregate-add-host 2 compute01
>
> #nova aggregate-add-host 2 compute02
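>
> (Aggregate membership and metadata can be double-checked with:)
>
> #nova aggregate-details 1
> #nova aggregate-details 2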
>
> *Created the flavor with cpu pinning*
> #nova flavor-create m1.performance 6 2048 20 4
> #nova flavor-key 6 set hw:cpu_policy=dedicated
> #nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true
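>
> (The extra specs can be verified with the command below; "6" is the
> flavor ID used at create time:)
>
> #nova flavor-show 6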
>
> *The issue is:*
> With SR-IOV ports it only lets me create instances with 6 vcpus in total
> with the conf described above. Without SR-IOV, using OVS, I don't have
> that limitation. Is this a bug or something? I've seen this:
> https://bugs.launchpad.net/nova/+bug/1441169, however I have the patch,
> and as I said it works for the first 6 vcpus with my configuration.
Adding Nikola and Brent. Do you happen to know if your motherboard chipset
supports NUMA locality of the PCIe devices and if so which NUMA nodes the
SR-IOV cards are associated with? I *believe* numactl --hardware will tell
you if this is the case (I don't presently have a machine in front of me
with support for this). I'm wondering if or how the device locality code
copes at the moment if the instance spans two nodes (obviously the device
is only local to one of them).
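For what it's worth, I *think* libvirt also exposes per-device locality,
which should be what nova consumes on the compute side, along the lines of
(the device name below is just an example; look for a <numa node='...'/>
element in the output):

# virsh nodedev-list --cap pci
# virsh nodedev-dumpxml pci_0000_04_00_0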
> *Some relevant logs:*
>
> */var/log/nova/nova-scheduler.log*
>
> 2015-11-06 11:18:17.955 59494 DEBUG nova.filters
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
> -] Starting with 3 host(s) get_filtered_objects
> /usr/lib/python2.7/site-packages/nova/filters.py:70
>
> 2015-11-06 11:18:17.955 59494 DEBUG nova.filters
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
> -] Filter RetryFilter returned 3 host(s) get_filtered_objects
> /usr/lib/python2.7/site-packages/nova/filters.py:84
> 2015-11-06 11:18:17.955 59494 DEBUG nova.filters
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
> -] Filter AvailabilityZoneFilter returned 3 host(s)
> get_filtered_objects
> /usr/lib/python2.7/site-packages/nova/filters.py:84
> 2015-11-06 11:18:17.955 59494 DEBUG nova.filters
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
> -] Filter RamFilter returned 3 host(s) get_filtered_objects
> /usr/lib/python2.7/site-packages/nova/filters.py:84
> 2015-11-06 11:18:17.956 59494 DEBUG nova.filters
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
> -] Filter ComputeFilter returned 3 host(s) get_filtered_objects
> /usr/lib/python2.7/site-packages/nova/filters.py:84
> 2015-11-06 11:18:17.956 59494 DEBUG nova.filters
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
> -] Filter ComputeCapabilitiesFilter returned 3 host(s)
> get_filtered_objects
> /usr/lib/python2.7/site-packages/nova/filters.py:84
> 2015-11-06 11:18:17.956 59494 DEBUG nova.filters
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
> -] Filter ImagePropertiesFilter returned 3 host(s)
> get_filtered_objects
> /usr/lib/python2.7/site-packages/nova/filters.py:84
> 2015-11-06 11:18:17.956 59494 DEBUG nova.filters
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
> -] Filter ServerGroupAntiAffinityFilter returned 3 host(s)
> get_filtered_objects
> /usr/lib/python2.7/site-packages/nova/filters.py:84
> 2015-11-06 11:18:17.956 59494 DEBUG nova.filters
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
> -] Filter ServerGroupAffinityFilter returned 3 host(s)
> get_filtered_objects
> /usr/lib/python2.7/site-packages/nova/filters.py:84
> 2015-11-06 11:18:17.957 59494 DEBUG nova.filters
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
> -] Filter PciPassthroughFilter returned 3 host(s) get_filtered_objects
> /usr/lib/python2.7/site-packages/nova/filters.py:84
>
> *2015-11-06 11:18:17.959 59494 DEBUG nova.filters
> [req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
> 9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
> -] Filter NUMATopologyFilter returned 2 host(s) get_filtered_objects
> /usr/lib/python2.7/site-packages/nova/filters.py:84*
>
> Any help would be appreciated.
This looks like a successful run (still 2 hosts returned after
NUMATopologyFilter)? Or were you expecting the host filtered out by the
NUMATopologyFilter to still be in scope?
Thanks,
--
Steve Gordon,
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform