[Rdo-list] SR-IOV on openstack: no valid host is found
Itzik Brown
itzikb at redhat.com
Mon Aug 31 06:32:21 UTC 2015
Hi,
There is a patch under review to solve the connectivity issue:
https://review.openstack.org/#/c/198736/
Itzik
On 08/28/2015 06:27 PM, Pedro Sousa wrote:
> Hi all,
>
> I've managed to get this working, except that SR-IOV VMs cannot reach
> OVS VMs and vice versa. I've disabled the firewall in ml2_conf.ini and
> in the openvswitch and sriov agent files with this conf:
>
> [securitygroup]
> enable_security_group = False
> firewall_driver = neutron.agent.firewall.NoopFirewallDriver
>
> It still doesn't work. Any idea why an OVS-based NIC cannot ping the SR-IOV NICs?
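>
> (Note: the agents only pick up a firewall_driver change after a
> restart. Assuming the standard RDO service names:)
>
> # systemctl restart neutron-openvswitch-agent
> # systemctl restart neutron-sriov-nic-agent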
>
> Thanks,
> Pedro Sousa
>
>
>
> On Wed, Aug 26, 2015 at 3:01 PM, Pedro Sousa <pgsousa at gmail.com> wrote:
>
> Hi all,
>
> thank you for your replies. Concerning the DHCP issue, as I said it
> works sometimes; I also see this with tcpdump:
>
> # tcpdump -i any port 67 or port 68 -e -n
> 14:51:15.160160 B fa:16:3e:e6:a5:fd ethertype 802.1Q (0x8100),
> length 348: vlan 3486, p 0, ethertype IPv4, 0.0.0.0.bootpc >
> 255.255.255.255.bootps: BOOTP/DHCP, Request from
> fa:16:3e:e6:a5:fd, length 300
> 14:51:15.160339 B fa:16:3e:e6:a5:fd ethertype 802.1Q (0x8100),
> length 348: vlan 3486, p 0, ethertype IPv4, 0.0.0.0.bootpc >
> 255.255.255.255.bootps: BOOTP/DHCP, Request from
> fa:16:3e:e6:a5:fd, length 300
> 14:51:15.160341 B fa:16:3e:e6:a5:fd ethertype 802.1Q (0x8100),
> length 348: vlan 1, p 0, ethertype IPv4, 0.0.0.0.bootpc >
> 255.255.255.255.bootps: BOOTP/DHCP, Request from
> fa:16:3e:e6:a5:fd, length 300
> 14:51:15.160610 Out fa:16:3e:48:d3:db ethertype 802.1Q (0x8100),
> length 385: vlan 3486, p 0, ethertype IPv4, 10.0.30.3.bootps >
> 10.0.30.33.bootpc: BOOTP/DHCP, Reply, length 337
>
> 1. Joe, how can I check that the VF has the right VLAN tag?
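>
> (One way to check, assuming the PF is p2p1: list the VFs on the PF and
> look at the vlan field of the VF bound to the instance, e.g. a line
> like "vf 0 MAC fa:16:3e:..., vlan 3486".)
>
> # ip link show p2p1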
>
> 2. My understanding is that the VLANs are correctly configured, as it
> works fine with Open vSwitch.
>
> Alon, I have the VLANs configured like this:
>
> Eth104/1/2 SERVER2_NIC5 connected trunk full 10G --
> Eth104/1/6 SERVER1_NIC5 connected trunk full 10G --
>
> vlan 3482 (untagged)
> vlan 2402 (tagged)
> vlan 3480 (tagged)
> vlan 3481 (tagged)
> vlan 3485 (tagged)
> vlan 3486 (tagged)
> vlan 3487 (tagged)
> vlan 3488 (tagged)
> vlan 3489 (tagged)
> vlan 3490 (tagged)
> vlan 3491 (tagged)
> vlan 3492 (tagged)
>
> Regards,
> Pedro Sousa
>
>
>
>
> On Wed, Aug 26, 2015 at 2:37 PM, Dotan, Alon <alon.dotan at hp.com> wrote:
>
> Sounds like an issue in the environment.
>
> Is the PF (physical function, i.e. the external port) connected to a
> switch, and if so, as a trunk?
>
> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Joe Talerico
> Sent: Wednesday, August 26, 2015 16:26
> To: Pedro Sousa
> Cc: rdo-list at redhat.com
> Subject: Re: [Rdo-list] SR-IOV on openstack: no valid host is found
>
> On Wed, Aug 26, 2015 at 6:54 AM, Pedro Sousa <pgsousa at gmail.com> wrote:
>
> Hi,
>
> An update on this: although I can launch the VM and see the VF inside
> it, I have two problems.
>
> 1. Sometimes I don't get an IP from DHCP when I launch a new instance.
>
> Does the VF have the right VLAN tag?
>
> 2. I cannot ping the other hosts inside the tenant network using
> SR-IOV NICs. If I use OVS NICs it works.
>
> Are the VLANs set up properly?
>
> Joe
>
> Has anyone experienced these issues?
>
> Thanks,
>
> Pedro Sousa
>
> On Tue, Aug 25, 2015 at 12:02 PM, Pedro Sousa <pgsousa at gmail.com> wrote:
>
> Hi,
>
> For anyone interested, I got it working using this procedure:
>
> http://docs-draft.openstack.org/85/213985/10/check/gate-openstack-manuals-tox-doc-publish-checkbuild/1381a5e//publish-docs/networking-guide/adv_config_sriov.html
>
> I'm using RDO Juno on CentOS 7.1.
>
> My conf:
>
> Controller/Network Node:
>
> /etc/neutron/plugins/ml2/ml2_conf.ini:
>
> [ml2]
> type_drivers = vlan
> tenant_network_types = vlan
> mechanism_drivers = openvswitch,sriovnicswitch
>
> [ml2_type_vlan]
> network_vlan_ranges = int-vlan:1440:1449
>
> [securitygroup]
> enable_security_group = True
> firewall_driver = neutron.agent.firewall.NoopFirewallDriver
>
> /etc/neutron/plugins/ml2/ml2_conf_sriov.ini:
>
> [ml2_sriov]
> supported_pci_vendor_devs = 14e4:16af
> agent_required = True
>
> [sriov_nic]
> physical_device_mappings = int-vlan:p2p1
>
> /etc/nova/nova.conf:
>
> [DEFAULT]
> scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter
> scheduler_available_filters = nova.scheduler.filters.all_filters
> scheduler_available_filters = nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
>
> (scheduler_available_filters is a multi-valued option, so listing it
> twice keeps all the stock filters and adds PciPassthroughFilter.)
>
> /usr/lib/systemd/system/neutron-server.service:
>
> ExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini --log-file /var/log/neutron/server.log
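>
> (After editing the unit file, systemd needs a reload before the new
> ExecStart takes effect:)
>
> # systemctl daemon-reload
> # systemctl restart neutron-server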
>
> Compute Node:
>
> I had to install the ml2 plugin and the SR-IOV agent. Note that on a
> compute node deployed with RDO Packstack the ml2 plugin is not
> installed by default.
>
> # yum install openstack-neutron-ml2 openstack-neutron-sriov-nic-agent
>
> /etc/neutron/plugins/ml2/ml2_conf.ini:
>
> [securitygroup]
> # Controls if neutron security group is enabled or not.
> # It should be false when you use nova security group.
> enable_security_group = True
> firewall_driver = neutron.agent.firewall.NoopFirewallDriver
>
> /etc/neutron/plugins/ml2/ml2_conf_sriov.ini:
>
> [securitygroup]
> firewall_driver = neutron.agent.firewall.NoopFirewallDriver
>
> [sriov_nic]
> physical_device_mappings = int-vlan:p2p1
>
> /usr/lib/systemd/system/neutron-sriov-nic-agent.service:
>
> ExecStart=/usr/bin/neutron-sriov-nic-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini --log-file /var/log/neutron/sriov-nic-agent.log
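>
> (Likewise, reload systemd and start the agent; "neutron agent-list"
> run on the controller should then show the NIC Switch agent as alive.)
>
> # systemctl daemon-reload
> # systemctl enable neutron-sriov-nic-agent
> # systemctl start neutron-sriov-nic-agent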
>
> Regards,
>
> Pedro Sousa
>
> On Tue, Aug 25, 2015 at 11:04 AM, Joe Talerico <jtaleric at redhat.com> wrote:
>
>
>
> On Monday, August 17, 2015, Shaham Friedenberg <shahamf at gmail.com> wrote:
>
> Hey all,
>
> I deployed OpenStack on a Dell PowerEdge R620 installed with CentOS 7.
>
> SR-IOV is enabled in BIOS (both Virtualization
> Technology & SR-IOV).
>
> Also, I added the needed kernel parameters and created virtual
> functions on an Intel 82599 10G NIC.
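>
> (For reference, a minimal sketch of that setup, assuming the PF is
> p2p1: enable the IOMMU on the kernel command line, then create the
> VFs through sysfs. The VF count here is just an example.)
>
> # grubby --update-kernel=ALL --args="intel_iommu=on"
> # echo 7 > /sys/class/net/p2p1/device/sriov_numvfs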
>
> in nova.conf:
>
> 1. pci_passthrough_whitelist={"devname":"p2p1","physical_network":"sriovnet"}
>
> 2. scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,PciPassthroughFilter
>
> Where did you supply the whitelist?
>
> Also, after making the whitelist change did you
> check the nova-compute.log? It typically reports
> the PCI devices that can be used for instances.
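>
> (A quick way to check, assuming the default log location:)
>
> # grep -i pci /var/log/nova/nova-compute.log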
>
> Joe
>
> in ml2_conf.ini:
>
> 1. type_drivers = vxlan,vlan
>
> 2. mechanism_drivers = openvswitch,sriovnicswitch
>
> 3. network_vlan_ranges = sriovnet:80:90
>
> in ml2_conf_sriov.ini:
>
> 1. supported_pci_vendor_devs = 8086:10ed
>
> 2. agent_required = False
>
> in neutron-server.service:
>
> 1. ExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini --config-dir /etc/neutron/conf.d/neutron-server --log-file /var/log/neutron/server.log
>
> I created a network based on the physical network I defined
> (sriovnet), configured a subnet, and created a direct-type port
> (sketched below).
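>
> (For reference, the Juno-era CLI calls for that step look roughly
> like this; the names and segmentation ID are examples:)
>
> # neutron net-create sriov-net --provider:physical_network sriovnet \
>     --provider:network_type vlan --provider:segmentation_id 83
> # neutron subnet-create sriov-net 10.0.83.0/24
> # neutron port-create sriov-net --binding:vnic_type direct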
>
> When I create an instance (nova boot --flavor m1.large --image my_img
> --nic port-id=087ff574-fb14-47fd-82cb-454f176154ff test_sriov),
>
> I get the following error:
>
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner
>     return func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 86, in select_destinations
>     filter_properties)
>   File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 80, in select_destinations
>     raise exception.NoValidHost(reason=reason)
> NoValidHost: No valid host was found. There are not enough hosts available.
>
> 2015-08-18 04:28:22.886 17998 WARNING nova.scheduler.utils [req-5804960b-4614-4d0e-9a3d-a94964cf93f8 caf2b9813205455896e60c6d00c92b4d 4bd6b22041ef4123958a0f85c775b770 - - -] [instance: 470e16f9-002f-4ae4-82f4-17a83a93d860] Setting instance to ERROR state.
>
> Any idea what might be the problem here?
>
> Thanks,
>
> Shaham
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com