On 04/28/2013 09:54 AM, Zhang, Kimi (NSN - CN/Cheng Du) wrote:

Hi, Gary

 

Yes, I’m aware that packstack does not support Quantum yet. The whole setup was installed manually.

 

I did run quantum-server-setup and quantum-host-setup. I tried the linuxbridge plugin too; with it the VM gets an IP address without issue, but openvswitch has this problem…


ok.

If you configure an IP address manually on the VM, are you able to ping the port of the DHCP agent?

You can get the IP from quantum port-list.
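A sketch of that check follows; the port row and addresses below are made up for illustration, not taken from this setup:

```shell
# Sketch: extract the DHCP port's IP from 'quantum port-list' output,
# then ping it from the VM once the VM has a manually configured
# address. The sample row below is illustrative only.
row='| 1f386a2a-12 |      | fa:16:3e:aa:bb:cc | {"subnet_id": "...", "ip_address": "10.10.10.2"} |'
dhcp_ip=$(echo "$row" | sed -n 's/.*"ip_address": "\([0-9.]*\)".*/\1/p')
echo "DHCP port IP: $dhcp_ip"
# then, from inside the VM with a static address configured:
#   ping "$dhcp_ip"
```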


 

 

Regards,

Kimi

 

From: rdo-list-bounces@redhat.com [mailto:rdo-list-bounces@redhat.com] On Behalf Of ext Gary Kotton
Sent: Sunday, April 28, 2013 2:50 PM
To: rdo-list@redhat.com
Subject: Re: [Rdo-list] [Grizzly] Network problem with Quantum + Openvswitch + Vlan

 

Hi Kimi,
Thanks for the mail. Please see my inline comments below. Note that at the moment we do not have packstack support for Quantum, so a little manual plumbing needs to be done (not sure if you have done this already).
On the host where the Quantum service is running you need to run quantum-server-setup, and on the compute nodes you need to run quantum-host-setup (note that the relevant keystone credentials need to be set too).
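Roughly, the sequence could look like this; the credential variable names are the standard openrc ones, assumed rather than taken from this thread:

```shell
# Hypothetical pre-check: both setup scripts talk to keystone, so
# verify the usual openrc variables are exported before running them.
check_creds() {
  missing=0
  for v in OS_USERNAME OS_PASSWORD OS_TENANT_NAME OS_AUTH_URL; do
    if [ -z "$(printenv "$v")" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "credentials set"
}
check_creds || echo "export the openrc variables first"
# then: quantum-server-setup on the controller,
#       quantum-host-setup on each compute node
```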
Thanks
Gary

On 04/28/2013 09:38 AM, Zhang, Kimi (NSN - CN/Cheng Du) wrote:


When I start a VM instance, the VM can’t get an IP address. Could someone help me with this?


I will try

 

3 nodes Setup with RHEL 6.4 OS + rdo grizzly repository.

- Controller node:

Services: Keystone + Glance + Cinder + Quantum server + Nova services

Network: bond0(10.68.125.11 for O&M)

 

- Network node:

Services: quantum-openvswitch-agent,  quantum-l3-agent, quantum-dhcp-agent, quantum-metadata-agent

Network: bond0(10.68.125.15 for O&M) , p3p1 for VM internal network, p3p2 for external network


Please note that RHEL currently does not support network namespaces, so there are a number of limitations; we are addressing this at the moment. If namespaces are not used, it is suggested not to run the DHCP agent and the L3 agent on the same host, because without namespaces there is no network isolation between them.
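In practice that means telling both agents not to use namespaces; a minimal fragment, assuming the default RDO config paths:

```ini
# /etc/quantum/dhcp_agent.ini and /etc/quantum/l3_agent.ini
# (namespaces are unavailable on the RHEL 6.4 kernel, per the note above)
[DEFAULT]
use_namespaces = False
```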


 

- Compute node:

Services: nova-compute and quantum-openvswitch-agent

Network: bond0(10.68.125.16 for O&M), p3p1 for VM internal network

 

- The switch is configured to tag VLANs 1000-2999 on the p3p1 ports (VM network) of the network and compute nodes.

 

1. quantum.conf:

[DEFAULT]

debug = True

verbose = True

lock_path = $state_path/lock

bind_host = 0.0.0.0

bind_port = 9696

core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2

api_paste_config = api-paste.ini

rpc_backend = quantum.openstack.common.rpc.impl_kombu


Are you using rabbit or qpid?
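Given rpc_backend = impl_kombu together with the rabbit_host setting, rabbit appears to be in use; for comparison, the qpid variant of these settings would look roughly like this (a sketch, not from this setup):

```ini
# qpid alternative -- only relevant if qpidd is the broker in use
rpc_backend = quantum.openstack.common.rpc.impl_qpid
qpid_hostname = 10.68.125.11
```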


control_exchange = quantum

rabbit_host = 10.68.125.11

notification_driver = quantum.openstack.common.notifier.rpc_notifier

default_notification_level = INFO

notification_topics = notifications

[QUOTAS]

[DEFAULT_SERVICETYPE]

[AGENT]

polling_interval = 2

root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf

[keystone_authtoken]

auth_host = 10.68.125.11

auth_port = 35357

auth_protocol = http

signing_dir = /var/lib/quantum/keystone-signing

admin_tenant_name = service

admin_user = quantum

admin_password = password

 

2. ovs_quantum_plugin.ini

[DATABASE]

sql_connection = mysql://quantum:quantum@10.68.125.11:3306/ovs_quantum

reconnect_interval = 2

[OVS]

tenant_network_type = vlan

network_vlan_ranges = physnet1:1000:2999

bridge_mappings = physnet1:br-p3p1

[AGENT]

polling_interval = 2

[SECURITYGROUP]

 

3. nova.conf

[DEFAULT]

verbose=true

logdir = /var/log/nova

state_path = /var/lib/nova

lock_path = /var/lib/nova/tmp

volumes_dir = /etc/nova/volumes

dhcpbridge = /usr/bin/nova-dhcpbridge

dhcpbridge_flagfile = /etc/nova/nova.conf

force_dhcp_release = True

injected_network_template = /usr/share/nova/interfaces.template

libvirt_nonblocking = True

libvirt_inject_partition = -1

network_manager = nova.network.manager.FlatDHCPManager

iscsi_helper = tgtadm

compute_driver = libvirt.LibvirtDriver

libvirt_type=kvm

libvirt_ovs_bridge=br-int

firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver

manager=nova.conductor.manager.ConductorManager

rpc_backend = nova.openstack.common.rpc.impl_kombu

rabbit_host = 10.68.125.11

rootwrap_config = /etc/nova/rootwrap.conf

use_deprecated_auth=false

auth_strategy=keystone

glance_api_servers=10.68.125.11:9292

image_service=nova.image.glance.GlanceImageService

novnc_enabled=true

novncproxy_base_url=http://10.68.125.11:6080/vnc_auto.html

novncproxy_port=6080

vncserver_proxyclient_address=10.68.125.16

vncserver_listen=0.0.0.0

libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

libvirt_use_virtio_for_bridges=True

network_api_class=nova.network.quantumv2.api.API

quantum_url=http://10.68.125.11:9696

quantum_auth_strategy=keystone

quantum_admin_tenant_name=service

quantum_admin_username=quantum

quantum_admin_password=password

quantum_admin_auth_url=http://10.68.125.11:35357/v2.0

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

libvirt_vif_type=ethernet

service_quantum_metadata_proxy = True

quantum_metadata_proxy_shared_secret = helloOpenStack

metadata_host = 10.68.125.11

metadata_listen = 0.0.0.0

metadata_listen_port = 8775

[keystone_authtoken]

admin_tenant_name = service

admin_user = nova

admin_password = password

auth_host = 10.68.125.11

auth_port = 35357

auth_protocol = http

signing_dir = /tmp/keystone-signing-nova

 

4. ovs-vsctl show on the network node:

aeeb6cf7-271b-405a-aa17-1b95bcd9e301

    Bridge "br-p3p1"

        Port "p3p1"

            Interface "p3p1"

        Port "phy-br-p3p1"

            Interface "phy-br-p3p1"

        Port "br-p3p1"

            Interface "br-p3p1"

                type: internal

    Bridge br-ex

        Port br-ex

            Interface br-ex

                type: internal

        Port "qg-a83c0abd-f4"

            Interface "qg-a83c0abd-f4"

                type: internal

        Port "p3p2"

            Interface "p3p2"

    Bridge br-int

        Port br-int

            Interface br-int

                type: internal

        Port "int-br-p3p1"

            Interface "int-br-p3p1"

        Port "tap1f386a2a-12"

            tag: 1

            Interface "tap1f386a2a-12"

                type: internal

ovs_version: "1.9.0"

 

5. ovs-vsctl show on the compute node:

8d6c2637-ff69-4a2d-a7db-e4f181273bc0

    Bridge "br-p3p1"

        Port "br-p3p1"

            Interface "br-p3p1"

                type: internal

        Port "phy-br-p3p1"

            Interface "phy-br-p3p1"

        Port "p3p1"

            Interface "p3p1"

    Bridge br-int

        Port "qvo56a4572c-dc"

            tag: 2

            Interface "qvo56a4572c-dc"

        Port "int-br-p3p1"

            Interface "int-br-p3p1"

        Port br-int

            Interface br-int

                type: internal

ovs_version: "1.9.0"

 

On the compute node, I can see the DHCP request packets in tcpdump on qvo56a4572c-dc, but it seems the packets are not forwarded out, since I can’t see them on int-br-p3p1 on br-int or on any port of br-p3p1.
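One way to narrow down where the frame is dropped is to capture at each hop in turn; a sketch using the interface names from the ovs-vsctl output above:

```shell
# Trace the DHCP request hop by hop on the compute node
# (qvo -> br-int -> veth pair -> physical bridge -> NIC).
# Run each command while the VM retries DHCP and note where packets stop.
for ifc in qvo56a4572c-dc int-br-p3p1 phy-br-p3p1 p3p1; do
  cmd="tcpdump -e -n -i $ifc 'port 67 or port 68'"
  echo "$cmd"
done
```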


Any chance you could share the DHCP and L3 agent configuration files? Please check that use_namespaces = False is set in both.

Are there any log errors?
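A quick way to check, assuming the default RDO log locations (adjust the paths if yours differ):

```shell
# Scan the quantum agent logs on the network node for errors; the
# paths below are the packaged defaults and may differ on your hosts.
for f in /var/log/quantum/openvswitch-agent.log \
         /var/log/quantum/dhcp-agent.log \
         /var/log/quantum/l3-agent.log; do
  if [ -f "$f" ]; then
    grep -n 'ERROR\|TRACE' "$f" | tail -n 20
  else
    echo "not found: $f"
  fi
done
```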


 

 

Thank you!

 

 

Regards,

Kimi

 

 

 




_______________________________________________
Rdo-list mailing list
Rdo-list@redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list