Hi sir,
This is the debug output from neutron's server.log; it appears when spawning an instance.
2014-09-10 18:52:03.656 16952 DEBUG urllib3.connectionpool [-] "POST /v2.0/tokens HTTP/1.1" 401 133 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-09-10 18:52:03.657 16952 ERROR neutron.notifiers.nova [-] Failed to notify nova on events: [{'status': 'completed', 'tag': u'4fbbddbf-ff50-45c0-b5af-3f92e9b81f68', 'name': 'network-vif-plugged', 'server_uuid': u'0d5c1932-cc2b-42e3-95da-2fe04e33b570'}]
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova Traceback (most recent call last):
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova   File "/usr/lib/python2.6/site-packages/neutron/notifiers/nova.py", line 221, in send_events
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova     batched_events)
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova   File "/usr/lib/python2.6/site-packages/novaclient/v1_1/contrib/server_external_events.py", line 39, in create
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova     return_raw=True)
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova   File "/usr/lib/python2.6/site-packages/novaclient/base.py", line 152, in _create
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova     _resp, body = self.api.client.post(url, body=body)
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova   File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 312, in post
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova     return self._cs_request(url, 'POST', **kwargs)
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova   File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 275, in _cs_request
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova     self.authenticate()
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova   File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 408, in authenticate
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova     auth_url = self._v2_auth(auth_url)
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova   File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 495, in _v2_auth
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova     return self._authenticate(url, body)
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova   File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 508, in _authenticate
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova     **kwargs)
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova   File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 268, in _time_request
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova     resp, body = self.request(url, method, **kwargs)
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova   File "/usr/lib/python2.6/site-packages/novaclient/client.py", line 262, in request
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova     raise exceptions.from_response(resp, body, url, method)
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova Unauthorized: User nova is unauthorized for tenant d3e2355e31b449cca9dd57fa5073ec2f (HTTP 401)
2014-09-10 18:52:03.657 16952 TRACE neutron.notifiers.nova
2014-09-10 18:52:04.892 16952 DEBUG neutron.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-a460d750-d0b6-49d
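
For reference, the 401 above is neutron's nova notifier authenticating as the "nova"
service user against tenant d3e2355e31b449cca9dd57fa5073ec2f. A raw re-run of that same
token request (only a sketch; it assumes the nova_admin_* values from the neutron.conf
further down in this thread -- adjust the endpoint and password if yours differ):

  curl -s -X POST http://192.168.32.20:35357/v2.0/tokens \
    -H 'Content-Type: application/json' \
    -d '{"auth": {"tenantId": "d3e2355e31b449cca9dd57fa5073ec2f",
                  "passwordCredentials": {"username": "nova", "password": "secret"}}}'

If this also returns 401, keystone itself is rejecting the nova credentials or the
nova/tenant role assignment, independent of neutron.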
Rasanjaya Subasinghe
Dev/Ops Engineer, WSO2 Inc.
Mobile: +94772250358
E-Mail: rasanjaya(a)wso2.com
On Sep 10, 2014, at 1:23 PM, Rasanjaya Subasinghe <Rasaposha(a)gmail.com> wrote:
Hi,
Any luck sir..
cheers
On Sep 10, 2014, at 12:03 PM, Rasanjaya Subasinghe <Rasaposha(a)gmail.com> wrote:
> Hi sir,
>
> I will provide more details to reproduce the issue.
>
> cheers
>
> On Wed, Sep 10, 2014 at 12:02 PM, Rasanjaya Subasinghe <rasaposha(a)gmail.com> wrote:
> Hi Kashyap,
>
> This is the configuration I have made to integrate with LDAP:
>
> 1. keystone.conf
>
> url = ldap://192.168.16.100
> user = cn=admin,dc=example,dc=org
> password = 123
> suffix = dc=example,dc=org
>
> user_tree_dn = ou=Users,dc=example,dc=org
> user_objectclass = inetOrgPerson
> user_id_attribute = cn
> user_name_attribute = cn
> user_pass_attribute = userPassword
> user_enabled_emulation = True
> user_enabled_emulation_dn = cn=enabled_users,ou=Users,dc=example,dc=org
> user_allow_create = False
> user_allow_update = False
> user_allow_delete = False
>
> tenant_tree_dn = ou=Groups,dc=example,dc=org
> tenant_objectclass = groupOfNames
> tenant_id_attribute = cn
> #tenant_domain_id_attribute = businessCategory
> #tenant_domain_id_attribute = cn
> tenant_member_attribute = member
> tenant_name_attribute = cn
> tenant_domain_id_attribute = None
> tenant_allow_create = False
> tenant_allow_update = False
> tenant_allow_delete = False
>
>
> role_tree_dn = ou=Roles,dc=example,dc=org
> role_objectclass = organizationalRole
> role_member_attribute = roleOccupant
> role_id_attribute = cn
> role_name_attribute = cn
> role_allow_create = False
> role_allow_update = False
> role_allow_delete = False
>
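> (For completeness: the options above all sit under the [ldap] section of keystone.conf,
> with the identity driver switched over as well -- roughly:)
>
> [identity]
> driver = keystone.identity.backends.ldap.Identity
>
> [ldap]
> # ... the url/user/password/suffix and user/tenant/role options listed above ...
>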
> 2. neutron.conf
>
> [DEFAULT]
> # Print more verbose output (set logging level to INFO instead of default WARNING level).
> # verbose = True
> verbose = True
>
> # Print debugging output (set logging level to DEBUG instead of default WARNING level).
> # debug = False
> debug = True
>
> # Where to store Neutron state files. This directory must be writable by the
> # user executing the agent.
> # state_path = /var/lib/neutron
>
> # Where to store lock files
> # lock_path = $state_path/lock
>
> # log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
> # log_date_format = %Y-%m-%d %H:%M:%S
>
> # use_syslog -> syslog
> # log_file and log_dir -> log_dir/log_file
> # (not log_file) and log_dir -> log_dir/{binary_name}.log
> # use_stderr -> stderr
> # (not user_stderr) and (not log_file) -> stdout
> # publish_errors -> notification system
>
> # use_syslog = False
> use_syslog = False
> # syslog_log_facility = LOG_USER
>
> # use_stderr = False
> # log_file =
> # log_dir =
> log_dir =/var/log/neutron
>
> # publish_errors = False
>
> # Address to bind the API server to
> # bind_host = 0.0.0.0
> bind_host = 0.0.0.0
>
> # Port the bind the API server to
> # bind_port = 9696
> bind_port = 9696
>
> # Path to the extensions. Note that this can be a colon-separated list of
> # paths. For example:
> # api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions
> # The __path__ of neutron.extensions is appended to this, so if your
> # extensions are in there you don't need to specify them here
> # api_extensions_path =
>
> # (StrOpt) Neutron core plugin entrypoint to be loaded from the
> # neutron.core_plugins namespace. See setup.cfg for the entrypoint names of the
> # plugins included in the neutron source distribution. For compatibility with
> # previous versions, the class name of a plugin can be specified instead of its
> # entrypoint name.
> #
> # core_plugin =
> core_plugin =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
> # Example: core_plugin = ml2
>
> # (ListOpt) List of service plugin entrypoints to be loaded from the
> # neutron.service_plugins namespace. See setup.cfg for the entrypoint names of
> # the plugins included in the neutron source distribution. For compatibility
> # with previous versions, the class name of a plugin can be specified instead
> # of its entrypoint name.
> #
> # service_plugins =
> service_plugins =neutron.services.firewall.fwaas_plugin.FirewallPlugin
> # Example: service_plugins = router,firewall,lbaas,vpnaas,metering
>
> # Paste configuration file
> # api_paste_config = /usr/share/neutron/api-paste.ini
>
> # The strategy to be used for auth.
> # Supported values are 'keystone'(default), 'noauth'.
> # auth_strategy = noauth
> auth_strategy = keystone
>
> # Base MAC address. The first 3 octets will remain unchanged. If the
> # 4th octet is not 00, it will also be used. The others will be
> # randomly generated.
> # 3 octet
> # base_mac = fa:16:3e:00:00:00
> base_mac = fa:16:3e:00:00:00
> # 4 octet
> # base_mac = fa:16:3e:4f:00:00
>
> # Maximum amount of retries to generate a unique MAC address
> # mac_generation_retries = 16
> mac_generation_retries = 16
>
> # DHCP Lease duration (in seconds)
> # dhcp_lease_duration = 86400
> dhcp_lease_duration = 86400
>
> # Allow sending resource operation notification to DHCP agent
> # dhcp_agent_notification = True
>
> # Enable or disable bulk create/update/delete operations
> # allow_bulk = True
> allow_bulk = True
> # Enable or disable pagination
> # allow_pagination = False
> allow_pagination = False
> # Enable or disable sorting
> # allow_sorting = False
> allow_sorting = False
> # Enable or disable overlapping IPs for subnets
> # Attention: the following parameter MUST be set to False if Neutron is
> # being used in conjunction with nova security groups
> # allow_overlapping_ips = True
> allow_overlapping_ips = True
> # Ensure that configured gateway is on subnet
> # force_gateway_on_subnet = False
>
>
> # RPC configuration options. Defined in rpc __init__
> # The messaging module to use, defaults to kombu.
> # rpc_backend = neutron.openstack.common.rpc.impl_kombu
> rpc_backend = neutron.openstack.common.rpc.impl_kombu
> # Size of RPC thread pool
> # rpc_thread_pool_size = 64
> # Size of RPC connection pool
> # rpc_conn_pool_size = 30
> # Seconds to wait for a response from call or multicall
> # rpc_response_timeout = 60
> # Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
> # rpc_cast_timeout = 30
> # Modules of exceptions that are permitted to be recreated
> # upon receiving exception data from an rpc call.
> # allowed_rpc_exception_modules = neutron.openstack.common.exception, nova.exception
> # AMQP exchange to connect to if using RabbitMQ or QPID
> # control_exchange = neutron
> control_exchange = neutron
>
> # If passed, use a fake RabbitMQ provider
> # fake_rabbit = False
>
> # Configuration options if sending notifications via kombu rpc (these are
> # the defaults)
> # SSL version to use (valid only if SSL enabled)
> # kombu_ssl_version =
> # SSL key file (valid only if SSL enabled)
> # kombu_ssl_keyfile =
> # SSL cert file (valid only if SSL enabled)
> # kombu_ssl_certfile =
> # SSL certification authority file (valid only if SSL enabled)
> # kombu_ssl_ca_certs =
> # IP address of the RabbitMQ installation
> # rabbit_host = localhost
> rabbit_host = 192.168.32.20
> # Password of the RabbitMQ server
> # rabbit_password = guest
> rabbit_password = guest
> # Port where RabbitMQ server is running/listening
> # rabbit_port = 5672
> rabbit_port = 5672
> # RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
> # rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'
> # rabbit_hosts = localhost:5672
> rabbit_hosts = 192.168.32.20:5672
> # User ID used for RabbitMQ connections
> # rabbit_userid = guest
> rabbit_userid = guest
> # Location of a virtual RabbitMQ installation.
> # rabbit_virtual_host = /
> rabbit_virtual_host = /
> # Maximum retries with trying to connect to RabbitMQ
> # (the default of 0 implies an infinite retry count)
> # rabbit_max_retries = 0
> # RabbitMQ connection retry interval
> # rabbit_retry_interval = 1
> # Use HA queues in RabbitMQ (x-ha-policy: all). You need to
> # wipe RabbitMQ database when changing this option. (boolean value)
> # rabbit_ha_queues = false
> rabbit_ha_queues = False
>
> # QPID
> # rpc_backend=neutron.openstack.common.rpc.impl_qpid
> # Qpid broker hostname
> # qpid_hostname = localhost
> # Qpid broker port
> # qpid_port = 5672
> # Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
> # qpid_hosts is defaulted to '$qpid_hostname:$qpid_port'
> # qpid_hosts = localhost:5672
> # Username for qpid connection
> # qpid_username = ''
> # Password for qpid connection
> # qpid_password = ''
> # Space separated list of SASL mechanisms to use for auth
> # qpid_sasl_mechanisms = ''
> # Seconds between connection keepalive heartbeats
> # qpid_heartbeat = 60
> # Transport to use, either 'tcp' or 'ssl'
> # qpid_protocol = tcp
> # Disable Nagle algorithm
> # qpid_tcp_nodelay = True
>
> # ZMQ
> # rpc_backend=neutron.openstack.common.rpc.impl_zmq
> # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
> # The "host" option should point or resolve to this address.
> # rpc_zmq_bind_address = *
>
> # ============ Notification System Options =====================
>
> # Notifications can be sent when network/subnet/port are created, updated or deleted.
> # There are three methods of sending notifications: logging (via the
> # log_file directive), rpc (via a message queue) and
> # noop (no notifications sent, the default)
>
> # Notification_driver can be defined multiple times
> # Do nothing driver
> # notification_driver = neutron.openstack.common.notifier.no_op_notifier
> # Logging driver
> # notification_driver = neutron.openstack.common.notifier.log_notifier
> # RPC driver.
> # notification_driver = neutron.openstack.common.notifier.rpc_notifier
>
> # default_notification_level is used to form actual topic name(s) or to set logging level
> # default_notification_level = INFO
>
> # default_publisher_id is a part of the notification payload
> # host = myhost.com
> # default_publisher_id = $host
>
> # Defined in rpc_notifier, can be comma separated values.
> # The actual topic names will be %s.%(default_notification_level)s
> # notification_topics = notifications
>
> # Default maximum number of items returned in a single response,
> # value == infinite and value < 0 means no max limit, and value must
> # be greater than 0. If the number of items requested is greater than
> # pagination_max_limit, server will just return pagination_max_limit
> # of number of items.
> # pagination_max_limit = -1
>
> # Maximum number of DNS nameservers per subnet
> # max_dns_nameservers = 5
>
> # Maximum number of host routes per subnet
> # max_subnet_host_routes = 20
>
> # Maximum number of fixed ips per port
> # max_fixed_ips_per_port = 5
>
> # =========== items for agent management extension =============
> # Seconds to regard the agent as down; should be at least twice
> # report_interval, to be sure the agent is down for good
> # agent_down_time = 75
> agent_down_time = 75
> # =========== end of items for agent management extension =====
>
> # =========== items for agent scheduler extension =============
> # Driver to use for scheduling network to DHCP agent
> # network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
> # Driver to use for scheduling router to a default L3 agent
> # router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
> router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
> # Driver to use for scheduling a loadbalancer pool to an lbaas agent
> # loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler
>
> # Allow auto scheduling networks to DHCP agent. It will schedule non-hosted
> # networks to first DHCP agent which sends get_active_networks message to
> # neutron server
> # network_auto_schedule = True
>
> # Allow auto scheduling routers to L3 agent. It will schedule non-hosted
> # routers to first L3 agent which sends sync_routers message to neutron server
> # router_auto_schedule = True
>
> # Number of DHCP agents scheduled to host a network. This enables redundant
> # DHCP agents for configured networks.
> # dhcp_agents_per_network = 1
> dhcp_agents_per_network = 1
>
> # =========== end of items for agent scheduler extension =====
>
> # =========== WSGI parameters related to the API server ==============
> # Number of separate worker processes to spawn. The default, 0, runs the
> # worker thread in the current process. Greater than 0 launches that number of
> # child processes as workers. The parent process manages them.
> # api_workers = 0
> api_workers = 0
>
> # Number of separate RPC worker processes to spawn. The default, 0, runs the
> # worker thread in the current process. Greater than 0 launches that number of
> # child processes as RPC workers. The parent process manages them.
> # This feature is experimental until issues are addressed and testing has been
> # enabled for various plugins for compatibility.
> # rpc_workers = 0
>
> # Sets the value of TCP_KEEPIDLE in seconds to use for each server socket when
> # starting API server. Not supported on OS X.
> # tcp_keepidle = 600
>
> # Number of seconds to keep retrying to listen
> # retry_until_window = 30
>
> # Number of backlog requests to configure the socket with.
> # backlog = 4096
>
> # Max header line to accommodate large tokens
> # max_header_line = 16384
>
> # Enable SSL on the API server
> # use_ssl = False
> use_ssl = False
>
> # Certificate file to use when starting API server securely
> # ssl_cert_file = /path/to/certfile
>
> # Private key file to use when starting API server securely
> # ssl_key_file = /path/to/keyfile
>
> # CA certificate file to use when starting API server securely to
> # verify connecting clients. This is an optional parameter only required if
> # API clients need to authenticate to the API server using SSL certificates
> # signed by a trusted CA
> # ssl_ca_file = /path/to/cafile
> # ======== end of WSGI parameters related to the API server ==========
>
>
> # ======== neutron nova interactions ==========
> # Send notification to nova when port status is active.
> # notify_nova_on_port_status_changes = False
> notify_nova_on_port_status_changes = True
>
> # Send notifications to nova when port data (fixed_ips/floatingips) change
> # so nova can update its cache.
> # notify_nova_on_port_data_changes = False
> notify_nova_on_port_data_changes = True
>
> # URL for connection to nova (Only supports one nova region currently).
> # nova_url = http://127.0.0.1:8774/v2
> nova_url = http://192.168.32.20:8774/v2
>
> # Name of nova region to use. Useful if keystone manages more than one region
> # nova_region_name =
> nova_region_name =RegionOne
>
> # Username for connection to nova in admin context
> # nova_admin_username =
> nova_admin_username =nova
>
> # The uuid of the admin nova tenant
> # nova_admin_tenant_id =
> nova_admin_tenant_id =d3e2355e31b449cca9dd57fa5073ec2f
>
> # Password for connection to nova in admin context.
> # nova_admin_password =
> nova_admin_password =secret
>
> # Authorization URL for connection to nova in admin context.
> # nova_admin_auth_url =
> nova_admin_auth_url =http://192.168.32.20:35357/v2.0
>
> # Number of seconds between sending events to nova if there are any events to send
> # send_events_interval = 2
> send_events_interval = 2
>
> # ======== end of neutron nova interactions ==========
> rabbit_use_ssl=False
>
> [quotas]
> # Default driver to use for quota checks
> # quota_driver = neutron.db.quota_db.DbQuotaDriver
>
> # Resource name(s) that are supported in quota features
> # quota_items = network,subnet,port
>
> # Default number of resource allowed per tenant. A negative value means
> # unlimited.
> # default_quota = -1
>
> # Number of networks allowed per tenant. A negative value means unlimited.
> # quota_network = 10
>
> # Number of subnets allowed per tenant. A negative value means unlimited.
> # quota_subnet = 10
>
> # Number of ports allowed per tenant. A negative value means unlimited.
> # quota_port = 50
>
> # Number of security groups allowed per tenant. A negative value means
> # unlimited.
> # quota_security_group = 10
>
> # Number of security group rules allowed per tenant. A negative value means
> # unlimited.
> # quota_security_group_rule = 100
>
> # Number of vips allowed per tenant. A negative value means unlimited.
> # quota_vip = 10
>
> # Number of pools allowed per tenant. A negative value means unlimited.
> # quota_pool = 10
>
> # Number of pool members allowed per tenant. A negative value means unlimited.
> # The default is unlimited because a member is not a real resource consumer
> # on Openstack. However, on back-end, a member is a resource consumer
> # and that is the reason why quota is possible.
> # quota_member = -1
>
> # Number of health monitors allowed per tenant. A negative value means
> # unlimited.
> # The default is unlimited because a health monitor is not a real resource
> # consumer on Openstack. However, on back-end, a member is a resource consumer
> # and that is the reason why quota is possible.
> # quota_health_monitors = -1
>
> # Number of routers allowed per tenant. A negative value means unlimited.
> # quota_router = 10
>
> # Number of floating IPs allowed per tenant. A negative value means unlimited.
> # quota_floatingip = 50
>
> [agent]
> # Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" to use the real
> # root filter facility.
> # Change to "sudo" to skip the filtering and just run the comand directly
> # root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
> root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
>
> # =========== items for agent management extension =============
> # seconds between nodes reporting state to server; should be less than
> # agent_down_time, best if it is half or less than agent_down_time
> # report_interval = 30
> report_interval = 30
>
> # =========== end of items for agent management extension =====
>
> [keystone_authtoken]
> # auth_host = 127.0.0.1
> auth_host = 192.168.32.20
> # auth_port = 35357
> auth_port = 35357
> # auth_protocol = http
> auth_protocol = http
> # admin_tenant_name = %SERVICE_TENANT_NAME%
> admin_tenant_name = services
> # admin_user = %SERVICE_USER%
> admin_user = neutron
> # admin_password = %SERVICE_PASSWORD%
> admin_password = secret
> auth_uri=http://192.168.32.20:5000/
>
> [database]
> # This line MUST be changed to actually run the plugin.
> # Example:
> # connection = mysql://root:pass@127.0.0.1:3306/neutron
> connection = mysql://neutron:secret@192.168.32.20/ovs_neutron
> # Replace 127.0.0.1 above with the IP address of the database used by the
> # main neutron server. (Leave it as is if the database runs on this host.)
> # connection = sqlite://
>
> # The SQLAlchemy connection string used to connect to the slave database
> # slave_connection =
>
> # Database reconnection retry times - in event connectivity is lost
> # set to -1 implies an infinite retry count
> # max_retries = 10
> max_retries = 10
>
> # Database reconnection interval in seconds - if the initial connection to the
> # database fails
> # retry_interval = 10
> retry_interval = 10
>
> # Minimum number of SQL connections to keep open in a pool
> # min_pool_size = 1
>
> # Maximum number of SQL connections to keep open in a pool
> # max_pool_size = 10
>
> # Timeout in seconds before idle sql connections are reaped
> # idle_timeout = 3600
> idle_timeout = 3600
>
> # If set, use this value for max_overflow with sqlalchemy
> # max_overflow = 20
>
> # Verbosity of SQL debugging information. 0=None, 100=Everything
> # connection_debug = 0
>
> # Add python stack traces to SQL as comment strings
> # connection_trace = False
>
> # If set, use this value for pool_timeout with sqlalchemy
> # pool_timeout = 10
>
> [service_providers]
> # Specify service providers (drivers) for advanced services like loadbalancer, VPN, Firewall.
> # Must be in form:
> # service_provider=<service_type>:<name>:<driver>[:default]
> # List of allowed service types includes LOADBALANCER, FIREWALL, VPN
> # Combination of <service type> and <name> must be unique; <driver> must also be unique
> # This is multiline option, example for default provider:
> # service_provider=LOADBALANCER:name:lbaas_plugin_driver_path:default
> # example of non-default provider:
> # service_provider=FIREWALL:name2:firewall_driver_path
> # --- Reference implementations ---
> # service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
> service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
> # In order to activate Radware's lbaas driver you need to uncomment the next line.
> # If you want to keep the HA Proxy as the default lbaas driver, remove the attribute default from the line below.
> # Otherwise comment the HA Proxy line
> # service_provider = LOADBALANCER:Radware:neutron.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver:default
> # uncomment the following line to make the 'netscaler' LBaaS provider available.
> # service_provider=LOADBALANCER:NetScaler:neutron.services.loadbalancer.drivers.netscaler.netscaler_driver.NetScalerPluginDriver
> # Uncomment the following line (and comment out the OpenSwan VPN line) to enable Cisco's VPN driver.
> # service_provider=VPN:cisco:neutron.services.vpn.service_drivers.cisco_ipsec.CiscoCsrIPsecVPNDriver:default
> # Uncomment the line below to use Embrane heleos as Load Balancer service provider.
> # service_provider=LOADBALANCER:Embrane:neutron.services.loadbalancer.drivers.embrane.driver.EmbraneLbaas:default
>
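> (A quick check that the nova_admin_* account configured above really has the admin
> role on the services tenant through the LDAP backend -- only a sketch, assuming the
> keystone CLI and that the "services" tenant name maps to the nova_admin_tenant_id above:)
>
>   keystone --os-username admin --os-password secret \
>            --os-tenant-name admin \
>            --os-auth-url http://192.168.32.20:35357/v2.0 \
>            user-role-list --user nova --tenant services
>
> (With the ldif below, this should list the admin role coming from
> cn=admin,cn=services,ou=Groups,dc=example,dc=org.)
>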
> 3. LDIF file for OpenLDAP
> # extended LDIF
> #
> # LDAPv3
> # base <dc=example,dc=org> with scope subtree
> # filter: (objectclass=*)
> # requesting: ALL
> #
>
> # example.org
> dn: dc=example,dc=org
> objectClass: top
> objectClass: dcObject
> objectClass: organization
> o: example Inc
> dc: example
>
> # Groups, example.org
> dn: ou=Groups,dc=example,dc=org
> ou: Groups
> objectClass: organizationalUnit
>
> # Users, example.org
> dn: ou=Users,dc=example,dc=org
> ou: users
> objectClass: organizationalUnit
>
> # Roles, example.org
> dn: ou=Roles,dc=example,dc=org
> ou: roles
> objectClass: organizationalUnit
>
> # admin, Users, example.org
> dn: cn=admin,ou=Users,dc=example,dc=org
> cn: admin
> objectClass: inetOrgPerson
> objectClass: top
> sn: admin
> uid: admin
> userPassword: secret
>
> # demo, Users, example.org
> dn: cn=demo,ou=Users,dc=example,dc=org
> cn: demo
> objectClass: inetOrgPerson
> objectClass: top
> sn: demo
> uid: demo
> userPassword: demo
>
> # cinder, Users, example.org
> dn: cn=cinder,ou=Users,dc=example,dc=org
> cn: cinder
> objectClass: inetOrgPerson
> objectClass: top
> sn: cinder
> uid: cinder
> userPassword: secret
>
> # glance, Users, example.org
> dn: cn=glance,ou=Users,dc=example,dc=org
> cn: glance
> objectClass: inetOrgPerson
> objectClass: top
> sn: glance
> uid: glance
> userPassword: secret
>
> # nova, Users, example.org
> dn: cn=nova,ou=Users,dc=example,dc=org
> cn: nova
> objectClass: inetOrgPerson
> objectClass: top
> sn: nova
> uid: nova
> userPassword: secret
>
> # neutron, Users, example.org
> dn: cn=neutron,ou=Users,dc=example,dc=org
> cn: neutron
> objectClass: inetOrgPerson
> objectClass: top
> sn: neutron
> uid: neutron
> userPassword: secret
>
> # enabled_users, Users, example.org
> dn: cn=enabled_users,ou=Users,dc=example,dc=org
> cn: enabled_users
> member: cn=admin,ou=Users,dc=example,dc=org
> member: cn=demo,ou=Users,dc=example,dc=org
> member: cn=nova,ou=Users,dc=example,dc=org
> member: cn=glance,ou=Users,dc=example,dc=org
> member: cn=cinder,ou=Users,dc=example,dc=org
> member: cn=neutron,ou=Users,dc=example,dc=org
> objectClass: groupOfNames
>
> # demo, Groups, example.org
> dn: cn=demo,ou=Groups,dc=example,dc=org
> cn: demo
> objectClass: groupOfNames
> member: cn=admin,ou=Users,dc=example,dc=org
> member: cn=demo,ou=Users,dc=example,dc=org
> member: cn=nova,ou=Users,dc=example,dc=org
> member: cn=glance,ou=Users,dc=example,dc=org
> member: cn=cinder,ou=Users,dc=example,dc=org
> member: cn=neutron,ou=Users,dc=example,dc=org
>
>
> # Member, demo, Groups, example.org
> dn: cn=Member,cn=demo,ou=Groups,dc=example,dc=org
> cn: member
> description: Role associated with openstack users
> objectClass: organizationalRole
> roleOccupant: cn=demo,ou=Users,dc=example,dc=org
>
> # admin, demo, Groups, example.org
> dn: cn=admin,cn=demo,ou=Groups,dc=example,dc=org
> cn: admin
> description: Role associated with openstack users
> objectClass: organizationalRole
> roleOccupant: cn=admin,ou=Users,dc=example,dc=org
> roleOccupant: cn=nova,ou=Users,dc=example,dc=org
> roleOccupant: cn=glance,ou=Users,dc=example,dc=org
> roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
> roleOccupant: cn=neutron,ou=Users,dc=example,dc=org
>
>
> # services, Groups, example.org
> dn: cn=services,ou=Groups,dc=example,dc=org
> cn: services
> objectClass: groupOfNames
> member: cn=admin,ou=Users,dc=example,dc=org
> member: cn=demo,ou=Users,dc=example,dc=org
> member: cn=nova,ou=Users,dc=example,dc=org
> member: cn=glance,ou=Users,dc=example,dc=org
> member: cn=cinder,ou=Users,dc=example,dc=org
> member: cn=neutron,ou=Users,dc=example,dc=org
>
> # admin, services, Groups, example.org
> dn: cn=admin,cn=services,ou=Groups,dc=example,dc=org
> cn: admin
> description: Role associated with openstack users
> objectClass: organizationalRole
> roleOccupant: cn=admin,ou=Users,dc=example,dc=org
> roleOccupant: cn=nova,ou=Users,dc=example,dc=org
> roleOccupant: cn=glance,ou=Users,dc=example,dc=org
> roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
> roleOccupant: cn=neutron,ou=Users,dc=example,dc=org
>
> # admin, Groups, example.org
> dn: cn=admin,ou=Groups,dc=example,dc=org
> cn: admin
> objectClass: groupOfNames
> member: cn=admin,ou=Users,dc=example,dc=org
> member: cn=demo,ou=Users,dc=example,dc=org
> member: cn=nova,ou=Users,dc=example,dc=org
> member: cn=glance,ou=Users,dc=example,dc=org
> member: cn=cinder,ou=Users,dc=example,dc=org
> member: cn=neutron,ou=Users,dc=example,dc=org
>
> # admin, admin, Groups, example.org
> dn: cn=admin,cn=admin,ou=Groups,dc=example,dc=org
> cn: admin
> description: Role associated with openstack users
> objectClass: organizationalRole
> roleOccupant: cn=admin,ou=Users,dc=example,dc=org
> roleOccupant: cn=nova,ou=Users,dc=example,dc=org
> roleOccupant: cn=glance,ou=Users,dc=example,dc=org
> roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
> roleOccupant: cn=neutron,ou=Users,dc=example,dc=org
>
> # Member, Roles, example.org
> dn: cn=Member,ou=Roles,dc=example,dc=org
> cn: member
> description: Role associated with openstack users
> objectClass: organizationalRole
> roleOccupant: cn=demo,ou=Users,dc=example,dc=org
>
> # admin, Roles, example.org
> dn: cn=admin,ou=Roles,dc=example,dc=org
> cn: admin
> description: Role associated with openstack users
> objectClass: organizationalRole
> roleOccupant: cn=admin,ou=Users,dc=example,dc=org
> roleOccupant: cn=nova,ou=Users,dc=example,dc=org
> roleOccupant: cn=glance,ou=Users,dc=example,dc=org
> roleOccupant: cn=cinder,ou=Users,dc=example,dc=org
> roleOccupant: cn=neutron,ou=Users,dc=example,dc=org
>
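> (And to double-check the same assignment straight from LDAP, bypassing keystone --
> a sketch using the bind credentials from the keystone.conf [ldap] options above:)
>
>   ldapsearch -x -H ldap://192.168.16.100 \
>       -D "cn=admin,dc=example,dc=org" -w 123 \
>       -b "ou=Groups,dc=example,dc=org" \
>       "(&(objectClass=organizationalRole)(roleOccupant=cn=nova,ou=Users,dc=example,dc=org))"
>
> (This should return the cn=admin role entries under cn=services and the other groups,
> matching the ldif above.)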
>
> On Wed, Sep 10, 2014 at 11:56 AM, Rasanjaya Subasinghe <rasaposha(a)gmail.com> wrote:
>>
> Hi,
> Sorry for the inconvenience, sir. I have attached the keystone.conf, neutron.conf,
> and LDAP ldif file.
> It is CentOS 6.5, with one controller and 3 compute nodes in an in-house cloud, and
> without the LDAP keystone settings (driver=keystone.identity.backends.ldap.Identity)
> everything works fine. That is,
> 1. instances spawn perfectly,
> 2. live migration works perfectly.
> Then, after configuring keystone with the LDAP driver, that error appears in the
> neutron server.log.
> 3. This setup was tested without ml2, and even the ml2 test ends with the same issue.
> I have attached the LDAP file and neutron file.
> *keystone version 0.9.0
>
>
>
>
>
> Below is the neutron error shown in compute.log.
>
> On Wed, Sep 10, 2014 at 11:52 AM, Rasanjaya Subasinghe <rasaposha(a)gmail.com> wrote:
>
> On Sep 9, 2014, at 8:09 PM, Rasanjaya Subasinghe <Rasaposha(a)gmail.com> wrote:
>
>>
>> Hi Kashyap,
>> It is CentOS 6.5, with one controller and 3 compute nodes in an in-house cloud, and
>> without the LDAP keystone settings (driver=keystone.identity.backends.ldap.Identity)
>> everything works fine. That is,
>> 1. instances spawn perfectly,
>> 2. live migration works perfectly.
>> Then, after configuring keystone with the LDAP driver, that error appears in the
>> neutron server.log.
>> 3. This setup was tested without ml2, and even the ml2 test ends with the same issue.
>> I have attached the LDAP file and neutron file.
>> *keystone version 0.9.0
>> <keystone.conf>
>> <neutron.conf>
>> <staging.ldif>
>> Below is the neutron error shown in compute.log.
>>
>> <Screen Shot 2014-09-09 at 8.08.25 PM.png>
>>
>> cheers,
>> thanks
>> Begin forwarded message:
>>
>>> From: Kashyap Chamarthy <kchamart(a)redhat.com>
>>> Subject: Re: [Rdo-list] icehouse ldap integration
>>> Date: September 9, 2014 at 7:27:59 PM GMT+5:30
>>> To: Rasanjaya Subasinghe <rasaposha(a)gmail.com>
>>> Cc: rdo-list(a)redhat.com
>>>
>>> On Tue, Sep 09, 2014 at 06:19:56PM +0530, Rasanjaya Subasinghe wrote:
>>>>
>>>> Hi,
>>>> I tried to configure OpenStack Icehouse with LDAP and all things
>>>> go well except a neutron issue; this is the issue which appears in the
>>>> server.log file of the neutron service.
>>>>
>>>> Can you guide me for this matter? thanks for the help.
>>>
>>> This information you've provided is not sufficient to give any
>>> meaningful response.
>>>
>>> At a minimum, if anyone is to help you diagnose your issue, you need
>>> to provide:
>>>
>>> - Describe in more detail what you mean by "configure
>>> openstack ice house with LDAP".
>>> - What is the test you're trying to perform? An exact reproducer would
>>> be very useful.
>>> - What is the exact error message you see? Contextual logs/errors from
>>> Keystone/Nova.
>>> - Exact versions of Keystone, and other relevant packages.
>>> - What OS? Fedora? CentOS? Something else?
>>> - Probably, provide config files for /etc/keystone/keystone.conf and
>>> relevant Neutron config files (preferably uploaded somewhere in
>>> *plain text*).
>>>
>>>
>>> --
>>> /kashyap
>>
>
>
>
>
> --
> Rasanjaya Subasinghe
>
>
>
> --
> Rasanjaya Subasinghe
>
>
>
> --
> Rasanjaya Subasinghe