[Rdo-list] Neutron configuration files for a two node Neutron+GRE+OVS
by Kashyap Chamarthy
Heya,
Just in case if it's useful for someone, here are my working Neutron
configuration files (and iptables rules) for a two node set-up based on
IceHouse-M2 on Fedora-20,
- Controller node: Nova, Keystone (token-based auth), Cinder,
Glance, Neutron (using Open vSwitch plugin and GRE tunneling).
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)
Controller node Neutron configurations
======================================
1. neutron.conf
---------------
$ cat /etc/neutron/neutron.conf | grep -v ^$ | grep -v ^#
[DEFAULT]
core_plugin
=neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
rpc_backend = neutron.openstack.common.rpc.impl_qpid
control_exchange = neutron
qpid_hostname = 192.169.142.49
auth_strategy = keystone
allow_overlapping_ips = True
dhcp_lease_duration = 120
allow_bulk = True
qpid_port = 5672
qpid_heartbeat = 60
qpid_protocol = tcp
qpid_tcp_nodelay = True
qpid_reconnect_limit=0
qpid_reconnect_interval_max=0
qpid_reconnect_timeout=0
qpid_reconnect=True
qpid_reconnect_interval_min=0
qpid_reconnect_interval=0
debug = False
verbose = False
[quotas]
[agent]
[keystone_authtoken]
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
auth_host = 192.169.142.49
auth_port = 35357
auth_protocol = http
auth_uri=http://192.169.142.49:5000/
[database]
[service_providers]
[AGENT]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
2. (OVS) plugin.ini
-------------------
$ cat /etc/neutron/plugin.ini | grep -v ^$ | grep -v ^#
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.169.142.49
[agent]
[securitygroup]
[DATABASE]
sql_connection = mysql://neutron:fedora@node1-controller/ovs_neutron
sql_max_retries=10
reconnect_interval=2
sql_idle_timeout=3600
[SECURITYGROUP]
firewall_driver =
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
3. dhcp_agent.ini
-----------------
$ cat /etc/neutron/dhcp_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = TRUE
external_network_bridge = br-ex
use_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
4. l3_agent.ini
---------------
$ cat /etc/neutron/dhcp_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = TRUE
external_network_bridge = br-ex
use_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
5. dnsmasq.conf
---------------
This logs dnsmasq output is to a file, instead of journalctl):
$ cat /etc/neutron/dnsmasq.conf | grep -v ^$ | grep -v ^#
log-facility = /var/log/neutron/dnsmasq.log
log-dhcp
6. api-paste.ini
----------------
$ cat /etc/neutron/api-paste.ini | grep -v ^$ | grep -v ^#
[composite:neutron]
use = egg:Paste#urlmap
/: neutronversions
/v2.0: neutronapi_v2_0
[composite:neutronapi_v2_0]
use = call:neutron.auth:pipeline_factory
noauth = extensions neutronapiapp_v2_0
keystone = authtoken keystonecontext extensions neutronapiapp_v2_0
[filter:keystonecontext]
paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory
[filter:authtoken]
paste.filter_factory =
keystoneclient.middleware.auth_token:filter_factory
admin_user=neutron
auth_port=35357
admin_password=fedora
auth_protocol=http
auth_uri=http://192.169.142.49:5000/
admin_tenant_name=services
auth_host = 192.169.142.49
[filter:extensions]
paste.filter_factory =
neutron.api.extensions:plugin_aware_extension_middleware_factory
[app:neutronversions]
paste.app_factory = neutron.api.versions:Versions.factory
[app:neutronapiapp_v2_0]
paste.app_factory = neutron.api.v2.router:APIRouter.factory
7. metadata_agent.ini
---------------------
$ cat /etc/neutron/metadata_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
auth_url = http://192.169.142.49:35357/v2.0/
auth_region = regionOne
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
nova_metadata_ip = 192.168.142.49
nova_metadata_port = 8775
metadata_proxy_shared_secret = fedora
Compute node Neutron configurations
===================================
1. neutron.conf
---------------
$ cat /etc/neutron/neutron.conf | grep -v ^$ | grep -v ^#
[DEFAULT]
core_plugin
=neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.169.142.49
auth_strategy = keystone
allow_overlapping_ips = True
qpid_port = 5672
debug = True
verbose = True
[quotas]
[agent]
[keystone_authtoken]
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
auth_host = 192.169.142.49
[database]
[service_providers]
[AGENT]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
2. (OVS) plugin.ini
-------------------
$ cat plugin.ini | grep -v ^$ | grep -v ^#
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.169.142.57
[DATABASE]
sql_connection = mysql://neutron:fedora@node1-controller/ovs_neutron
[SECURITYGROUP]
firewall_driver =
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
[securitygroup]
3. metadata_agent.ini
---------------------
$ cat metadata_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
auth_url = http://localhost:5000/v2.0
auth_region = RegionOne
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
iptables rules on both Controller and Compute nodes
===================================================
iptables on Controller node
---------------------------
$ cat /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m multiport --dports 3260 -m comment --comment "001
cinder incoming" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 80 -m comment --comment "001
horizon incoming" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 9292 -m comment --comment "001
glance incoming" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5000,35357 -m comment
--comment "001 keystone incoming" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 3306 -m comment --comment "001
mariadb incoming" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "001
novncproxy incoming" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 8770:8780 -m comment --comment
"001 novaapi incoming" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 9696 -m comment --comment "001
neutron incoming" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5672 -m comment --comment "001
qpid incoming" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001
metadata incoming" -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -p gre -j ACCEPT
-A OUTPUT -p gre -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
iptables on Compute node
------------------------
$ cat /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -p gre -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -p gre -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
[1] Also here --
http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-t...
--
/kashyap
10 years, 9 months
[Rdo-list] Glance does not save properties of images
by Daniel Speichert
Hello,
On the stable release of Havana installed through RDO, when I create images, their properties (disk_format, container_format, is_public) are lost even when set (they must be set, otherwise glance-api wouldn't accept the request).
However, they are not saved to the database. When I modify database manually, they are visible correctly, but it is still not possible to modify them through Glance.
Do you have any idea if that might be some configuration error or maybe a bug in RDO packaging?
Any help will be very much appreciated.
Best Regards,
Daniel Speichert
10 years, 9 months
[Rdo-list] Neutron VPNaaS
by Daniel Speichert
Hello,
Did anyone get VPNaaS to work using RDO packages? Is there any good tutorial for that like the one on LBaaS?
I couldn't find any on RDO website and my attempts to create VPN always ended with PENDING_CREATE.
I'd appreciate if someone could share some tested recipe for getting VPNaaS to work on RDO
Thanks,
Daniel Speichert
10 years, 10 months
[Rdo-list] Fedora 20 / Devstack Networking Issues
by Perry Myers
Ok, I've been chasing down some networking issues along with some other
folks. Here's what I'm seeing:
Starting with a vanilla F20 cloud image running on a F20 host, clone
devstack into it and run stack.sh.
First thing is that the RabbitMQ server issue I noted a few weeks ago is
still intermittently there. So during the step where rabbitmqctl is run
to set the password of the rabbit admin user, it might fail and all
subsequent AMQP communication fails which makes a lot of the nova
commands in devstack also fail.
But... if you get past this error (since it is intermittent), then
devstack seems to complete successfully. Standard commands like nova
list, keystone user-list, etc all work fine.
I did note though that access to Horizon does not work. I need to
investigate this further.
But worse than that is when you run nova boot, the host to guest
networking (remember this is devstack running in a VM) immediately gets
disconnected. This issue is 100% reproducible and multiple users are
reporting it (tsedovic, eharney, bnemec cc'd)
I did some investigation when this happens and here's what I found...
If I do:
$ brctl delif br100 eth0
I was immediately able to ping the guest from the host and vice versa.
If I reattach eth0 back to br100, networking stops again
Another thing... I notice that on the system br100 does not have an ip
address, but eth0 does. I thought when doing bridged networking like
this, the bridge should have the ip address and the physical iface that
is attached to the bridge does not get an ip addr.
So... I tweaked /etc/sysconfig/network-scripts/ifcfg-eth0 to remove the
dhcp from the bootproto line and I copied ifcfg-eth0 to ifcfg-br100
allowing it to use bootproto dhcp
I brought both ifaces down and then brought them both up. eth0 first
and br100 second
This time, br100 got the dhcp address from the host and networking
worked fine.
So is this just an issue with how nova is setting up bridges?
Since this network disconnect didn't happen until nova launched a vm, I
imagine this isn't a problem with devstack itself, but is likely an
issue with Nova Networking somehow.
Russell/DanS, is there any chance that all of the refactoring you did in
Nova Networking very recently introduce a regression?
Perry
10 years, 10 months
[Rdo-list] OpenStack Days Tokyo 2014 - Meetup?
by Sandro Mathys
It seems there's no official Red Hat / RDO presence at OpenStack Days
Tokyo 2014, but I was nevertheless wondering whether other people will
be there? If so, maybe we could have a small meetup sometime (beer or
dinner after the first day?). Not sure there's other RDO people in
Japan, though. ;)
Here's the details for the event for those who haven't heard about it.
You'll notice all the other big names (that have business in Japan)
are listed in some way or another:
http://openstackdays.com/en/
Cheers,
Sandro
10 years, 10 months