[Rdo-list] Concerning Rabbits
by John Eckersberg
(In the spirit of "Concerning Hobbits")
Ryan O'Hara and I have been investigating RabbitMQ as it pertains to RDO
recently. There has been a lot of discussion on several disparate
threads, so I wanted to try and capture it on the list for the benefit
of everyone.
Ryan has been working on getting RabbitMQ running in a multi-node HA
configuration. I won't steal his thunder, and he can speak to it better
than I can, so I'll defer to him on the details.
As for me, I've been working on el7 support and bug squashing along the
way.
The first bug[1] causes the daemon to start incredibly slowly, or to
fail outright by timing out. This is due to the SELinux policy
disallowing name_bind on ports lower than 32768. RabbitMQ tries to
name_bind to a port starting at 10000, and increments the port if the
bind fails. So if you have
SELinux in enforcing mode, you'll get 22768 AVC denials in the log
before it finally starts.
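If you want to see this for yourself, something along these lines
should show it (a rough sketch; the Erlang VM may show up as beam
rather than beam.smp depending on the build):

# count the name_bind AVC denials logged while the daemon starts
ausearch -m avc -c beam.smp | grep -c name_bind

# or simply confirm SELinux is the culprit by starting once in permissive mode
setenforce 0
systemctl start rabbitmq-server
setenforce 1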
The second bug[2] causes the daemon to intermittently fail to start due
to a race condition in the creation of the erlang cookie file. This
happens only the first time the service starts. Really this is an
Erlang bug, but there's a workaround for the RabbitMQ case.
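Until the fix lands, one way to dodge the race by hand is to pre-create
the cookie before the very first start, so the racy code path is never
exercised (a sketch only, not the actual patch):

# pre-create the Erlang cookie so the first start never races to create it
COOKIE=/var/lib/rabbitmq/.erlang.cookie
if [ ! -f "$COOKIE" ]; then
    tr -dc 'A-Z' < /dev/urandom | head -c 20 > "$COOKIE"
    chown rabbitmq:rabbitmq "$COOKIE"
    chmod 400 "$COOKIE"
fi
systemctl start rabbitmq-server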
I've submitted patches for both issues. Until those are merged, I've
rebuilt[3] RabbitMQ for F20 with both fixes included.
Beyond bugs, I've also built out RabbitMQ and all the build/runtime
dependencies for el7. I have a yum repo[4] on my fedorapeople page
containing all the bits; this is everything that is presently missing
from EPEL7. In time I hope the package maintainers will build all of
this themselves, but for now it should be good enough for testing. You
will also need the EPEL 7 Beta repository[5] enabled.
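To pull from both, a .repo file along these lines should do
(illustrative only; the repo ids are made up, and gpgcheck is off
because these are scratch builds):

cat > /etc/yum.repos.d/rabbitmq-el7.repo <<'EOF'
[jeckersb-rabbitmq-el7]
name=RabbitMQ and dependencies rebuilt for el7 (jeckersb)
baseurl=http://jeckersb.fedorapeople.org/rabbitmq-el7/
enabled=1
gpgcheck=0

[epel7-beta]
name=EPEL 7 Beta
baseurl=http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/
enabled=1
gpgcheck=0
EOF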
As a side note, I built everything using mock with a local override repo
on my workstation. I've not used copr before but it seems relevant to
this sort of thing, so if it's any benefit I'll look at rebuilding the
el7 stack there for easier consumption.
Hopefully this helps get the discussion into one place and provides a
baseline for further investigation by everyone interested in RabbitMQ.
John.
---
[1] This is really two bugzillas, but the same bug:
[1a] https://bugzilla.redhat.com/show_bug.cgi?id=998682
[1b] https://bugzilla.redhat.com/show_bug.cgi?id=1032595
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1059913
[3] http://jeckersb.fedorapeople.org/rabbitmq-server-3.1.5-3.fc20.noarch.rpm
[4] http://jeckersb.fedorapeople.org/rabbitmq-el7/
[5] http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/
[Rdo-list] foreman-installer puppet problems: " Error 400 on SERVER: Must pass admin_password to Class[Quickstack::Nova]"
by Jonas Hagberg
Hi,
I have installed foreman-installer from the icehouse el6 yum repo on Scientific
Linux 6.5.
I got Foreman up and running, with hostgroups and smart parameters in place.
But when I assign a node to a hostgroup (Neutron controller or Neutron
compute) and run Puppet, I get the following error:
err: Could not retrieve catalog from remote server: Error 400 on SERVER:
Must pass admin_password to Class[Quickstack::Nova] at
/usr/share/openstack-foreman-installer/puppet/modules/quickstack/manifests/nova.pp:65
on "fqdn"
admin_password is set in hostgroup.
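In case it helps anyone looking at this, the parameter check the error
points at can be viewed directly on the Foreman host (path and line
number taken from the error above):

sed -n '60,70p' /usr/share/openstack-foreman-installer/puppet/modules/quickstack/manifests/nova.pp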
Any tips on where the problem could be?
Cheers
--
Jonas Hagberg
BILS - Bioinformatics Infrastructure for Life Sciences - http://bils.se
e-mail: jonas.hagberg(a)bils.se, jonas.hagberg(a)scilifelab.se
phone: +46-(0)70 6683869
address: SciLifeLab, Box 1031, 171 21 Solna, Sweden
[Rdo-list] Fwd: [OpenStack Marketing] OpenStack Paris Summit Hotel Room Block
by Rich Bowen
FYI - If you're planning to go to the OpenStack Paris Summit, the
discount hotel room block is now available.
Also, the call for papers will be available this week, so it's time to
start thinking of what talk(s) you might submit.
--Rich
-------- Original Message --------
Subject: [OpenStack Marketing] OpenStack Paris Summit Hotel Room Block
Date: Mon, 30 Jun 2014 10:03:59 -0400
From: Claire Massey <claire(a)openstack.org>
To: marketing(a)lists.openstack.org
Hi everyone,
The hotel room block at Le Meridien is now open for the OpenStack Summit
in Paris. You can reserve rooms via the direct URL here:
https://www.openstack.org/summit/openstack-paris-summit-2014/hotels/
A second room block will soon be made available at the Hyatt Regency
hotel. We will post the URL for that block as soon as it is available.
Please stay tuned.
The Call for Speakers portal will also be made available this week at
openstack.org/summit <http://openstack.org/summit>. Please encourage
your colleagues to submit a presentation for the Paris Summit.
Thanks!
Claire
[Rdo-list] Keystone w/Apache MySQL problem
by Adam Huffman
I'm in the middle of changing my Icehouse Keystone to use Apache with
SSL. After implementing this change, I'm seeing a strange MySQL error
when I submit a keystone query e.g. 'endpoint-list':
2014-06-29 22:38:41.172 30284 TRACE keystone.common.wsgi
OperationalError: (OperationalError) (1045, "Access denied for user
'keystone'@'localhost' (using password: YES)") None None
The weird thing is that the user defined in
/etc/keystone/keystone.conf is in fact 'keystone_admin', as created
when this cloud was originally set up using RDO. Where is it picking
up that username?
I created a new MySQL user 'keystone' with the same privileges as
'keystone_admin' but that didn't make any difference.
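In case it helps, these are the kinds of checks I've been doing (paths
per a default RDO install; the section holding the connection string
may differ between releases):

# see which connection string Keystone under Apache is actually reading
grep -rn 'connection' /etc/keystone/keystone.conf /etc/httpd/conf.d/

# verify the configured user can really log in to MySQL
mysql -u keystone_admin -p -h localhost keystone -e 'SELECT 1;'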
Adam
[Rdo-list] Adding another public subnet in RDO
by Vimal Kumar
Hi,
I have a dedicated server which has two public IP ranges allotted to it by
the DC. I am trying out OpenStack RDO on this server (all-in-one install),
and I was able to assign one of those ranges (let's say
173.xxx.xxx.144/29) and managed to use up all the available IPs in this
range for a few VMs. This floating IP range is now accessible from outside,
and everything is fine.
[root@mycloud ~(keystone_admin)]# neutron net-list
+--------------------------------------+---------+---------------------------------------------------------+
| id                                   | name    | subnets                                                 |
+--------------------------------------+---------+---------------------------------------------------------+
| 09c8da8e-79d7-49e1-9af8-c2a13a032040 | private | b7eeae38-682a-4397-8b3c-e3dee88527ab 10.0.0.0/24        |
| 31956556-c540-4676-9cd4-e618a4f93fc8 | public  | 14d4b197-1121-4a4b-80b3-b8d80115f734 173.xxx.xxx.144/29 |
+--------------------------------------+---------+---------------------------------------------------------+
[root@yocloud ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+----------------+--------------------+--------------------------------------------------------+
| id                                   | name           | cidr               | allocation_pools                                       |
+--------------------------------------+----------------+--------------------+--------------------------------------------------------+
| b7eeae38-682a-4397-8b3c-e3dee88527ab | private_subnet | 10.0.0.0/24        | {"start": "10.0.0.2", "end": "10.0.0.254"}             |
| 14d4b197-1121-4a4b-80b3-b8d80115f734 | public_subnet  | 173.xxx.xxx.144/29 | {"start": "173.xxx.xxx.147", "end": "173.xxx.xxx.150"} |
+--------------------------------------+----------------+--------------------+--------------------------------------------------------+
I am now looking to use the second public IP range for the next VMs, but I
am not sure how to proceed.
I tried to create a second subnet (public_subnet2) inside the "public"
network for the new IP block, but failed to get it working. Neutron does
not appear to know that it has more free floating IPs available, and throws
'No more IP addresses available on network'.
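For reference, this is roughly the command I used when trying to add the
second subnet (the second range is shown as a placeholder here):

neutron subnet-create --name public_subnet2 \
  --allocation-pool start=NEW.xxx.xxx.2,end=NEW.xxx.xxx.6 \
  --gateway NEW.xxx.xxx.1 --disable-dhcp \
  public NEW.xxx.xxx.0/29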
Can someone point me in the right direction? Is it not possible to add
multiple subnets inside a public network?
Regards,
Vimal
[Rdo-list] Swift generating thousands of rsyncs per second on a small cluster
by Diogo Vieira
Hello,
I have a small cluster consisting of a physical machine running a Proxy Node and a Storage Node, and 4 virtual machines running one Storage Node each. Each Storage Node has only one device, and the cluster has 5 zones and 3 replicas, all configured with packstack.
Between the real machine (the Proxy Node) and the virtual machines I have a firewall that logs all the traffic. I ran out of disk space on the firewall and came to the conclusion that the problem was the traffic generated between the Proxy Node and the virtual machines on port 6000 (which belongs to the object-server, if I'm not mistaken). The problem is that the traffic being generated is on the order of one or two thousand rsyncs per second, which seems a bit excessive. Is this behaviour normal? How much traffic should I expect with this setup, with several (probably thousands of) very small objects and fewer than 10 containers, given that the cluster is not being accessed at all right now? Can someone help me understand where the problem is?
Assuming this is a problem, I tried to lower the concurrency on the object/container/account replicators, and the number of workers in the proxy-server, without any success.
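For reference, these are the sorts of changes I tried (the values are only
what I experimented with; I'm assuming concurrency and run_pause are the
relevant replicator knobs):

openstack-config --set /etc/swift/object-server.conf object-replicator concurrency 1
openstack-config --set /etc/swift/object-server.conf object-replicator run_pause 300
openstack-config --set /etc/swift/proxy-server.conf DEFAULT workers 2
swift-init all restart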
Thank you in advance,
Diogo Vieira <dfv(a)eurotux.com>
Programmer
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300
[Rdo-list] Demo system recommendations
by Rich Bowen
Today I went to a local university and did a "what is OpenStack"
presentation. When I got to the demo part (all-in-one, on a laptop), I
couldn't log in to Horizon, and didn't have time to troubleshoot
on-site. Back home again, everything works perfectly.
So, in retrospect, this was a case of poor planning, but I'd like to
figure out how to keep it from happening again, preferably in a way
that works even when the demo is completely offline.
Is it simply a matter of changing 192.168.0.x to 127.0.0.1 in all of my
OpenStack configuration files, or is there going to be more to it than that?
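One thing I'm considering trying (untested, and the address below is just
an example): pin the address the services are configured for to a dummy
interface, so it is always present even with no network at all:

modprobe dummy
ip link add demo0 type dummy
ip addr add 192.168.0.10/24 dev demo0
ip link set demo0 up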
--Rich
--
Rich Bowen - rbowen(a)redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/