[Rdo-list] Make obvious the forum moved to ask.openstack.org
by Benjamin Lipp
Hi,
the move of the forum was decided almost a year ago, see [1]. I
think it's time to update all the wiki pages accordingly so people don't
waste time trying to figure out how to post in this forum.
I just adapted
* https://openstack.redhat.com/Get_involved and
* https://openstack.redhat.com/Frequently_Asked_Questions
Here is what's left; please take care of it:
* https://openstack.redhat.com/Frequently_Asked_Questions :
“Users of OpenStack on Fedora are welcome to participate in the Red Hat
OpenStack community forums on openstack.redhat.com […]”
What should we do with this? Of course they can join ask.openstack, but it's
not RDO's place to decide that, because ask.openstack is for everyone. Thus it
would sound strange to say they are welcome on ask.openstack.
* https://openstack.redhat.com/Main_Page :
The main page is not editable, which is good, so please adapt the
section “Introducing RDO”. I propose replacing the current link to the
old forum with [[Get involved#ask.openstack|forums on ask.openstack]].
* Maybe it would be good to include a hint at the top of every page
of the old forum (excluding the pages belonging to the blog, of course), like:
“The forum has moved to ask.openstack; see this post [1] and this
wiki page [2] for more information”.
Kind regards,
Benjamin
[1]
https://openstack.redhat.com/forum/discussion/935/rdo-forum-moving-to-ask...
[2] https://openstack.redhat.com/Get_involved#ask.openstack
10 years, 2 months
[Rdo-list] Issues with sysctl.conf settings on CentOS 6?
by Steve Gordon
Hi all,
Running `packstack --allinone` on a freshly installed and updated CentOS 6.5 system, I encountered this error with sysctl.conf:
"""
Applying 192.168.122.152_neutron.pp
192.168.122.152_neutron.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.122.152_neutron.pp
Error: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0]
You will find full trace in log /var/tmp/packstack/20140914-225857-OebbrQ/manifests/192.168.122.152_neutron.pp.log
Please check log file /var/tmp/packstack/20140914-225857-OebbrQ/openstack-setup.log for more information
"""
Running `sysctl -p /etc/sysctl.conf` myself I receive:
"""
# sysctl -p /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
# echo $?
255
"""
Removing the errant lines and re-running PackStack doesn't help; it just adds them back and fails for the same reason. I couldn't find another RDO bug covering this issue; has anyone else run into it?
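A guess at the cause (not verified): the net.bridge.* keys only exist once the bridge kernel module is loaded, so `modprobe bridge` before applying may be the real fix. As a stopgap, a small sketch (paths are examples) that applies everything except those keys:

```shell
# Stopgap sketch (paths are examples): the net.bridge.* keys are only
# present once the bridge kernel module is loaded, so filter them out
# and apply the remaining settings.
filter_bridge_keys() {
  # $1 = input sysctl.conf, $2 = filtered output file
  grep -v '^net\.bridge\.' "$1" > "$2"
}
```

Something like `filter_bridge_keys /etc/sysctl.conf /tmp/sysctl-nobridge.conf && sysctl -p /tmp/sysctl-nobridge.conf` would then apply cleanly, though loading the module first is presumably what PackStack actually intends.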
Logs are attached to https://bugzilla.redhat.com/show_bug.cgi?id=1141608
Thanks,
Steve
[Rdo-list] mysqld failure on --allinone, centos7
by Rich Bowen
I'm running `packstack --allinone` on a fresh install of the new
CentOS7, and I'm getting a failure at:
192.168.0.176_mysql.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.0.176_mysql.pp
Error: Could not enable mysqld:
You will find full trace in log
/var/tmp/packstack/20140708-092703-ZMkytw/manifests/192.168.0.176_mysql.pp.log
Please check log file
/var/tmp/packstack/20140708-092703-ZMkytw/openstack-setup.log for more
information
The log message is:
Notice:
/Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron@127.0.0.1/neutron]:
Dependency Service[mysqld] has failures: true
Warning:
/Stage[main]/Neutron::Db::Mysql/Mysql::Db[neutron]/Database_grant[neutron@127.0.0.1/neutron]:
Skipping because of failed dependencies
mysqld was successfully installed, and is running.
Before I start digging deeper, I wondered if this is something that's
already been encountered.
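One hypothesis (unit names assumed, not verified): on CentOS 7, MariaDB replaced MySQL, so the systemd unit is mariadb.service, and enabling "mysqld" can fail even though the server is installed and running. A small sketch of checking which unit file actually exists:

```shell
# Sketch (unit names assumed): on CentOS 7 the server is MariaDB and the
# systemd unit is mariadb.service, so 'enable mysqld' can fail even though
# the server is installed and running. Pick whichever unit file exists.
pick_db_unit() {
  # $1 = directory holding unit files (e.g. /usr/lib/systemd/system)
  for unit in mariadb.service mysqld.service; do
    [ -e "$1/$unit" ] && { echo "$unit"; return 0; }
  done
  return 1
}
```

If that's the problem, something like `systemctl enable "$(pick_db_unit /usr/lib/systemd/system)"` works where a hard-coded mysqld does not.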
Thanks.
--
Rich Bowen - rbowen(a)redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/
[Rdo-list] read only volumes when glusterfs fails
by Elías David
Hello,
I'm seeing consistent behaviour in my OpenStack deployment
(libvirt/KVM) with Cinder using GlusterFS, and I'm having trouble finding
the real cause, or even determining whether it's abnormal at all.
I have configured Cinder to use GlusterFS as the storage backend; the volume is
a replica 2 of 8 disks across 2 servers, and I have several Cinder-provided
volumes attached to several instances. The problem is this: it's not uncommon
for one of the gluster servers to reboot suddenly due to power failures (an
unavoidable infrastructure problem right now). When this happens, the
instances start to see the attached volume as read-only, which forces me to
hard reboot each instance so it can access the volume normally again.
Here are my doubts: the gluster volume is created in such a way that no
replica sits on the same server as its primary copy, so if I lose a server
to hardware failure, the other is still usable. I don't really understand,
then, why the instances can't just use the replica brick when one of
the servers reboots.
Also, why is the data still there and readable, but not writable, during a
GlusterFS failure? Is this a problem with my implementation? A
configuration error on my part? Something known to OpenStack? A Cinder
thing? libvirt? GlusterFS?
Having to hard reboot the instances is not a big issue right now, but I
nevertheless want to understand what's happening and whether I can avoid
this issue.
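My best guess so far (option names and mount point below are assumptions, not verified): during gluster's default 42-second network.ping-timeout the client blocks, the guest kernel sees I/O errors on the attached volume, and the guest filesystem's errors=remount-ro policy flips it read-only, which would explain readable-but-not-writable data. A small helper to confirm that from inside a guest:

```shell
# Sketch (mount point is a placeholder): detect from /proc/mounts-style
# input whether a mount point has gone read-only, which is what the
# guest's errors=remount-ro policy does after I/O errors during the
# gluster ping-timeout window.
is_readonly() {
  # $1 = mount point; reads a mount table (e.g. /proc/mounts) from stdin
  awk -v mp="$1" '$2 == mp { split($4, o, ","); for (i in o) if (o[i] == "ro") found=1 }
                  END { exit found ? 0 : 1 }'
}
```

If that's confirmed, `is_readonly /mnt/data < /proc/mounts && mount -o remount,rw /mnt/data` inside the guest (once gluster is healthy again) might avoid the hard reboot, and lowering network.ping-timeout on the volume could shrink the window.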
Some specifics:
* GlusterFS version is 3.5
* All systems are CentOS 6.5
* OpenStack version is Icehouse, installed with packstack/RDO
Thanks in advance!
--
Elías David.
Re: [Rdo-list] icehouse-devel branch of redhat-openstack/tempest
by Kaul, Yaniv
And it seems to be a bit broken on my platform (6.5, IceHouse):
tools/config_tempest.py --create identity.uri http://10.103.234.141:5000/v2.0/ identity.admin_username admin identity.admin_password secret identity.admin_tenant_name admin
/usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
_warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
Traceback (most recent call last):
File "tools/config_tempest.py", line 31, in <module>
from tempest.common import api_discovery
ImportError: No module named tempest.common
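If it's the usual cause (a guess on my part): config_tempest.py imports the in-tree tempest package, so it has to be run from the checkout root with the checkout on PYTHONPATH. A throwaway demonstration of the mechanism (the package layout here is a stand-in, not the real tempest tree):

```shell
# Throwaway demo (package layout is a stand-in): 'import tempest.common'
# only resolves when the directory containing tempest/ is on PYTHONPATH,
# which is why running tools/config_tempest.py outside the checkout
# root fails with "No module named tempest.common".
tmp=$(mktemp -d)
mkdir -p "$tmp/tempest/common"
touch "$tmp/tempest/__init__.py" "$tmp/tempest/common/__init__.py"
PYTHONPATH="$tmp" python3 -c 'import tempest.common; print("import ok")'
```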
From: Kaul, Yaniv
Sent: Thursday, September 11, 2014 9:19 AM
To: rdo-list(a)redhat.com
Subject: icehouse-devel branch of redhat-openstack/tempest
In https://github.com/redhat-openstack/tempest , it seems the only activity is on that branch. Is that still IceHouse-compatible? Can anyone enlighten me on what the changes are?
TIA,
Y.
[Rdo-list] RDO Juno test days, September 25-26
by Rich Bowen
tl;dr: Mark your calendar: September 25-26, RDO Juno M3 test day.
As you're no doubt aware, OpenStack Juno Milestone 3 was released a week ago
today [1], and Juno is now in FeatureFreeze [2].
We're in the process of packaging and testing this stuff for RDO, and,
as part of that, we'll be conducting test days, September 25-26, to
exercise these packages. We would greatly appreciate your help in this
testing process, as the more different environments these packages are
subjected to, the greater the chances of ferreting out the places where
it's going to break.
WHERE: #rdo IRC channel on Freenode
WHEN: All day, September 25-26, so that we can cover everyone's time zones
WHAT: Over the coming days, we'll be documenting a number of test cases
that can get people started, as well as details of how to report
problems when you encounter them. We'll also document workarounds there,
as we go along, so that you don't have to waste time on problems that
have already been solved. That will be posted to this list real soon.
[1] https://wiki.openstack.org/wiki/Juno_Release_Schedule
[2] https://wiki.openstack.org/wiki/FeatureFreeze
--
Rich Bowen - rbowen(a)redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/
[Rdo-list] Openstack Horizon el6 status
by Jose Castro Leon
Hi,
We are following the Horizon releases from the RDO repository and from the GitHub repository as well.
We have just realised that the package for icehouse-2 has not been released, and that the branch that was used to track the Red Hat patches for el6 has been removed as well.
Could you please tell me the timeline for this package? Is there any other repository with the el6 patches?
Kind regards,
Jose Castro Leon
CERN IT-OIS tel: +41.22.76.74272
mob: +41.76.48.79222
fax: +41.22.76.67955
Office: 31-R-021 CH-1211 Geneve 23
email: jose.castro.leon(a)cern.ch<mailto:jose.castro.leon@cern.ch>
[Rdo-list] RDO - Icehouse boot from image or snapshot and create a new volume fails
by David S.
Dear List,
I'm just confused about why launching an instance with the option to boot
from an image and create a new volume always fails during the installation.
The new volume that was created is detected as an "iso image" and not as a
disk. I know that I can create a new volume and then attach it to an
instance, but I think this could be a problem, because it requires
additional steps to make things work.
Usually I do it like this:
1. launch an instance, booting from an image and creating a volume
2. create a new volume, attach it to the new instance, and install the
operating system to that attached volume
3. after the installation completes, terminate the instance without
deleting the volume
4. launch a new instance and boot from the volume
My OpenStack Icehouse running on CentOS 6.5 x86_64 single machine.
Why am I doing the steps above? I think I have two problems here:
1. The boot option doesn't change to disk after the operating system
installation completes.
2. The disk created when launching an instance with the "boot from image and
create volume" option is detected as an ISO or the image itself, so we need
to attach an additional volume (disk) to the instance.
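For reference, a sketch of the single-step flow I'm after from the CLI (the flavor name, image UUID, and size below are placeholders), in case it behaves differently from the dashboard:

```shell
# Sketch (flavor, IMAGE_UUID, and size are placeholders): boot from an
# image while creating a new persistent volume as the root disk, so no
# separate create/attach/terminate dance is needed.
nova boot my-instance \
  --flavor m1.small \
  --block-device source=image,id=IMAGE_UUID,dest=volume,size=10,shutdown=preserve,bootindex=0
```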
If anything is wrong with my setup, please let me know.
Thanks for your help
Best regards,
David S.
------------------------------------------------
p. 087881216110
e. david(a)zeromail.us
w. http://blog.pnyet.web.id