[Rdo-list] Can't ping/ssh to new instance
by Eric Berg
I've done a fresh install of RDO using packstack on a single host like this:
packstack --allinone --provision-all-in-one-ovs-bridge=n
And then followed the instructions here:
http://openstack.redhat.com/Neutron_with_existing_external_network
I've also generally followed Lars's approach from this video with the
same lack of connectivity: https://www.youtube.com/watch?v=DGf-ny25OAw
My public network is 192.168.20.0/24.
But I'm not able to ping or ssh to it from my 192.168.0.0 network; the host
running OpenStack is at 192.168.0.37.
My instance is up and running with a 10.0.0.2 IP and 192.168.20.4
floating IP.
I can ping 192.168.20.3, but not 192.168.20.4.
I can use the net namespace approach to log into my cirros instance, but
can't get to 192.168.20.0/24 hosts.
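One common cause of exactly these symptoms is the default security group, which blocks inbound ICMP and SSH to instances. A troubleshooting sketch, assuming the stock packstack admin credentials and the stock `default` group (the router UUID below is a placeholder):

```shell
source /root/keystonerc_admin

# Allow inbound ICMP (ping) and SSH in the default security group
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

# Confirm the floating IP is actually bound inside the router namespace
ip netns list
ip netns exec qrouter-<router-uuid> ip addr show
```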
This is my first OpenStack install. I'm a little confused about how a
stock installation (based on packstack) could somehow not include the
ability to access the VMs from the network on which the OpenStack compute
host is running.
Any help troubleshooting this would be greatly appreciated.
Eric
--
Eric Berg
Sr. Software Engineer
Rubenstein Technology Group
eberg(a)rubensteintech.com
www.rubensteintech.com
10 years, 6 months
Re: [Rdo-list] [Openstack] rdo havana to icehouse: instances stuck in 'resized or migrated'
by Nikola Đipanov
On 05/20/2014 07:06 PM, Julie Pichon wrote:
> On 20/05/14 17:08, Dimitri Maziuk wrote:
>> On 05/20/2014 03:59 AM, Julie Pichon wrote:
>>> On 19/05/14 18:14, Dimitri Maziuk wrote:
>>>> On 05/19/2014 11:15 AM, Julie Pichon wrote:
>>>>>
>>>>> I had a chat with a Nova developer who pointed me to the following patch
>>>>> at https://review.openstack.org/#/c/84755/ , recently merged in Havana
>>>>> and included in the latest RDO Havana packages. Resize specifically is
>>>>> one of the actions affected by this bug, you might want to check that
>>>>> you're running the latest packages on your Havana node(s) and see if
>>>>> this might help to resolve the problem?
>>>>
>>>> [root@irukandji ~]# yum up
>>>> ...
>>>> No Packages marked for Update
>>>>
>>>> So -- no, that's not it, unless the patch hasn't made it into the rpms yet.
>>>
>>> Could you provide the version numbers for the openstack-nova packages?
>>
>> Icehouse node:
>>
>> [root@squid ~]# rpm -q -a | grep openstack
>> openstack-selinux-0.1.3-2.el6ost.noarch
>> openstack-nova-api-2014.1-2.el6.noarch
>> openstack-utils-2014.1-1.el6.noarch
>> openstack-nova-compute-2014.1-2.el6.noarch
>> openstack-nova-network-2014.1-2.el6.noarch
>> openstack-nova-common-2014.1-2.el6.noarch
>>
>> Havana node:
>>
>> [root@irukandji ~]# rpm -q -a | grep openstack
>> openstack-nova-common-2013.2.3-1.el6.noarch
>> openstack-nova-network-2013.2.3-1.el6.noarch
>> openstack-nova-api-2013.2.3-1.el6.noarch
>> openstack-selinux-0.1.3-2.el6ost.noarch
>> openstack-nova-compute-2013.2.3-1.el6.noarch
>> openstack-utils-2013.2-2.el6.noarch
>>
>> Both nodes are
>>
>> [root@squid ~]# cat /etc/redhat-release
>> CentOS release 6.5 (Final)
>>
>
> Thanks for the information. I'm adding my colleague Nikola to this
> thread, he's familiar with Nova and should be better able to help.
>
> Julie
>
Hi Dimitri,
So for this kind of upgrade to work, you will need to set an RPC version
cap so that all your Icehouse computes know they may be talking to
lower-version nodes and downgrade their messages accordingly.
This can be done (as described in much more detail in [1]) using the
[upgrade_levels] section of your nova.conf file.
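The cap would look something like the following on the Icehouse nodes (a minimal sketch; check [1] for the exact value recommended for your deployment):

```ini
# /etc/nova/nova.conf on the Icehouse nodes
[upgrade_levels]
compute = havana
```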
Please let me know if this works and if you need further assistance.
Thanks,
Nikola Đipanov, SSE - OpenStack @ Red Hat
[1] http://openstack.redhat.com/Upgrading_RDO_To_Icehouse
Re: [Rdo-list] [Openstack-operators] Installing OpenStack Icehouse by RDO. Cinder database is empty.
by Pádraig Brady
On 05/19/2014 09:27 AM, 苌智 wrote:
> I installed OpenStack Icehouse following http://openstack.redhat.com/Quickstart. There is no error message when I run "packstack --allinone", but the cinder database is empty:
> mysql> use cinder
> Database changed
> mysql> show tables;
> Empty set (0.00 sec)
> Could someone give me some advice? Thanks a lot.
Weird that no errors were reported.
Is there anything of note in, or could you attach:
/var/tmp/packstack/*/manifests/*cinder.pp.log
Let's analyse it to see if we can adjust anything, allowing us
to rerun packstack --answer-file=... to sync up the cinder DB.
Hopefully we won't need this, but just for completeness: to init
the DB outside of packstack on RDO you can do:
openstack-db --service cinder --init
Passwords for that can be seen with:
grep 'PW' /root/keystonerc_admin
I've CC'd the RDO specific mailing list.
thanks,
Pádraig.
[Rdo-list] Creating nova instance without backing image
by Daniel Speichert
Hi,
In our OpenStack installation (Havana, RDO), we use Ceph as the storage
backend for Nova, Glance and Cinder.
On compute nodes, instances' disks are kept on Ceph, but base images are
copied from Glance (Ceph) to local disk. This works well with small
OS images.
However, if we upload a big image (e.g. when migrating a bare-metal system
to the cloud) that is only used by one instance, the image becomes the
backing file stored locally on the compute node. Here the compute node
hits its disk space limit, because we expect all the data to reside on Ceph.
Is there any way to tell Nova not to use a backing image for an instance?
Maybe there exists a special image property to set that tells Nova to
copy the whole image, which would put it entirely in Ceph without a
backing file?
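For what it's worth, I'm aware that nova has a use_cow_images flag that disables qcow2 copy-on-write entirely, so each instance gets a full copy of the image instead of a thin file over a local backing image. A sketch of what I have in mind (untested against our Ceph setup, so it may not be the right lever):

```ini
# /etc/nova/nova.conf on the compute nodes
[DEFAULT]
use_cow_images = False
```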
I hope this use case makes sense, and I'd appreciate any suggestions on
how to resolve this issue.
Best,
Daniel
[Rdo-list] neutron-openvswitch-agent resetting MAC address on my bridges
by Lars Kellogg-Stedman
Someone reported some problems running through:
http://openstack.redhat.com/Neutron_with_existing_external_network
I thought I would walk through it myself first to make sure that (a)
it was correct and (b) I remembered all the steps, but I've run
into a puzzling problem:
My target system is itself an OpenStack instance, which means that
once br-ex is configured it really needs to have the MAC address that
was previously exposed by eth0, because otherwise traffic will be
blocked by the MAC filtering rules attached to the instance's tap
device:
-A neutron-openvswi-s55439d7d-a -s 10.0.0.8/32 -m mac
--mac-source FA:16:3E:EF:91:EC -j RETURN
-A neutron-openvswi-s55439d7d-a -j DROP
I have set MACADDR on br-ex, which works just fine until I restart
neutron-openvswitch-agent (or, you know, reboot the instance), at
which point the MAC address on br-ex changes and everything stops
working.
I've been poking through the code for a bit and I can't find either
the source or an explanation for this behavior. It would be great if a
wiser set of eyes could shed some light on this.
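One workaround I'm considering (untested, and the MAC below is just the example from the iptables rules above) is pinning the MAC in OVS's own database, so it is reapplied whenever the bridge is reconfigured:

```shell
# Pin the bridge MAC in OVSDB so it survives agent restarts
ovs-vsctl set bridge br-ex other-config:hwaddr=fa:16:3e:ef:91:ec
```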
Cheers,
--
Lars Kellogg-Stedman <lars(a)redhat.com> | larsks @ irc
Cloud Engineering / OpenStack | " " @ twitter
[Rdo-list] LDAP configuration
by Devine, Patrick D.
All,
I have deployed the Havana version of OpenStack via Foreman. However, I now want to switch Keystone to use my LDAP server for authentication instead of MySQL. I have followed the instructions for configuring keystone.conf to point at my server, but I haven't seen any documentation on how the LDAP tree should be populated. For example, do I have to re-create all the user accounts for each OpenStack module? I understand that I need people, role, and project entries set up, but there is nothing about which users are needed or how they relate to the projects and roles.
Has anyone got their OpenStack working with LDAP, and if so, what does your LDAP look like?
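To make the question concrete, this is the sort of keystone.conf I'm working from; every hostname, DN, and attribute below is a placeholder from my environment, not a required value:

```ini
# /etc/keystone/keystone.conf (Havana) -- placeholder values
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com
suffix = dc=example,dc=com
user_tree_dn = ou=People,dc=example,dc=com
user_objectclass = inetOrgPerson
tenant_tree_dn = ou=Projects,dc=example,dc=com
role_tree_dn = ou=Roles,dc=example,dc=com
```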
Thanks
--
Patrick Devine | Leidos
Software Integration Engineer | Command and Intelligence Support Operation
mobile: 443-562-0668 | office: 443-574-4266 | email: Patrick.D.Devine(a)Leidos.com
Please consider the environment before printing this email.
[Rdo-list] Attempt to setup Two Node IceHouse Controller+Compute Neutron OVS&VLAN on CentOS 6.5
by Boris Derzhavets
Two KVM guests were created, each with two virtual NICs (eth0, eth1).
Answer file here
http://textuploader.com/9a32
Openstack-status here
http://textuploader.com/9a3e
Looks not bad, except for a dead neutron-server.
Neutron.conf here
http://textuploader.com/9aiy
Stack trace in /var/log/neutron.log:
2014-05-17 18:32:12.138 9365 INFO neutron.openstack.common.rpc.common [-] Connected to AMQP server on 192.168.122.127:5672
2014-05-17 18:32:12.138 9365 ERROR neutron.openstack.common.rpc.common [-] Returning exception 'NoneType' object is not callable to caller
2014-05-17 18:32:12.138 9365 ERROR neutron.openstack.common.rpc.common [-]
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/amqp.py", line 462, in _process_data
    **args)
  File "/usr/lib/python2.6/site-packages/neutron/common/rpc.py", line 45, in dispatch
    neutron_ctxt, version, method, namespace, **kwargs)
  File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/dispatcher.py", line 172, in dispatch
    result = getattr(proxyobj, method)(ctxt, **kwargs)
  File "/usr/lib/python2.6/site-packages/neutron/db/dhcp_rpc_base.py", line 92, in get_active_networks_info
    networks = self._get_active_networks(context, **kwargs)
  File "/usr/lib/python2.6/site-packages/neutron/db/dhcp_rpc_base.py", line 38, in _get_active_networks
    plugin = manager.NeutronManager.get_plugin()
  File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 211, in get_plugin
    return cls.get_instance().plugin
  File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 206, in get_instance
    cls._create_instance()
  File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 249, in inner
    return f(*args, **kwargs)
  File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 212, in lock
    yield sem
  File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 249, in inner
    return f(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 200, in _create_instance
    cls._instance = cls()
  File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 110, in __init__
    LOG.info(_("Loading core plugin: %s"), plugin_provider)
TypeError: 'NoneType' object is not callable
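The TypeError fires while neutron-server is creating the core plugin, so one thing worth double-checking (an assumption, not a confirmed diagnosis) is that core_plugin is actually set in neutron.conf, e.g. for ML2:

```ini
# /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
```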
[Rdo-list] enable dhcp on public network
by Victor Barba
Hi,
This is my first post, so forgive me if this is off-topic for this list
and ignore it :)
I need to assign public IPs directly to my instances (not using floating
IPs). The out-of-the-box packstack installation does not enable DHCP on the
public network, so IPs are not assigned to the instances. How could I
solve this?
To be clear I need this:
eth0 (192.168.66.1)
 |
br0 (192.168.55.1) ----- VM (192.168.55.2)
                   \---- VM (192.168.55.3)
VMs get their IPs by DHCP, and the gateway is 192.168.55.1.
eth0 and br0 have ip_forwarding enabled.
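If the public subnet already exists with DHCP disabled, I assume something like the following would flip it on (a sketch; `public_subnet` is a placeholder, and the exact flag spelling varies between neutron client versions):

```shell
neutron subnet-update public_subnet --enable_dhcp=True
```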
Thank you in advance.
Regards,
Victor
[Rdo-list] Failure to run yum update on F20 due to openstack-neutron-2013.2.3-2 dependency problem
by Boris Derzhavets
# yum update
---> Package python-neutron.noarch 0:2013.2.3-2.fc20 will be an update
--> Processing Dependency: python-neutronclient >= 2.3.4 for package: python-neutron-2013.2.3-2.fc20.noarch
--> Finished Dependency Resolution
Error: Package: python-neutron-2013.2.3-2.fc20.noarch (updates)
Requires: python-neutronclient >= 2.3.4
Installed: python-neutronclient-2.3.1-3.fc20.noarch (@updates)
python-neutronclient = 2.3.1-3.fc20
Available: python-neutronclient-2.3.1-2.fc20.noarch (fedora)
python-neutronclient = 2.3.1-2.fc20
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
# yum update --skip-broken
Packages skipped because of dependency problems:
openstack-neutron-2013.2.3-2.fc20.noarch from updates
openstack-neutron-openvswitch-2013.2.3-2.fc20.noarch from updates
python-eventlet-0.14.0-1.fc20.noarch from updates
python-greenlet-0.4.2-1.fc20.x86_64 from updates
python-neutron-2013.2.3-2.fc20.noarch from updates
Thanks.
B.
[Rdo-list] Automatic resizing of root partitions in RDO Icehouse
by Elías David
Hello all,
I would like to know the current state of auto-resizing the root
partition in RDO Icehouse, more specifically for CentOS and Fedora
images.
I've read many versions of the story so I'm not really sure what works and
what doesn't.
For instance, I've read that auto-resizing a CentOS 6.5 image
would require the filesystem to be ext3, and I've also read that auto-
resizing currently works only with kernels >= 3.8, so what's really the
deal with this currently?
Also, is it as simple as having cloud-init, dracut-modules-growroot and
cloud-initramfs-tools installed on the image, or are there any other steps
required for the auto-resizing to work?
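For reference, the recipe I've been assuming is the packages above plus an initramfs rebuild inside the image (a sketch, assuming EPEL is enabled for dracut-modules-growroot; please correct me if there's more to it):

```shell
# inside the image (chroot or build VM)
yum install -y cloud-init dracut-modules-growroot cloud-initramfs-tools
dracut -f   # rebuild the initramfs so growroot runs at boot
```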
Thanks in advance!
--
Elías David.