                                        [Rdo-list] Neutron with existing network issues.
                                        by NS Murthy
                                
Hi,

I am new to RDO OpenStack. I have installed it on a single bare-metal server which has 2 NICs connected to different subnets on my company network.

1) Primary NIC subnet: 10.58.100.24 - this is where I installed OpenStack, and the dashboard listens for logins on this IP.
2) Tenant NIC subnet: 10.68.200.5 - this is a secondary IP which I kept for the tenant network so that I can log on.
3) One br-ex interface, IP 10.58.100.24.

I configured the br-ex interface address (which is my primary 1GbE NIC IP) for the tenant network; however, I cannot ping the tenant instance IP 10.68.200.25 from my management node, but I can ping the floating IP (10.58.100.25) and SSH to the floating IP.

Can someone provide a tip on how to ping a tenant IP, and how to connect to a tenant IP without a floating IP?

In the end I need the following:
1) A tenant network on 10.68.200.0/24 that I can ping, so that tenants can log in over SSH without a floating IP.
2) I have a requirement for 2 network interfaces on an instance. How do I accomplish this? Because every tenant has a fenced network, I only get one network when I spin up an instance from an image.

BR,
Murthy
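One common approach to both questions, sketched with placeholder values (router1 and the external gateway address 10.58.100.30 are assumptions, not taken from this thread): tenant IPs become reachable without floating IPs once the management node routes the tenant CIDR via the Neutron router's address on the external network, and a second interface is requested by repeating --nic at boot time.

# find the router's address on the external network
neutron router-port-list router1

# on the management node, route the tenant subnet via that address
ip route add 10.68.200.0/24 via 10.58.100.30

# boot an instance with two NICs, one per network
nova boot --flavor m1.small --image cirros \
    --nic net-id=<first-net-uuid> --nic net-id=<second-net-uuid> two-nic-vm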
                                
                                10 years, 5 months
                        
[Rdo-list] neutron-openvswitch-agent restart without ping loss
                                        by Chris
                                
Hello,

We made some changes on our compute nodes in /etc/neutron/neutron.conf, for example qpid_hostname, but nothing that affects the network infrastructure on the compute node. To apply the changes, I think we need to restart the neutron-openvswitch-agent service.

Restarting this service disconnects the VM for around one ping; the reason is that the restart recreates the int-br-bond0 and phy-br-bond0 interfaces:

ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --may-exist add-br br-int
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- set-fail-mode br-int secure
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --if-exists del-port br-int patch-tun
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --if-exists del-port br-int int-br-bond0
kernel: [73873.047999] device int-br-bond0 left promiscuous mode
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --if-exists del-port br-bond0 phy-br-bond0
kernel: [73873.086241] device phy-br-bond0 left promiscuous mode
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --may-exist add-port br-int int-br-bond0
kernel: [73873.287466] device int-br-bond0 entered promiscuous mode
ovs-vsctl: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=10 -- --may-exist add-port br-bond0 phy-br-bond0

Is there a way to apply these changes without losing pings?

Cheers
Chris
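One hedged pointer: later Neutron releases (Liberty onward, not Icehouse-era agents) added a graceful-restart option so the OVS agent keeps existing flows across a restart instead of tearing them down and rebuilding. The snippet below assumes such a release and the newer ML2 config path, neither of which is confirmed by this thread:

# /etc/neutron/plugins/ml2/openvswitch_agent.ini  (path varies by release)
[agent]
# keep flows in place when the agent starts, avoiding the dataplane blip
drop_flows_on_start = false

systemctl restart neutron-openvswitch-agent

On releases without that option, the del-port/add-port sequence shown in the log is unavoidable on restart, so the practical mitigation is restarting agents one compute node at a time inside a maintenance window.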
                                
                                10 years, 6 months
                        
                                        Re: [Rdo-list] FW: RDO build that passed CI (rc2)
                                        by Arash Kaffamanesh
                                
I did a CentOS fresh install with the following steps for AIO:

yum -y update
cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
yum install http://rdoproject.org/repos/openstack-kilo/rdo-release-kilo.rpm
yum install epel-release
cd /etc/yum.repos.d/
curl -O https://repos.fedorapeople.org/repos/openstack/openstack-trunk/epel-7/rc2...
yum install openstack-packstack
setenforce 0
packstack --allinone

and got again:

Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match.
Force execution using --nocheck, but the results are unpredictable.

But if I don't do a yum update before installing AIO, it finishes successfully and I can yum update afterwards. So if nobody can reproduce this issue, then something is wrong with my base CentOS install; I'll try to install the latest CentOS from ISO now.

Thanks!
Arash
On Fri, May 1, 2015 at 12:42 AM, Arash Kaffamanesh <ak(a)cloudssky.com> wrote:
> I'm installing CentOS with cobbler and kickstart (from centos7-mini) on 2
> machines
> and I'm trying a 2 node install. With rc1 it worked without yum update.
> I'll do a fresh install now with yum update and let you know.
>
> Thanks!
> Arash
>
>
>
> On Fri, May 1, 2015 at 12:23 AM, Alan Pevec <apevec(a)gmail.com> wrote:
>
>> 2015-05-01 0:12 GMT+02:00 Arash Kaffamanesh <ak(a)cloudssky.com>:
>> > But if I yum update it into 7.1, then we have the issue with nmcli:
>> >
>> > Error: nmcli (1.0.0) and NetworkManager (0.9.9.1) versions don't match.
>> > Force execution using --nocheck, but the results are unpredictable.
>>
>> Huh, again?! I thought that was solved after you did yum update...
>> My original answer to that is still the same "Not sure how could that
>> happen, nmcli is part of NetworkManager RPM."
>> Can you reproduce this w/o RDO in the picture, starting with the clean
>> centos installation? How are you installing centos?
>>
>> Cheers,
>> Alan
>>
>
>
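A quick way to check whether the mismatch is already present in the base system before RDO enters the picture (plain diagnostics, nothing packstack-specific):

rpm -q NetworkManager       # the packaged version
nmcli -v                    # the version the CLI reports for itself
rpm -qf $(which nmcli)      # confirm nmcli is owned by that same NetworkManager RPM

If nmcli and NetworkManager already disagree on a clean CentOS install, the problem lies in the base image or repositories rather than in the RDO packages.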
                                
                                10 years, 6 months
                        
                                        [Rdo-list] RDO build that passed CI (rc2)
                                        by Itzik Brown
                                
Hi,
I installed OpenStack Kilo (rc2) on RHEL 7.1 using the RDO repositories.
It's a distributed environment (controller and 2 compute nodes).
The installation process itself finished without errors.

Issues:
1) Problem with Horizon - getting a permission denied error.
   There is an old bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1150678. I added a comment there.
   Workaround: changing the ownership of /usr/share/openstack-dashboard/static/dashboard to apache:apache solves the issue.
2) The openstack-nova-novncproxy service fails to start.
   There is a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1200701
3) When enabling LBaaS, neutron-lbaas-agent fails to start:
   neutron-lbaas-agent: Error importing loadbalancer device driver: neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
   There is a bug: https://bugs.launchpad.net/neutron/+bug/1441107/ (a fix is in review for Kilo).
   Workaround: in /etc/neutron/lbaas_agent.ini change:
   device_driver = neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

Itzik
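Workarounds 1) and 3) above as a shell sketch (the path and driver string come from the post; the sed invocation and service restarts are assumptions, so review before running):

# 1) Horizon: hand the static files to the apache user
chown -R apache:apache /usr/share/openstack-dashboard/static/dashboard
systemctl restart httpd

# 3) LBaaS: point device_driver at the neutron_lbaas namespace
sed -i 's|^device_driver *=.*|device_driver = neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver|' \
    /etc/neutron/lbaas_agent.ini
systemctl restart neutron-lbaas-agent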
                                
                                10 years, 6 months
                        
                                        [Rdo-list] rdo-manager hostname issues with rabbitmq
                                        by James Slagle
                                
Hi, there were a few different threads related to rdo-manager and encountering rabbitmq errors related to the hostname when installing the undercloud.

I was able to reproduce a couple of different issues, and in my environment it came down to $HOSTNAME not matching the defined FQDN hostname. You can use hostnamectl to set a hostname, but that does not update $HOSTNAME in your current shell.

rabbitmq-env, which is sourced at the start of rabbitmq-server, reads the hostname from $HOSTNAME in some scenarios, and then uses that value to define the rabbit node name. Therefore, if you have a mismatch between $HOSTNAME and the actual FQDN, things can go off the rails with rabbitmq.

I've tried to address this issue in this patch:
https://review.gerrithub.io/#/c/232052/

For the virt setup, setting the FQDN hostname and adding it to /etc/hosts will now be done automatically, with instack.localdomain used. The hostname can always be redefined later if desired.

For baremetal, I've added some notes to the docs to hopefully cover the requirements.

--
James Slagle
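A minimal sketch of keeping the three places the hostname lives in sync before installing the undercloud (undercloud.example.com is a placeholder FQDN, not from this thread):

sudo hostnamectl set-hostname undercloud.example.com
export HOSTNAME=$(hostname -f)    # hostnamectl does not update the current shell's $HOSTNAME
grep -q "$(hostname -f)" /etc/hosts || \
    echo "127.0.0.1 $(hostname -f) $(hostname -s)" | sudo tee -a /etc/hosts

Logging out and back in also refreshes $HOSTNAME, which is less error-prone than exporting it by hand.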
                                
                                10 years, 6 months
                        
                                        [Rdo-list] sourcing admin-openrc.sh gives me an error
                                        by Chamambo Martin
                                
http://docs.openstack.org/icehouse/install-guide/install/yum/content/keystone-verify.html

I have followed this document to check whether my admin-openrc.sh file is configured correctly, and everything works as expected until I do this:

source admin-openrc.sh
keystone user-list

This command returns an error:

[root@controller ~]# keystone user-list
WARNING:keystoneclient.httpclient:Failed to retrieve management_url from token

What am I missing?

NB: all the above commands work if I manually input the following on the command line, but not when I source the admin-openrc.sh file:

export OS_SERVICE_TOKEN=xxxxxxxxxxxxxxxxxxxxx
export OS_SERVICE_ENDPOINT=http://xxxxxxx.ai.co.zw:35357/v2.0
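For reference, the Icehouse guide linked above expects admin-openrc.sh to carry credentials rather than the bootstrap token; a sketch of what it typically contains (ADMIN_PASS and the controller host are placeholders):

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0

It is also worth unsetting the bootstrap variables first (unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT); leaving them exported alongside credentials is a common trigger for keystoneclient's management_url warning.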
                                
                                10 years, 6 months
                        
                                        [Rdo-list] Instance auto resume after compute node restart
                                        by Chris
                                
Hello,

We want instances to resume their status automatically after a compute node reboot/failure, meaning that a VM which was in the running state before should be started automatically. We are using Icehouse.

There is the option resume_guests_state_on_host_boot=true|false, which should do exactly what we want:

# Whether to start guests that were running before the host
# rebooted (boolean value)
resume_guests_state_on_host_boot=true

I tried it out and it just didn't work. Libvirt fails to start the VMs because it cannot find the interfaces:

2015-04-30 06:16:00.783+0000: 3091: error : virNetDevGetMTU:343 : Cannot get interface MTU on 'qbr62d7e489-f8': No such device
2015-04-30 06:16:00.897+0000: 3091: warning : qemuDomainObjStart:6144 : Unable to restore from managed state /var/lib/libvirt/qemu/save/instance-0000025f.save. Maybe the file is corrupted?

I did some research and found corresponding experiences from other users:
"AFAIK at the present time OpenStack (Icehouse) still not completely aware about environments inside it, so it can't restore completely after reboot."
Source: http://stackoverflow.com/questions/23150148/how-to-get-instances-back-after-reboot-in-openstack

Is this feature really broken, or am I just missing something?

Thanks in advance!

Cheers
Chris
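A hedged reading of the log: the qbrXXXX bridge is created by nova-compute when it plugs the VIF, so if libvirt tries to restore the domain before nova-compute has run, the tap plumbing does not exist yet and the MTU error above is exactly what you see. A commonly suggested setup (an assumption here, not confirmed by this thread; service names assume RDO packaging) is to let only Nova resume guests:

# let Nova, not libvirt-guests, bring instances back after boot
systemctl disable libvirt-guests

# /etc/nova/nova.conf on each compute node
[DEFAULT]
resume_guests_state_on_host_boot = true

systemctl restart openstack-nova-compute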
                                
                                10 years, 6 months