Eventually got it working. For the benefit of all, here's how I did it,
including bizarre workarounds.
1. The Jenkins job is a shell script, essentially SSH'ing to the node and running a
script.
SSH="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o GlobalKnownHostsFile=/dev/null"
${SSH} root@${CONTROLLER} "/home/public/scripts/devstack.sh"
echo "Running stack!"
${SSH} root@${CONTROLLER} "su - stack -c \"cd /opt/stack ; git config --global ; ./stack.sh\""
${SSH} root@${CONTROLLER} "firewall-cmd --add-service http"
2. The /home/public/scripts/devstack.sh script:
firewall-cmd --add-service http || true
git config --global
/opt/devstack/tools/create-stack-user.sh
chown -R stack:stack /opt/devstack/
mv /opt/devstack /opt/stack
cat << 'EOF' >> local.conf
<your local.conf comes here>
EOF
sed -i 's/Defaults requiretty/#Defaults requiretty/' /etc/sudoers
mv local.conf stack
chown stack:stack stack/local.conf
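One caveat with the sed edit of /etc/sudoers in the script above: a bad edit to
sudoers can lock you out of sudo entirely. Here's a sketch of a slightly safer
variant that keeps a .bak copy; it is demonstrated on a local demo file (an
assumption for illustration) -- set SUDOERS=/etc/sudoers to run it for real.

```shell
# Comment out "Defaults requiretty", keeping a .bak backup of the file.
# SUDOERS defaults to a local demo file here (assumption, for illustration);
# set SUDOERS=/etc/sudoers to apply it to the real file.
SUDOERS=${SUDOERS:-./sudoers.demo}
[ -f "$SUDOERS" ] || printf 'Defaults    requiretty\n' > "$SUDOERS"  # demo content
sed -i.bak 's/^Defaults[[:space:]]*requiretty/#&/' "$SUDOERS"
grep -q '^#Defaults' "$SUDOERS" && echo "requiretty disabled in $SUDOERS"
```

The sed pattern tolerates variable whitespace between "Defaults" and
"requiretty", which some sudoers files use.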
Y.
-----Original Message-----
From: Kashyap Chamarthy [mailto:kchamart@redhat.com]
Sent: Tuesday, February 24, 2015 11:12 AM
To: Kaul, Yaniv
Cc: Alan Pevec; Martin Mágr; Rdo-list(a)redhat.com
Subject: Re: [Rdo-list] [openstack-packstack] Initial Kilo release
On Tue, Feb 24, 2015 at 03:52:18AM -0500, Kaul, Yaniv wrote:
> > -----Original Message-----
> > From: Kashyap Chamarthy [mailto:kchamart@redhat.com]
> > Sent: Tuesday, February 24, 2015 10:23 AM
> > To: Kaul, Yaniv
> > Cc: Alan Pevec; Martin Mágr; Rdo-list(a)redhat.com
> > Subject: Re: [Rdo-list] [openstack-packstack] Initial Kilo release
> >
> > On Mon, Feb 23, 2015 at 05:36:18PM -0500, Kaul, Yaniv wrote:
> > > Anything is better than Devstack...
> >
> > Hmm, most (80%) of my test environment is via DevStack and I find it
> > a huge time saver. Probably I just got used to it, I find it
> > extremely quick (after the first
> > run) to setup/tear-down environments -- just about 5 minutes or
> > less. I know people running multi-node DevStack environments for
> > rapid testing as well. :-)
> >
> > --
> > /Kashyap
>
> By definition, pulling every single component from its latest greatest
> upstream means it cannot be stable.
To avoid that, you can check out a stable release of DevStack, which will in
turn use only stable branches of the other OpenStack projects:
DevStack>$ git checkout remotes/origin/stable/juno
> In my specific case, it failed on heat - which I don't care about and
> is not very useful to my work.
Also, to avoid issues like that, the 'ENABLED_SERVICES' bit in local.conf is
important. You can list just the components that you use; it's been rock-solid
for me that way. I use only the components that I care about and absolutely
nothing else -- Nova, Glance, Neutron and Keystone.
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
[NOTE: I also disable the 'n-cert' (Nova cert) service - which is
only for EC2, and is slated to be removed upstream.]
Since you care about Cinder too, you can just add the Cinder-specific services
to ENABLED_SERVICES along with the above.
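For example, a sketch of what that addition could look like (an assumption on
my part: c-api, c-sch and c-vol are the usual DevStack names for the Cinder
API, scheduler and volume services -- check your DevStack branch):

```shell
# localrc fragment: the base list from above, plus Cinder's services.
# c-api/c-sch/c-vol are assumed names; verify against your DevStack checkout.
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta,c-api,c-sch,c-vol
```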
Here's the localrc conf file I use:
https://kashyapc.fedorapeople.org/virt/openstack/2-minimal_devstack_localrc.conf
An added benefit of the above config is a smaller footprint inside the
DevStack VM (with a single Nova instance running, I see about 1.3 GB of
memory usage):
https://kashyapc.fedorapeople.org/virt/openstack/heuristics/Memory-profiling-inside-DevStack.txt
You can compare what you see in your env by running the same tool, $ ps_mem
(as root). To install: $ yum install ps_mem
> I've tried disabling it (by adding 'disable_service heat h-api h-api-cfn
> h-api-cw h-eng' to my localrc) and then things broke even worse.
> Re-trying...
>
> Here's my local.conf:
> [[local|localrc]]
> ADMIN_PASSWORD=123456
> DATABASE_PASSWORD=$ADMIN_PASSWORD
> RABBIT_PASSWORD=$ADMIN_PASSWORD
> SERVICE_PASSWORD=$ADMIN_PASSWORD
> SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d54
> #FIXED_RANGE=172.31.1.0/24
> #FLOATING_RANGE=192.168.20.0/25
> HOST_IP=10.103.233.161
> CINDER_ENABLED_BACKENDS=xio_gold:xtremio_1
>
> [[post-config|$CINDER_CONF]]
> [DEFAULT]
> rpc_response_timeout=600
> service_down_time=600
> volume_name_template = CI-%s
> enabled_backends=xtremio_1
> default_volume_type=xtremio_1
> [xtremio_1]
> volume_driver=cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
> san_ip=vxms-xbrickdrm168
> san_login=admin
> san_password=admin
> volume_backend_name = xtremio_1
>
>
> Y.
--
/kashyap