[Rdo-list] Overcloud Horizon
by AliReza Taleghani
The overcloud has finally been deployed via the following:
$ openstack overcloud deploy --compute-scale 4 --templates --compute-flavor compute --control-flavor control
http://paste.ubuntu.com/12775291/
It seems I missed something, because I expected to have Horizon at the
end, but it doesn't appear to be available right now.
Do I need to add any other templates, or better, how can I force my
controller to serve the Horizon service, if that's possible?
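For what it's worth, Horizon normally lives on the controller, so a quick probe tells you whether anything is answering on the dashboard URL at all. A minimal sketch, assuming the usual /dashboard path (CONTROLLER_IP is a placeholder for the controller's public address):

```shell
# Tiny helper: print only the HTTP status code for a URL, nothing else.
probe() { curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1"; }

# Placeholder address; substitute your overcloud controller's public IP.
CONTROLLER_IP=192.0.2.10
probe "http://$CONTROLLER_IP/dashboard"
```

A 200 or 302 means something is serving the dashboard; 000 means nothing is listening there at all.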
tnx
--
Sincerely,
Ali R. Taleghani
9 years
[Rdo-list] error in doing deployment with RDO-Manager
by Erming Pei
Hi,
I am trying to deploy OpenStack with RDO Manager, but I am
currently stuck on executing the "openstack overcloud deploy
--templates" command (I am just following the user guide for a basic
deployment, without changing or creating any templates yet):
[stack@gcloudcon-3 ~]$ openstack overcloud deploy --templates
Deploying templates in the directory
/usr/share/openstack-tripleo-heat-templates
ERROR: openstack Heat Stack create failed.
[stack@gcloudcon-3 ~]$ heat stack-list
+--------------------------------------+------------+---------------+----------------------+
| id                                   | stack_name | stack_status  | creation_time        |
+--------------------------------------+------------+---------------+----------------------+
| 34eb7053-e504-4183-b39b-e87d0d3f7b4c | overcloud  | CREATE_FAILED | 2015-10-09T17:40:17Z |
+--------------------------------------+------------+---------------+----------------------+
[stack@gcloudcon-3 ~]$ ironic node-list
+--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+
| UUID                                 | Name | Instance UUID                        | Power State | Provision State | Maintenance |
+--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+
| 248f2695-c43b-4d13-8aca-a3f5732f72ac | None | 4971f77b-d233-4431-a8ee-b29d18262394 | power off   | error           | False       |
| 3cdc8f0e-eb3f-47df-b4f0-bc68b671e23f | None | 6388c30a-97f3-4141-b02f-b53d36782cbd | power off   | error           | False       |
+--------------------------------------+------+--------------------------------------+-------------+-----------------+-------------+
[stack@gcloudcon-3 ~]$ ironic node-show 248f2695-c43b-4d13-8aca-a3f5732f72ac
+------------------------+------------------------------------------------------------------------+
| Property               | Value                                                                  |
+------------------------+------------------------------------------------------------------------+
| target_power_state     | None                                                                   |
| extra                  | {u'newly_discovered': u'true', u'block_devices': {u'serials':          |
|                        | [u'600605b0016ae53012feea5d1b60cdb9',                                  |
|                        | u'600605b0016ae53012ff5378180b6c6f']}, u'hardware_swift_object': u     |
|                        | 'extra_hardware-248f2695-c43b-4d13-8aca-a3f5732f72ac'}                 |
| last_error             | Failed to tear down. Error: [Errno 13] Permission denied:              |
|                        | '/tftpboot/master_images'                                              |
| updated_at             | 2015-10-13T21:19:14+00:00                                              |
| maintenance_reason     | None                                                                   |
| provision_state        | error                                                                  |
| uuid                   | 248f2695-c43b-4d13-8aca-a3f5732f72ac                                   |
| console_enabled        | False                                                                  |
| target_provision_state | available                                                              |
| maintenance            | False                                                                  |
| inspection_started_at  | None                                                                   |
| inspection_finished_at | None                                                                   |
| power_state            | power off                                                              |
| driver                 | pxe_ipmitool                                                           |
| reservation            | None                                                                   |
| properties             | {u'memory_mb': u'131072', u'cpu_arch': u'x86_64', u'local_gb': u'463', |
|                        | u'cpus': u'8', u'capabilities': u'boot_option:local'}                  |
| instance_uuid          | 4971f77b-d233-4431-a8ee-b29d18262394                                   |
| name                   | None                                                                   |
| driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'10.0.8.30',           |
|                        | u'ipmi_username': u'USERID', u'deploy_kernel': u'9e82182f-c1a0-420c-   |
|                        | a7dc-b532c36892ca', u'deploy_ramdisk': u'982008b4-2d53-41db-803c-      |
|                        | 3d97405a2e0a'}                                                         |
| created_at             | 2015-09-02T20:10:39+00:00                                              |
| driver_internal_info   | {u'clean_steps': None, u'is_whole_disk_image': False}                  |
| chassis_uuid           |                                                                        |
| instance_info          | {}                                                                     |
+------------------------+------------------------------------------------------------------------+
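The last_error row is the actionable part: the conductor was denied write access under /tftpboot during tear-down. A minimal sketch of that failure mode and its fix, played out on a scratch directory rather than the real undercloud (there you would restore ownership and permissions on /tftpboot/master_images for the user the conductor runs as, typically 'ironic'):

```shell
# Scratch reproduction: a master_images directory without write permission.
root=$(mktemp -d)
mkdir -p "$root/tftpboot/master_images"
chmod 555 "$root/tftpboot/master_images"   # read/execute only, as in the error

# The fix: give the service user write access back.
chmod 755 "$root/tftpboot/master_images"
touch "$root/tftpboot/master_images/test-image" && echo "writable again"
```

On the real system the equivalent would be a chown/chmod on /tftpboot/master_images, followed by retrying the deploy.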
Is there any hint for me?
Thanks,
Erming
--
---------------------------------------------
Erming Pei, Ph.D
Senior System Analyst; Grid/Cloud Specialist
Research Computing Group
Information Services & Technology
University of Alberta, Canada
Tel: +1 7804929914 Fax: +1 7804921729
---------------------------------------------
9 years
Re: [Rdo-list] [OpenStack-docs] [install-guide] Status of RDO
by Steve Gordon
----- Original Message -----
> From: "Lana Brindley" <openstack(a)lanabrindley.com>
> To: openstack-docs(a)lists.openstack.org
>
>
> On 14/10/15 17:36, Christian Berendt wrote:
> > On 10/14/2015 08:22 AM, Lana Brindley wrote:
> >> We've been unable to obtain good pre-release packages from Red Hat for the
> >> Fedora and Red Hat/CentOS repos, despite our best efforts.
> >
> > We tested with the Delorean repository. Why does this not work?
> >
>
> The Delorean repo is a pretty hilarious combination of old, out of date
> config files, and a few Mitaka packages thrown in for good measure. Red Hat
> have confirmed that Delorean is the only pre-release packages repo available
> to us as of Liberty, but because the packages aren't tied to a release it
> makes it virtually impossible to test against.
>
> The Red Hat packages, on the other hand, are missing quite a few crucial
> deps, including the PyMySQL deps.
Are there other examples of missing deps? My understanding is that the package name is python-mysql in Fedora etc.:
https://www.redhat.com/archives/rdo-list/2015-October/msg00004.html
-Steve
> Right now, the Fedora testing situation is slightly better than the Red
> Hat/CentOS one, thanks to Delorean being in slightly better shape, and
> thanks to Brian Moss's dogged determination in getting it working. But we're
> not confident enough in any of the RDO work right now to want to release
> this. We really need to wait for the packages so we can test properly and
> release.
>
> We've spoken to a few Red Hat contacts today to try and get a better
> understanding of what's going on, but at the moment, that's all we have.
>
> It's very disappointing, but I'm hoping we can test and publish this very
> soon.
>
> Lana
>
> - --
> Lana Brindley
> Technical Writer
> Rackspace Cloud Builders Australia
> http://lanabrindley.com
>
> _______________________________________________
> OpenStack-docs mailing list
> OpenStack-docs(a)lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs
>
--
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform
9 years
[Rdo-list] Software Factory for RDO experiment
by Frédéric Lepied
Hi,
After some discussions, we came up with the idea of having the RDO
project own its build infrastructure. To play with this idea, we
propose an experiment: using an OpenStack-like workflow to build RDO
packages by deploying a Software Factory instance
(http://softwarefactory-project.io) on top of an RDO or RHEL-OSP
cloud. That will allow us to have our own Gerrit, Zuul, Nodepool and
Jenkins instances, as in the OpenStack upstream project, while adding
our package-building-specific needs, such as using the Delorean and
Mock/Koji/Mash machinery.
The objectives of these changes are:
1. to have a full gating CI for RDO, to never break the package repository.
2. to be in control of our infrastructure.
3. to simplify the workflow where we can, making it more efficient and
easier to grasp.
Nothing is set in stone so feel free to comment or ask questions.
Cheers,
--
Fred - May the Source be with you
9 years
[Rdo-list] Best known working OS for RDO packstack
by Outback Dingo
OK, so what's the current best known working ISO for RDO packstack? I've
got a couple of blades here.
I'd like to do an all-in-one on one blade, then join a secondary
compute-only node.
Thoughts and input appreciated, as I don't want to jump through hoops like
last time.
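Not an answer on the ISO itself (CentOS 7 was the commonly used base for RDO at the time), but for the two-blade layout described, the knobs usually live in a packstack answer file. A hedged fragment, with placeholder IPs standing in for the two blades:

```ini
# answers.txt (fragment; generate the full file with `packstack --gen-answer-file=answers.txt`)
CONFIG_CONTROLLER_HOST=192.0.2.11             # first blade: all-in-one
CONFIG_NETWORK_HOSTS=192.0.2.11
CONFIG_COMPUTE_HOSTS=192.0.2.11,192.0.2.12    # second blade joins as compute-only
```

Running `packstack --answer-file=answers.txt` with both hosts listed should bring the second blade in as compute-only.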
9 years
[Rdo-list] Cinder API Call error
by AliReza Taleghani
Hi;
When I launch a new instance, there is an error in the logs, as follows:
#####
Oct 14 06:00:25 overcloud-controller-0 cinder-api[27185]: 2015-10-14
06:00:25.332 27232 ERROR cinder.api.middleware.fault
[req-e0e7a1f4-6caf-422e-ace1-381d87a85b11 9664b863bbba4ff4a1bf5936ce2202c2
a1572260d6f14c4a8f0e1a209eeeb7b4 - - -] Caught error: Authorization failed:
Unable to establish connection to http://localhost:5000/v3/auth/tokens
######
The instance disk image can't be attached, so the instance doesn't boot
from disk...
I changed the Glance API version in cinder.conf to 1 and restarted all the
Cinder services, but that did not help overcome the problem.
I have deployed the overcloud via TripleO from a known-good trunk undercloud....
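A connection failure against http://localhost:5000 usually means cinder-api's auth settings point at localhost instead of the Keystone endpoint on the controller. A sketch of the sort of fix, demonstrated on a scratch copy of the relevant [keystone_authtoken] section (on a controller the real file is /etc/cinder/cinder.conf, and 192.0.2.10 is a placeholder for the Keystone address):

```shell
# Scratch copy of the section; the localhost values mirror the error above.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[keystone_authtoken]
auth_uri = http://localhost:5000
identity_uri = http://localhost:35357
EOF

# Repoint the auth endpoints at the controller's Keystone address.
sed -i 's|http://localhost:|http://192.0.2.10:|' "$conf"
grep -n 'auth_uri\|identity_uri' "$conf"
```

After editing the real file, the Cinder services need a restart to pick up the change.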
Sincerely,
Ali R. Taleghani
@linkedIn <http://ir.linkedin.com/in/taleghani>
9 years
[Rdo-list] Basic HA deployment
by Marius Cornea
Hi everyone,
I tried a deployment on virt with 3 x ctrls + 1 x compute, and it currently fails due to a ceilometer dbsync issue (BZ#1271002). To work around it I did the following. This gets the deployment to succeed, but some of the Neutron-related Pacemaker resources are stopped (same as BZ#1270964):
1. Mount the overcloud-full.qcow2 image on a host with libguestfs-tools installed (I used the physical machine where I run the virt env for this)
guestfish --rw -a overcloud-full.qcow2
><fs> run
><fs> mount /dev/sda /
><fs> vi /etc/puppet/modules/ceilometer/manifests/init.pp
#Apply the changes below:
diff -c2 init.pp.orig init.pp.new
*** init.pp.orig 2015-10-13 14:35:57.514488094 +0000
--- init.pp.new 2015-10-13 14:35:01.614488094 +0000
***************
*** 154,157 ****
--- 154,158 ----
$qpid_reconnect_interval_max = 0,
$qpid_reconnect_interval = 0,
+ $mongodb_replica_set = 'tripleo',
) {
***************
*** 293,296 ****
--- 294,298 ----
'database/metering_time_to_live' : value => $metering_time_to_live;
'database/alarm_history_time_to_live' : value => $alarm_history_time_to_live;
+ 'database/mongodb_replica_set' : value => $mongodb_replica_set;
}
><fs> quit
2. Get the overcloud-full.qcow2 image back on the undercloud and update the existing Glance image:
openstack overcloud image upload --update-existing
3. Deploy overcloud:
openstack overcloud deploy --templates ~/templates/my-overcloud -e ~/templates/my-overcloud/environments/network-isolation.yaml -e ~/templates/network-environment.yaml --control-scale 3 --compute-scale 1 --libvirt-type qemu -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server clock.redhat.com
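Step 1 can also be done non-interactively. A sketch of the same two-line edit applied with sed, demonstrated here on a scratch copy of just the affected hunks of init.pp (inside the image, the same sed could be run via virt-customize instead of guestfish/vi):

```shell
# Scratch copy of only the two regions the diff touches (not the full file).
pp=$(mktemp)
cat > "$pp" <<'EOF'
  $qpid_reconnect_interval = 0,
) {
    'database/alarm_history_time_to_live' : value => $alarm_history_time_to_live;
  }
EOF

# Add the new parameter and the new config line, matching the diff above.
sed -i \
  -e "s|\$qpid_reconnect_interval = 0,|&\n  \$mongodb_replica_set = 'tripleo',|" \
  -e "s|alarm_history_time_to_live;|&\n    'database/mongodb_replica_set' : value => \$mongodb_replica_set;|" \
  "$pp"
grep -n mongodb_replica_set "$pp"
```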
Thanks,
Marius
9 years
[Rdo-list] Test day issue: parse error
by Udi Kalifon
Hello,
We are encountering an error during instack-virt-setup:
++ sudo virsh net-list --all --persistent
++ grep default
++ awk 'BEGIN{OFS=":";} {print $2,$3}'
+ default_net=active:yes
+ state=active
+ autostart=yes
+ '[' active '!=' active ']'
+ '[' yes '!=' yes ']'
Domain seed has been undefined
seed VM not running
seed VM not defined
Created machine seed with UUID f59eb2f0-c7ac-429e-950c-df2fd4b6f301
Seed VM created with MAC 52:54:00:05:af:0f
parse error: Invalid string: control characters from U+0000 through U+001F
must be escaped at line 32, column 30
Any ideas? I don't know which file causes this parse error; it's not
instack-virt-setup itself.
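That message has the shape of a JSON parser complaint: some file the script feeds through a JSON tool contains a raw (unescaped) control character at line 32, column 30. One hedged way to hunt for the culprit, demonstrated on a scratch file with a literal tab inside a string:

```shell
# Scratch file: a literal TAB inside a JSON string is invalid JSON.
bad=$(mktemp)
printf '{"name": "seed\tvm"}\n' > "$bad"

# Flag any line containing a raw control character, with line numbers.
grep -n '[[:cntrl:]]' "$bad" && echo "raw control character found"
```

Running that grep over the JSON files instack-virt-setup reads and writes should point at the offending file and line.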
Thanks.
9 years
[Rdo-list] fatal: The remote end hung up unexpectedly
by Udi Kalifon
I hit a failure while building images on the RDO test day:
/var/tmp/image.Hu0HkfDD/mnt/opt/stack/puppet-modules/tempest ~/images
From
/home/stack/.cache/image-create/source-repositories/puppet_tempest_2aa8dee360256cbbbbc450f20322094249aa9dba
* [new branch] master -> fetch_master
HEAD is now at 09b2b5c Try to use zuul-cloner to prepare fixtures
~/images
0+1 records in
0+1 records out
34 bytes (34 B) copied, 7.9125e-05 s, 430 kB/s
Caching puppetlabs-vcsrepo from
https://github.com/puppetlabs/puppetlabs-vcsrepo.git in
/home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3
Cloning into
'/home/stack/.cache/image-create/source-repositories/puppetlabs_vcsrepo_23ad1fc998579fd9683437046883d2cbdc42d3e3.tmp'...
error: RPC failed; result=7, HTTP code = 0
fatal: The remote end hung up unexpectedly
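"RPC failed; result=7, HTTP code = 0" generally means the HTTPS connection dropped mid-transfer rather than a server-side error. Two common mitigations, sketched below (the postBuffer size is an arbitrary commonly used value, and the retry helper is a hypothetical convenience, not part of any tool here):

```shell
# A larger HTTP buffer sometimes helps flaky HTTPS transfers.
git config --global http.postBuffer 524288000

# Simple retry helper for transient network failures.
retry() {
  for i in 1 2 3; do
    "$@" && return 0
    echo "attempt $i failed: $*" >&2
    sleep "$i"
  done
  return 1
}

# Usage (network required), e.g.:
#   retry git clone https://github.com/puppetlabs/puppetlabs-vcsrepo.git
```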
Has anyone ever seen an issue like this?
Thanks,
Udi.
9 years