[Rdo-list] (no subject)
by Nathan M.
So I've tried to set up a local controller node and have run into a problem
getting Cinder to create a volume: the service can't find anywhere to place
the volume I create.
If I disable and re-enable the service it shows as up again, so I'm not sure
how to proceed. I'll also note that nothing ever shows up in /etc/cinder/volumes.
Thanks in advance for any help, gents/gals.
--Nathan
[root@node1 cinder(openstack_admin)]# cinder service-list
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host             | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | node1.local      | nova | enabled | up    | 2014-07-11T19:29:45.000000 | None            |
| cinder-volume    | node1.local@     | nova | enabled | up    | 2014-07-11T19:29:46.000000 | None            |
| cinder-volume    | node1.local@lvm1 | nova | enabled | down  | 2014-07-11T19:28:51.000000 | None            |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
[root@node1 cinder(openstack_admin)]# openstack-status
== Nova services ==
openstack-nova-api: active
openstack-nova-cert: active
openstack-nova-compute: dead (disabled on boot)
openstack-nova-network: dead (disabled on boot)
openstack-nova-scheduler: active
openstack-nova-conductor: active
== Glance services ==
openstack-glance-api: active
openstack-glance-registry: active
== Keystone service ==
openstack-keystone: active
== Horizon service ==
openstack-dashboard: active
== neutron services ==
neutron-server: active
neutron-dhcp-agent: inactive (disabled on boot)
neutron-l3-agent: inactive (disabled on boot)
neutron-metadata-agent: inactive (disabled on boot)
neutron-lbaas-agent: inactive (disabled on boot)
neutron-openvswitch-agent: inactive (disabled on boot)
== Swift services ==
openstack-swift-proxy: active
openstack-swift-account: dead (disabled on boot)
openstack-swift-container: dead (disabled on boot)
openstack-swift-object: dead (disabled on boot)
== Cinder services ==
openstack-cinder-api: active
openstack-cinder-scheduler: active
openstack-cinder-volume: active
openstack-cinder-backup: inactive (disabled on boot)
== Ceilometer services ==
openstack-ceilometer-api: active
openstack-ceilometer-central: active
openstack-ceilometer-compute: dead (disabled on boot)
openstack-ceilometer-collector: active
== Heat services ==
openstack-heat-api: active
openstack-heat-api-cfn: active
openstack-heat-api-cloudwatch: inactive (disabled on boot)
openstack-heat-engine: active
== Support services ==
openvswitch: dead (disabled on boot)
messagebus: active
tgtd: active
rabbitmq-server: active
memcached: active
== Keystone users ==
+----------------------------------+------------+---------+----------------------+
| id                               | name       | enabled | email                |
+----------------------------------+------------+---------+----------------------+
| 555e3e826c9f445c9975d0e1c6e00fc6 | admin      | True    | admin@local          |
| 4cbb547624004bbeb650d9f73875c1a2 | ceilometer | True    | ceilometer@localhost |
| 492d8baa1ae94e8dbf503187b5ccd0a9 | cinder     | True    | cinder@localhost     |
| fdcac23cb0bc4cd08712722b213d2e93 | glance     | True    | glance@localhost     |
| fc0d0960be5b4714b37f969fbc48d9e4 | heat       | True    | heat@localhost       |
| 48a6c949564d4e96b465fe670c92015c | heat-cfn   | True    | heat-cfn@localhost   |
| 1c531b23585e4cefb7fff7659cded687 | neutron    | True    | neutron@localhost    |
| 6ede5eb09ca64cdb934f6c92b20ba3b3 | nova       | True    | nova@localhost       |
| 6ba7e665409e4ee883e29e6def759255 | swift      | True    | swift@localhost      |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+--------+-------------+------------------+-----------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------+-------------+------------------+-----------+--------+
| 6019cfa8-ee46-4617-8b56-ae5dc82013a3 | cirros | qcow2       | bare             | 237896192 | active |
+--------------------------------------+--------+-------------+------------------+-----------+--------+
== Nova managed services ==
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | node1.local | internal | enabled | up    | 2014-07-11T19:31:14.000000 | -               |
| nova-scheduler   | node1.local | internal | enabled | up    | 2014-07-11T19:31:15.000000 | -               |
| nova-conductor   | node1.local | internal | enabled | up    | 2014-07-11T19:31:14.000000 | -               |
| nova-cert        | node1.local | internal | enabled | up    | 2014-07-11T19:31:15.000000 | -               |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+-------+------+
| ID | Label | Cidr |
+--------------------------------------+-------+------+
| 9295ee23-c93b-43f6-801e-51d67e66313f | net1 | - |
+--------------------------------------+-------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
[root@node1 cinder(openstack_admin)]# nova volume-create --volume-type lvm --availability-zone nova 1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-07-11T19:34:43.064107 |
| display_description | - |
| display_name | - |
| encrypted | False |
| id | 17959681-7b63-4dd2-b856-083aef246fd9 |
| metadata | {} |
| size | 1 |
| snapshot_id | - |
| source_volid | - |
| status | creating |
| volume_type | lvm |
+---------------------+--------------------------------------+
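(The create request is accepted asynchronously, so the volume sits in "creating" and then flips to "error" when scheduling fails; a quick way to confirm that before digging into logs, assuming the standard cinder CLI:)
cinder list
cinder show 17959681-7b63-4dd2-b856-083aef246fd9    # status ends up as "error" here; the reason is in the scheduler/volume logs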
[root@node1 cinder(openstack_admin)]# tail -5 scheduler.log
2014-07-11 12:34:43.200 8665 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'555e3e826c9f445c9975d0e1c6e00fc6', 'tenant': u'ff6a2b534e984db58313ae194b2d908c', 'user_identity': u'555e3e826c9f445c9975d0e1c6e00fc6 ff6a2b534e984db58313ae194b2d908c - - -'}
2014-07-11 12:34:43.275 8665 ERROR cinder.scheduler.filters.capacity_filter [req-2acb85f1-5b7b-4b63-bf95-9037338cb52b 555e3e826c9f445c9975d0e1c6e00fc6 ff6a2b534e984db58313ae194b2d908c - - -] Free capacity not set: volume node info collection broken.
2014-07-11 12:34:43.275 8665 WARNING cinder.scheduler.filters.capacity_filter [req-2acb85f1-5b7b-4b63-bf95-9037338cb52b 555e3e826c9f445c9975d0e1c6e00fc6 ff6a2b534e984db58313ae194b2d908c - - -] Insufficient free space for volume creation (requested / avail): 1/0.0
2014-07-11 12:34:43.325 8665 ERROR cinder.scheduler.flows.create_volume [req-2acb85f1-5b7b-4b63-bf95-9037338cb52b 555e3e826c9f445c9975d0e1c6e00fc6 ff6a2b534e984db58313ae194b2d908c - - -] Failed to schedule_create_volume: No valid host was found.
2014-07-11 12:35:16.253 8665 WARNING cinder.context [-] Arguments dropped when creating context: {'user': None, 'tenant': None, 'user_identity': u'- - - - -'}
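The scheduler only reports what the cinder-volume service publishes, so the volume-side log usually shows why the capacity report is broken. A minimal place to look, assuming the log_dir=/var/log/cinder setting from the config pasted below:
tail -50 /var/log/cinder/volume.log    # look for LVM / volume group errors from the cinder-volume backend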
[root@node1 cinder(openstack_admin)]# cinder service-disable node1.local@lvm1 cinder-volume
+------------------+---------------+----------+
| Host | Binary | Status |
+------------------+---------------+----------+
| node1.local@lvm1 | cinder-volume | disabled |
+------------------+---------------+----------+
[root@node1 cinder(openstack_admin)]# cinder service-enable node1.local@lvm1 cinder-volume
+------------------+---------------+---------+
| Host | Binary | Status |
+------------------+---------------+---------+
| node1.local@lvm1 | cinder-volume | enabled |
+------------------+---------------+---------+
[root@node1 cinder(openstack_admin)]# cinder service-list
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host             | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | node1.local      | nova | enabled | up    | 2014-07-11T19:36:29.000000 | None            |
| cinder-volume    | node1.local@     | nova | enabled | up    | 2014-07-11T19:36:30.000000 | None            |
| cinder-volume    | node1.local@lvm1 | nova | enabled | up    | 2014-07-11T19:36:35.000000 | None            |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
[root@node1 cinder(openstack_admin)]# sed -e '/^#/d' -e '/^$/d' /etc/cinder/cinder.conf
[DEFAULT]
amqp_durable_queues=False
rabbit_host=localhost
rabbit_port=5672
rabbit_hosts=localhost:5672
rabbit_userid=openstack
rabbit_password=
rabbit_virtual_host=/
rabbit_ha_queues=False
notification_driver=cinder.openstack.common.notifier.rpc_notifier
rpc_backend=cinder.openstack.common.rpc.impl_kombu
control_exchange=openstack
osapi_volume_listen=0.0.0.0
api_paste_config=/etc/cinder/api-paste.ini
glance_host=192.168.0.6
auth_strategy=keystone
enabled_backends=
debug=False
verbose=True
log_dir=/var/log/cinder
use_syslog=False
iscsi_ip_address=192.168.0.6
volume_backend_name=DEFAULT
iscsi_helper=tgtadm
volumes_dir=/etc/cinder/volumes
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
[BRCD_FABRIC_EXAMPLE]
[database]
connection=mysql://cinder:@localhost/cinder
idle_timeout=3600
[fc-zone-manager]
[keymgr]
[keystone_authtoken]
[matchmaker_ring]
[ssl]
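For comparison, when a service registers itself as host@lvm1, the multi-backend layout in cinder.conf usually looks something like the following sketch (illustrative values only, not necessarily what this node needs):
[DEFAULT]
enabled_backends=lvm1
[lvm1]
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI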
[root@node1 cinder(openstack_admin)]# lvdisplay cinder-volumes
--- Logical volume ---
LV Path /dev/cinder-volumes/cinder-volumes
LV Name cinder-volumes
VG Name cinder-volumes
LV UUID wxnyZJ-3BM0-Dnzt-h2Pt-k6qY-1lFG-CEuwfp
LV Write Access read/write
LV Creation host, time node1.local, 2014-07-10 21:59:48 -0700
LV Status available
# open 0
LV Size 4.00 GiB
Current LE 1023
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
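Since lvdisplay shows a 4 GiB logical volume already sitting inside the cinder-volumes group, and the LVM driver carves new volumes out of the group's free extents, it is worth checking how much free space the VG actually has left. A minimal check, assuming the default cinder-volumes VG:
vgs cinder-volumes                        # the VFree column is what cinder can still allocate from
vgdisplay cinder-volumes | grep -i free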
[Rdo-list] Fwd: [Bug 1117871] Could not evaluate: Could not find init script for 'messagebus' - RDO Icehouse AIO on CentOS 7
by Steve Gordon
Is anyone able to take a look? It seems we have an issue on the recently released CentOS 7.
----- Forwarded Message -----
> From: bugzilla(a)redhat.com
> To: sgordon(a)redhat.com
> Sent: Friday, July 11, 2014 12:37:23 PM
> Subject: [Bug 1117871] Could not evaluate: Could not find init script for 'messagebus' - RDO Icehouse AIO on CentOS 7
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1117871
>
> Alejandro Cortina <alitox(a)gmail.com> changed:
>
> What |Removed |Added
> ----------------------------------------------------------------------------
> CC| |alitox(a)gmail.com
>
>
>
> --- Comment #1 from Alejandro Cortina <alitox(a)gmail.com> ---
> I had a different error but I fixed with the same solution provided in:
>
> https://ask.openstack.org/en/question/35705/attempt-of-rdo-aio-install-ic...
>
> "..replace content /etc/redhat-release with "Fedora release 20 (Heisenbug)"
> and
> rerun packstack --allinone. In meantime I have IceHouse AIO Instance on
> CentOS
> 7 completely functional."
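A sketch of that quoted workaround, for anyone who wants to try it (it only fakes the release string so the installer's OS check passes; keep a backup and revert it afterwards):
cp -p /etc/redhat-release /etc/redhat-release.bak
echo 'Fedora release 20 (Heisenbug)' > /etc/redhat-release
packstack --allinone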
>
> Terminal:
>
> ERROR : Error appeared during Puppet run: 192.168.11.19_prescript.pp
> Error: comparison of String with 7 failed at /var/tmp/packstack/2761ac128766421ab10ff27c754a6285/manifests/192.168.11.19_prescript.pp:15 on node stack1.local.lan
> You will find full trace in log /var/tmp/packstack/20140712-012704-8RBDNB/manifests/192.168.11.19_prescript.pp.log
> Please check log file /var/tmp/packstack/20140712-012704-8RBDNB/openstack-setup.log for more information
>
>
> openstack-setup.log:
>
> ...
> tar --dereference -cpzf - apache ceilometer certmonger cinder concat firewall glance heat horizon inifile keystone memcached mongodb mysql neutron nova nssdb openstack packstack qpid rabbitmq rsync ssh stdlib swift sysctl tempest vcsrepo vlan vswitch xinetd | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root(a)192.168.11.19 tar -C /var/tmp/packstack/2761ac128766421ab10ff27c754a6285/modules -xpzf -
> 2014-07-12 01:28:57::ERROR::run_setup::920::root:: Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 915, in main
>     _main(confFile)
>   File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 605, in _main
>     runSequences()
>   File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 584, in runSequences
>     controller.runAllSequences()
>   File "/usr/lib/python2.7/site-packages/packstack/installer/setup_controller.py", line 68, in runAllSequences
>     sequence.run(config=self.CONF, messages=self.MESSAGES)
>   File "/usr/lib/python2.7/site-packages/packstack/installer/core/sequences.py", line 98, in run
>     step.run(config=config, messages=messages)
>   File "/usr/lib/python2.7/site-packages/packstack/installer/core/sequences.py", line 44, in run
>     raise SequenceError(str(ex))
> SequenceError: Error appeared during Puppet run: 192.168.11.19_prescript.pp
> Error: comparison of String with 7 failed at /var/tmp/packstack/2761ac128766421ab10ff27c754a6285/manifests/192.168.11.19_prescript.pp:15 on node stack1.local.lan
> You will find full trace in log /var/tmp/packstack/20140712-012704-8RBDNB/manifests/192.168.11.19_prescript.pp.log
>
> 2014-07-12 01:28:57::INFO::shell::81::root:: [192.168.11.19] Executing script:
> rm -rf /var/tmp/packstack/2761ac128766421ab10ff27c754a6285
> [root@stack1 20140712-012704-8RBDNB]#
>
>
> 192.168.11.19_prescript.pp.log:
>
> Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults
> Error: comparison of String with 7 failed at /var/tmp/packstack/2761ac128766421ab10ff27c754a6285/manifests/192.168.11.19_prescript.pp:15 on node stack1.local.lan
> Wrapped exception:
> comparison of String with 7 failed
> Error: comparison of String with 7 failed at /var/tmp/packstack/2761ac128766421ab10ff27c754a6285/manifests/192.168.11.19_prescript.pp:15 on node stack1.local.lan
>
> --
> You are receiving this mail because:
> You reported the bug.
>
--
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform
[Rdo-list] Need Help: openstack Repos Are Missing
by Chandra Ganguly (ganguly)
Hi Red Hat/OpenStack team,
I am trying to install Foreman and the following RPMs are missing, which is causing the download of my foreman-installer to fail. Can somebody let me know which OpenStack repo I should be enabling now? I am running RHEL 6.5.
[root@foreman-server ~]# subscription-manager repos --enable rhel-6-server-openstack-4.0-rpms
Error: rhel-6-server-openstack-4.0-rpms is not a valid repo ID. Use --list option to see valid repos.
[root@foreman-server ~]# subscription-manager repos --list | grep openstack
[root@foreman-server ~]# yum install openstack-foreman-installer foreman-selinux
Loaded plugins: priorities, product-id, security, subscription-manager
This system is receiving updates from Red Hat Subscription Management.
rhel-6-server-optional-rpms | 3.5 kB 00:00
rhel-6-server-realtime-rpms | 3.8 kB 00:00
rhel-6-server-rpms | 3.7 kB 00:00
rhel-ha-for-rhel-6-server-rpms | 3.7 kB 00:00
rhel-hpn-for-rhel-6-server-rpms | 3.7 kB 00:00
rhel-lb-for-rhel-6-server-rpms | 3.7 kB 00:00
rhel-rs-for-rhel-6-server-rpms | 3.7 kB 00:00
rhel-sap-for-rhel-6-server-rpms | 3.7 kB 00:00
rhel-sap-hana-for-rhel-6-server-rpms | 2.8 kB 00:00
rhel-scalefs-for-rhel-6-server-rpms | 3.7 kB 00:00
rhel-server-6-rhds-9-rpms | 3.1 kB 00:00
rhel-server-dts-6-rpms | 2.9 kB 00:00
rhel-server-dts2-6-rpms | 2.6 kB 00:00
rhel-sjis-for-rhel-6-server-rpms | 3.1 kB 00:00
Setting up Install Process
No package openstack-foreman-installer available.
No package foreman-selinux available.
Error: Nothing to do
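One way to check whether the subscription attached to this box actually carries an OpenStack entitlement, sketched here on the assumption that subscription-manager is managing the channels:
subscription-manager list --consumed | grep -i openstack          # entitlements currently attached
subscription-manager list --available --all | grep -i openstack   # entitlements the account could still attach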
Thanks
Chandra
[Rdo-list] ssh access to a fedora cloud image instance
by Madko
Hi,
I have an almost-working OpenStack platform deployed via Foreman. When I
launch an instance from the Fedora 19 cloud image everything seems fine and
the VM is running on one of my hypervisors, but I can't access it (ping is
OK)...
I'm following this documentation:
http://openstack.redhat.com/Running_an_instance
I only get a permission denied when I do the last part:
ssh -l root -i my_key_pair.pem floating_ip_address
I also tried importing an SSH key, with the same error.
In the VM console I can see the cloud-init service starting inside the VM,
and no errors are shown there. So my question is: where are the logs for
that part (the cloud-init side) in OpenStack? Is the above documentation correct?
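Two things that often help in this situation, sketched on the assumption that the image follows the usual Fedora cloud defaults:
ssh -i my_key_pair.pem fedora@floating_ip_address   # Fedora cloud images typically allow the 'fedora' user rather than root
nova console-log <instance_name_or_id>              # cloud-init writes its progress (including key injection) to the console log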
best regards,
--
Edouard Bourguignon
[Rdo-list] Fwd: Fedora 21 Mass Branching
by Ihar Hrachyshka
To whom it may concern: Fedora 21 has been branched, so from now on any fix
for Icehouse should go to el6-icehouse (EL6) and f21 (the next Fedora
release + EL7).
As for master, Juno should eventually arrive there. Until then, we probably
still want to track Icehouse backports there as well, so as not to leave the
branch without fixes that have reached the other Icehouse branches.
/Ihar
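A rough sketch of what that means for a single package fix, assuming the usual fedpkg workflow and the branch names described above and in the forwarded mail below:
git pull --rebase                   # pick up the new f21 branch
fedpkg switch-branch f21            # apply the Icehouse fix here ...
fedpkg build
fedpkg switch-branch el6-icehouse   # ... and on the EL6 Icehouse branch
fedpkg build
fedpkg switch-branch master         # rawhide no longer inherits, so repeat the change and build there too
fedpkg build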
-------- Original Message --------
Subject: Fedora 21 Mass Branching
Date: Wed, 9 Jul 2014 01:22:30 -0500
From: Dennis Gilmore <dennis(a)ausil.us>
Reply-To: devel(a)lists.fedoraproject.org
To: devel-announce(a)lists.fedoraproject.org
Hi All,
Fedora 21 has been branched, so please be sure to do a git pull --rebase to
pick up the new branch. As an additional reminder, rawhide/f22 has had
inheritance from previous releases cut off, which means that anything you do
for f21 you also have to do in the master branch, with a build there as
well. This is the same as we did for Fedora 19 and 20.
Dennis
_______________________________________________
devel-announce mailing list
devel-announce(a)lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel-announce
[Rdo-list] ERROR while installing RDO (rabbitmq-server)
by Fang, Lance
All,
I'm hoping you can help resolve this. While installing RDO into a single VM, I keep hitting the error below. I'd appreciate any input.
==
10.110.80.62_mysql.pp: [ DONE ]
10.110.80.62_amqp.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 10.110.80.62_amqp.pp
err: /Stage[main]/Rabbitmq::Service/Service[rabbitmq-server]/ensure: change from stopped to running failed: Could not start Service[rabbitmq-server]: Execution of '/sbin/service rabbitmq-server start' returned 1: at /var/tmp/packstack/754293704d5e4f66b3dd8532e8bd0300/modules/rabbitmq/manifests/service.pp:37
You will find full trace in log /var/tmp/packstack/20140702-123959-kJYnai/manifests/10.110.80.62_amqp.pp.log
Please check log file /var/tmp/packstack/20140702-123959-kJYnai/openstack-setup.log for more information
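A few checks that usually narrow this down, sketched here assuming the default RabbitMQ packaging and log locations rather than anything specific to this install:
cat /var/tmp/packstack/20140702-123959-kJYnai/manifests/10.110.80.62_amqp.pp.log   # the log packstack points at
service rabbitmq-server start; echo $?   # try starting the service by hand
tail -50 /var/log/rabbitmq/startup_log   # RabbitMQ's own startup errors (startup_err is also worth a look)
hostname -f                              # RabbitMQ is sensitive to hostname resolution; make sure this resolves cleanly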
-----------------------------------------
Lance K. Fang
Consultant Solutions Engineer
Mobile: (510) 393-6208
------------------------------------------