[Rdo-list] Concerning Rabbits
by John Eckersberg
(In the spirit of "Concerning Hobbits")
Ryan O'Hara and I have been investigating RabbitMQ as it pertains to RDO
recently. There has been a lot of discussion on several disparate
threads, so I wanted to try and capture it on the list for the benefit
of everyone.
Ryan has been working on getting RabbitMQ running in a multi-node HA
configuration. I won't steal his thunder, and he can speak to it better
than I can, so I'll defer to him on the details.
As for me, I've been working on el7 support and bug squashing along the
way.
The first bug[1] causes the daemon to start incredibly slowly, or to
fail outright by timing out. This is due to the SELinux policy disallowing
name_bind on ports lower than 32768. RabbitMQ tries to name_bind to a
port starting at 10000, and increments if it fails. So if you have
SELinux in enforcing mode, you'll get 22768 AVC denials in the log
before it finally starts.
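A quick way to confirm you're hitting this one, plus a stopgap until the
policy fix lands (assumes auditd is running; don't leave permissive mode
on):

ausearch -m avc -ts recent | grep name_bind
setenforce 0    # temporary workaround only; setenforce 1 afterwards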
The second bug[2] causes the daemon to intermittently fail to start due
to a race condition in the creation of the erlang cookie file. This
happens only the first time the service starts. Really this is an
Erlang bug, but there's a workaround for the RabbitMQ case.
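If you'd rather not rebuild, pre-seeding the cookie before the first start
should sidestep the race entirely; a minimal sketch (the cookie value here
is just an example, and must match across nodes if you cluster):

echo -n 'EXAMPLECOOKIE' > /var/lib/rabbitmq/.erlang.cookie
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie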
I've submitted patches for both issues. Until those get merged in, I've
rebuilt[3] RabbitMQ for F20, which includes the fixes.
Beyond bugs, I've also built out RabbitMQ and all the build/runtime
dependencies for el7. I have a yum repo[4] on my fedorapeople page
containing all the bits. This is all the stuff that is presently
missing from EPEL7. In time, I would hope the maintainers build all
this stuff, but for now it'll work for testing. You will also need the
EPEL 7 Beta repository[5] enabled.
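To consume the repo, a .repo file along these lines works (the filename
and repo id are arbitrary, and these test builds aren't GPG-signed):

[rabbitmq-el7]
name=RabbitMQ stack rebuilt for el7 (testing)
baseurl=http://jeckersb.fedorapeople.org/rabbitmq-el7/
enabled=1
gpgcheck=0

Save it as e.g. /etc/yum.repos.d/rabbitmq-el7.repo.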
As a side note, I built everything using mock with a local override repo
on my workstation. I've not used copr before but it seems relevant to
this sort of thing, so if it's any benefit I'll look at rebuilding the el7
stack there for easier consumption.
Hopefully this helps get the discussion into one place, and provides a
baseline for further investigation by everyone interested in RabbitMQ.
John.
---
[1] Is really two bugzillas, but the same bug:
[1a] https://bugzilla.redhat.com/show_bug.cgi?id=998682
[1b] https://bugzilla.redhat.com/show_bug.cgi?id=1032595
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1059913
[3] http://jeckersb.fedorapeople.org/rabbitmq-server-3.1.5-3.fc20.noarch.rpm
[4] http://jeckersb.fedorapeople.org/rabbitmq-el7/
[5] http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/
[Rdo-list] Can't Deploy Foreman with openstack-foreman-installer for Bare Metal Provisioning (undefined method `[]' for nil:NilClass)
by Ramon Acedo
Hi all,
I have been trying to test the OpenStack Foreman Installer with different combinations of Foreman versions, installer versions, and even Puppet versions, with no success so far.
I know that Packstack alone works but I want to go all the way with multiple hosts and bare metal provisioning to eventually use it for large deployments and scale out Nova Compute and other services seamlessly.
The error I get when running the foreman_server.sh script is always:
--------------
rake aborted!
undefined method `[]' for nil:NilClass
Tasks: TOP => db:seed
(See full trace by running task with --trace)
--------------
After that, if Foreman starts, there’s nothing in the "Host groups" section, which is supposed to be prepopulated by the foreman_server.sh script (as described in http://red.ht/1jdJ03q).
The process I follow is very simple:
1. Install a clean RHEL 6.5 or CentOS 6.5
2. Enable EPEL
3. Enable the rdo-release repo:
a. rdo-release-havana-7: Foreman 1.3 and openstack-foreman-installer 1.0.6
b. rdo-release-havana-8: Foreman 1.5 and openstack-foreman-installer 1.0.6
c. rdo-release-icehouse-3: Foreman 1.5 and openstack-foreman-installer 2.0 (as a note here, the SCL repo needs to be enabled before the next step too).
4. Install openstack-foreman-installer
5. Create and export the needed variables:
export PROVISIONING_INTERFACE=eth0
export FOREMAN_GATEWAY=192.168.5.100
export FOREMAN_PROVISIONING=true
6. Run the script foreman_server.sh from /usr/share/openstack-foreman-installer/bin (the whole sequence is condensed below)
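Condensed, the whole run (for the 3c case) is roughly:

yum install -y openstack-foreman-installer
export PROVISIONING_INTERFACE=eth0
export FOREMAN_GATEWAY=192.168.5.100
export FOREMAN_PROVISIONING=true
cd /usr/share/openstack-foreman-installer/bin
bash foreman_server.sh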
For 3a and 3b I also tried with an older version of Puppet (3.2) with the same result.
These are the full outputs:
3a: http://fpaste.org/97739/ (Havana and Foreman 1.3)
3b: http://fpaste.org/97760/ (Havana and Foreman 1.3 with Puppet 3.2)
3c: http://fpaste.org/97838/ (Icehouse and Foreman 1.5)
I’m sure somebody in the list has tried to deploy and configure Foreman for bare metal installations (DHCP+PXE) and the documentation and the foreman_server.sh script suggest it should be possible in a fairly easy way.
I filed a bug, as it might well be one, pending confirmation: https://bugzilla.redhat.com/show_bug.cgi?id=1092443
Any help is really appreciated!
Many thanks.
Ramon
[Rdo-list] Open vSwitch issues....
by Erich Weiler
Hi Y'all,
I recently began rebuilding my OpenStack installation under the latest
RDO icehouse release (as of two days ago at least), and everything is
almost working, but I'm having issues with Open vSwitch, at least on the
compute nodes.
I'm using the ML2 plugin and VLAN tenant isolation. I have this in my
compute node's /etc/neutron/plugin.ini file:
----------
[ovs]
bridge_mappings = physnet1:br-eth1
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges = physnet1:200:209
----------
My switchports that the nodes connect to are configured as trunks,
allowing VLANs 200-209 to flow over them.
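For reference, the checks I'm using to confirm the bridge wiring on the
compute node (br-eth1 should contain eth1, plus the patch/veth port the
OVS agent normally creates):

# ovs-vsctl show
# ovs-vsctl list-ports br-eth1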
My network that the VMs should be connecting to is:
# neutron net-show cbse-net
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 23028b15-fb12-4a9f-9fba-02f165a52d44 |
| name                      | cbse-net                             |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 200                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | dd25433a-b21d-475d-91e4-156b00f25047 |
| tenant_id                 | 7c1980078e044cb08250f628cbe73d29     |
+---------------------------+--------------------------------------+
# neutron subnet-show dd25433a-b21d-475d-91e4-156b00f25047
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "10.200.0.2", "end": "10.200.255.254"} |
| cidr             | 10.200.0.0/16                                    |
| dns_nameservers  | 121.43.52.1                                      |
| enable_dhcp      | True                                             |
| gateway_ip       | 10.200.0.1                                       |
| host_routes      |                                                  |
| id               | dd25433a-b21d-475d-91e4-156b00f25047             |
| ip_version       | 4                                                |
| name             |                                                  |
| network_id       | 23028b15-fb12-4a9f-9fba-02f165a52d44             |
| tenant_id        | 7c1980078e044cb08250f628cbe73d29                 |
+------------------+--------------------------------------------------+
So those VMs on that network should send packets that would be tagged
with VLAN 200.
I launch an instance, then look at the compute node with the instance on
it. It doesn't get a DHCP address, so it can't talk to the neutron node
with the dnsmasq server running on it. I configure the VM's interface
with a static IP on VLAN 200 (10.200.0.30, netmask 255.255.0.0). I
have another node set up on VLAN 200 on my switch to test with
(10.200.0.50) that is a real bare-metal server.
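(Setting the static IP inside the VM is just the usual:

ip addr add 10.200.0.30/16 dev eth0
ip link set eth0 up

assuming the guest NIC shows up as eth0.)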
I can't ping my bare-metal server. I see the packets getting to eth1 on
my compute node, but stopping there. Then I figure out that the packets
are *not being tagged* for VLAN 200 as they leave the compute node!! So
the switch is dropping them. As a test I configure the switchport
with "native vlan 200", and voila, the ping works.
So, Open vSwitch is not getting that it needs to tag the packets for
VLAN 200. A little diagnostics on the compute node:
ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=966.803s, table=0, n_packets=0, n_bytes=0,
idle_age=966, priority=0 actions=NORMAL
Shouldn't that show some VLAN tagging?
and a tcpdump on eth1 on the compute node:
# tcpdump -e -n -vv -i eth1 | grep -i arp
tcpdump: WARNING: eth1: no IPv4 address assigned
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size
65535 bytes
11:21:50.462447 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806),
length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50
tell 10.200.0.30, length 28
11:21:51.462968 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806),
length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50
tell 10.200.0.30, length 28
11:21:52.462330 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806),
length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50
tell 10.200.0.30, length 28
11:21:53.462311 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806),
length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50
tell 10.200.0.30, length 28
11:21:54.463169 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806),
length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50
tell 10.200.0.30, length 28
That tcpdump also confirms the ARP packets are not being tagged 200 as
they leave the physical interface.
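For anyone chasing the same thing, the other places I'm looking (the log
path is the RDO default; adjust if yours differs):

# ovs-ofctl dump-flows br-eth1
# ovs-vsctl show
# grep -iE 'error|warn' /var/log/neutron/openvswitch-agent.log

On a working setup I'd expect to see mod_vlan_vid flows rewriting the
internal VLAN to 200 on the way out of br-eth1, not just the single
NORMAL action.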
This worked before when I was testing Icehouse RC1; I don't know what
changed with Open vSwitch... Anyone have any ideas?
Thanks as always for the help!! This list has been very helpful.
cheers,
erich
[Rdo-list] Glance problems...
by Erich Weiler
Hi Y'all,
I was able to set up RDO OpenStack just fine with Icehouse RC1, and then
I wiped it out and am trying again with the official stable release
(2014.1) and am having weird problems. It seems there were many changes
between this and RC1 unless I'm mistaken.
The main issue I'm having now is that I can't seem to create the glance
database properly, and I was able to do this before no problem. I do:
$ mysql -u root -p
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
(Obviously 'GLANCE_DBPASS' is replaced with the real password).
Then:
su -s /bin/sh -c "glance-manage db_sync" glance
And it creates only one table in the 'glance' database,
"migrate_version". I can't get it to create the rest of the tables it
needs. I've also tried:
openstack-db --init --service glance --password GLANCE_DBPASS
And that returned success but in reality nothing happened... Any idea
what's going on?
In the api.conf and registry.conf the correct database credentials are
listed, and I can connect to the database as the mysql glance user on
the command line just fine using those credentials.
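For anyone reproducing this, the checks I'm using to see what db_sync
actually did (nothing exotic here):

su -s /bin/sh -c "glance-manage db_version" glance
mysql -u glance -p -e 'SHOW TABLES;' glance

The second command is what shows only migrate_version.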
When I run any glance commands I get this in the registry log:
ProgrammingError: (ProgrammingError) (1146, "Table 'glance.images'
doesn't exist") 'SELECT anon_1.anon_2_images_created_at AS
anon_1_anon_2_images_created_at, anon_1.anon_2_images_updated_at AS
anon_1_anon_2_images_updated_at, anon_1.anon_2_images_deleted_at AS
anon_1_anon_2_images_deleted_at, anon_1.anon_2_images_deleted AS
anon_1_anon_2_images_deleted, anon_1.anon_2_images_id AS
anon_1_anon_2_images_id, anon_1.anon_2_images_name AS
anon_1_anon_2_images_name, anon_1.anon_2_images_disk_format AS
anon_1_anon_2_images_disk_format, anon_1.anon_2_images_container_format
AS anon_1_anon_2_images_container_format, anon_1.anon_2_images_size AS
anon_1_anon_2_images_size, anon_1.anon_2_images_virtual_size AS
anon_1_anon_2_images_virtual_size, anon_1.anon_2_images_status AS
anon_1_anon_2_images_status, anon_1.anon_2_images_is_public AS
anon_1_anon_2_images_is_public, anon_1.anon_2_images_checksum AS
anon_1_anon_2_images_checksum, anon_1.anon_2_images_min_disk AS
anon_1_anon_2_images_min_disk, anon_1.anon_2_images_min_ram AS
anon_1_anon_2_images_min_ram, anon_1.anon_2_images_owner AS
anon_1_anon_2_images_owner, anon_1.anon_2_images_protected AS
anon_1_anon_2_images_protected, image_properties_1.created_at AS
image_properties_1_created_at, image_properties_1.updated_at AS
image_properties_1_updated_at, image_properties_1.deleted_at AS
image_properties_1_deleted_at, image_properties_1.deleted AS
image_properties_1_deleted, image_properties_1.id AS
image_properties_1_id, image_properties_1.image_id AS
image_properties_1_image_id, image_properties_1.name AS
image_properties_1_name, image_properties_1.value AS
image_properties_1_value, image_locations_1.created_at AS
image_locations_1_created_at, image_locations_1.updated_at AS
image_locations_1_updated_at, image_locations_1.deleted_at AS
image_locations_1_deleted_at, image_locations_1.deleted AS
image_locations_1_deleted, image_locations_1.id AS image_locations_1_id,
image_locations_1.image_id AS image_locations_1_image_id,
image_locations_1.value AS image_locations_1_value,
image_locations_1.meta_data AS image_locations_1_meta_data,
image_locations_1.status AS image_locations_1_status \nFROM (SELECT
anon_2.images_created_at AS anon_2_images_created_at,
anon_2.images_updated_at AS anon_2_images_updated_at,
anon_2.images_deleted_at AS anon_2_images_deleted_at,
anon_2.images_deleted AS anon_2_images_deleted, anon_2.images_id AS
anon_2_images_id, anon_2.images_name AS anon_2_images_name,
anon_2.images_disk_format AS anon_2_images_disk_format,
anon_2.images_container_format AS anon_2_images_container_format,
anon_2.images_size AS anon_2_images_size, anon_2.images_virtual_size AS
anon_2_images_virtual_size, anon_2.images_status AS
anon_2_images_status, anon_2.images_is_public AS
anon_2_images_is_public, anon_2.images_checksum AS
anon_2_images_checksum, anon_2.images_min_disk AS
anon_2_images_min_disk, anon_2.images_min_ram AS anon_2_images_min_ram,
anon_2.images_owner AS anon_2_images_owner, anon_2.images_protected AS
anon_2_images_protected \nFROM (SELECT images.created_at AS
images_created_at, images.updated_at AS images_updated_at,
images.deleted_at AS images_deleted_at, images.deleted AS
images_deleted, images.id AS images_id, images.name AS images_name,
images.disk_format AS images_disk_format, images.container_format AS
images_container_format, images.size AS images_size, images.virtual_size
AS images_virtual_size, images.status AS images_status, images.is_public
AS images_is_public, images.checksum AS images_checksum, images.min_disk
AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS
images_owner, images.protected AS images_protected \nFROM images \nWHERE
images.deleted = %s AND images.status IN (%s, %s, %s, %s, %s) AND
images.is_public = %s UNION SELECT images.created_at AS
images_created_at, images.updated_at AS images_updated_at,
images.deleted_at AS images_deleted_at, images.deleted AS
images_deleted, images.id AS images_id, images.name AS images_name,
images.disk_format AS images_disk_format, images.container_format AS
images_container_format, images.size AS images_size, images.virtual_size
AS images_virtual_size, images.status AS images_status, images.is_public
AS images_is_public, images.checksum AS images_checksum, images.min_disk
AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS
images_owner, images.protected AS images_protected \nFROM images \nWHERE
images.owner = %s AND images.deleted = %s AND images.status IN (%s, %s,
%s, %s, %s) UNION SELECT images.created_at AS images_created_at,
images.updated_at AS images_updated_at, images.deleted_at AS
images_deleted_at, images.deleted AS images_deleted, images.id AS
images_id, images.name AS images_name, images.disk_format AS
images_disk_format, images.container_format AS images_container_format,
images.size AS images_size, images.virtual_size AS images_virtual_size,
images.status AS images_status, images.is_public AS images_is_public,
images.checksum AS images_checksum, images.min_disk AS images_min_disk,
images.min_ram AS images_min_ram, images.owner AS images_owner,
images.protected AS images_protected \nFROM images INNER JOIN
image_members ON images.id = image_members.image_id \nWHERE
images.deleted = %s AND images.status IN (%s, %s, %s, %s, %s) AND
image_members.deleted = %s AND image_members.member = %s) AS anon_2
ORDER BY anon_2.images_name ASC, anon_2.images_created_at ASC,
anon_2.images_id ASC \n LIMIT %s) AS anon_1 LEFT OUTER JOIN
image_properties AS image_properties_1 ON anon_1.anon_2_images_id =
image_properties_1.image_id LEFT OUTER JOIN image_locations AS
image_locations_1 ON anon_1.anon_2_images_id =
image_locations_1.image_id ORDER BY anon_1.anon_2_images_name ASC,
anon_1.anon_2_images_created_at ASC, anon_1.anon_2_images_id ASC' (0,
'active', 'saving', 'queued', 'pending_delete', 'deleted', 1,
'7c1980078e044cb08250f628cbe73d29', 0, 'active', 'saving', 'queued',
'pending_delete', 'deleted', 0, 'active', 'saving', 'queued',
'pending_delete', 'deleted', 0, '7c1980078e044cb08250f628cbe73d29', 20)
Sure enough, all the rest of the tables are missing from MySQL, so it
complains.
Also, I tried this:
keystone user-create --name=glance --pass=GLANCE_PASS --tenant=service
--email=glance@myco.com
exceptions must be old-style classes or derived from BaseException, not
NoneType (HTTP 400)
Creating the glance user was easy last time; now it doesn't work... Any
insight would be greatly appreciated!!
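(One more data point I can gather: running the same command with --debug,
e.g.

keystone --debug user-create --name=glance --pass=GLANCE_PASS \
  --tenant=service --email=glance@myco.com

dumps the raw request/response, which might show where the 400
originates.)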
cheers,
erich
[Rdo-list] unsubscribe
by John_Ingle@dell.com
Dell - Internal Use - Confidential
John Ingle
Onsite Systems Engineer
Dell | Enterprise Solutions Group
Cell - +1 512-431-8567, Office - +1 512-728-5452
[Rdo-list] Running RDO IceHouse on EC2/Google
by Geert Jansen
Hi,
I thought people might find this interesting. I wrote up a blog post
on how you can run a multi-node OpenStack IceHouse setup on EC2 or
GCE. This includes nested virtualization so your instances run full
speed (private beta at the moment; note: no QEMU software emulation!) and private
tenant networks using VLANs.
This could be useful for trying out OpenStack, or for quickly
launching new installs for development and test. When an install is
done, the entire environment can be snapshotted.
http://www.ravellosystems.com/blog/multi-node-openstack-rdo-icehouse-aws-...
Feedback is welcome.
Regards,
Geert Jansen
[Rdo-list] Cannot log into Foreman.
by Minton, Rich
I’m not having much luck logging into the Foreman Web UI. After installing Foreman using the procedures on the RDO website I cannot log in as admin using the default password. I constantly get “Incorrect username or password”.
The Foreman version is 1.5.0-RC2
The “production.log” in /var/log/foreman contains these lines after I attempt to log in:
Started POST "/users/login" for 10.0.64.100 at 2014-04-29 14:15:03 -0400
Processing by UsersController#login as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"Cv76I6hPvlIYcC71Lnfnsp2JBVZHZQqNII5BbNs+PHI=", "login"=>{"login"=>"admin", "password"=>"[FILTERED]"}, "commit"=>"Login"}
invalid user
Redirected to https://foreman-test-1/users/login
Completed 302 Found in 238ms (ActiveRecord: 8.3ms)
I also get this a lot… not sure if it is related.
Started GET "/node/foreman-test-1.umtd-du.lmdit.us.lmco.com?format=yml" for 10.0.64.100 at 2014-04-29 14:04:09 -0400
Processing by HostsController#externalNodes as YML
Parameters: {"name"=>"foreman-test-1.umtd-du.lmdit.us.lmco.com"}
No smart proxy server found on ["foreman-test-1.umtd-du.lmdit.us.lmco.com"] and is not in trusted_puppetmaster_hosts
Redirected to https://foreman-test-1.umtd-du.lmdit.us.lmco.com/users/login
Filter chain halted as :require_puppetmaster_or_login rendered or redirected
Completed 403 Forbidden in 5ms (ActiveRecord: 0.7ms)
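One thing I may try next: resetting the admin password from the CLI. If I remember right, recent Foreman versions ship a rake task for this (please double-check it exists in 1.5):

foreman-rake permissions:reset

which should print a fresh admin password if the task is available.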
All ideas are welcome.
Thank you,
Rick
Richard Minton
Lockheed Martin - D&IS
LMICC Systems Administrator
4000 Geerdes Blvd, 13D31
King of Prussia, PA 19406
Phone: 610-354-5482
[Rdo-list] Community sync meeting, April 29
by Rich Bowen
We had a very quick community sync meeting this morning on the #RDO IRC
channel on Freenode. I didn't do the minutes correctly, so here's a
summary. (Full log at
http://meetbot.fedoraproject.org/rdo/2014-04-29/rdo_community_irc_meeting...
)
* Icehouse
We have the RDO packages this morning, which is awesome:
https://www.redhat.com/archives/rdo-list/2014-April/msg00105.html
mburned mentioned that we would have live images by Summit, but we won't
have any physical media to hand out for that.
But we will have the bookmarks, which link to the QuickStart, and we can
put the image there. I note that someone has already updated the
QuickStart page to point to Icehouse - thanks for that.
* Hangout
We have a Heat hangout today.
Steve Baker is doing one first thing in the morning tomorrow, which is
5pm my time today.
https://plus.google.com/u/1/events/ckhqrki6iepg12vkqk5vnt7ijd0
And I talked with Hugh about doing one on TripleO and StayPuft next
month, and we decided that we really want to focus on TripleO.
So sometime in late May we'll do that. Date to be announced soon.
* Newsletter
The May newsletter is almost ready to go out. If you have anything you'd
like to get in it, please contact me ASAP.
* OpenStack Summit
Lots of us will be at the Summit. Hopefully I'll have RDO polos for
everyone who needs one. If you're going to be there and haven't
requested an RDO polo, please let me know as soon as you can, since that
parcel is going out pretty soon.
I'm hoping to do a better job of collecting user stories this time. I'll
have my portable mic, and hopefully get a few recorded. If you
hear any cool user stories at Summit, please send them my way so that I
can get them recorded.
I'd also like to do more of the engineer interviews I'm always intending
to do.
With so many of us there I should be able to get a few anyway.
And then the next week, I'm going to be at LinuxCon Tokyo, and red_trela
will be there too, I believe.
* Bug Triage
Been chugging along at it slowly. Current stats as of today:
http://kashyapc.fedorapeople.org/virt/openstack/bugzilla/rdo-bug-status/a...
The above URL has bi-weekly stats. For Grizzly RDO bugs, we'd follow a
Fedora-EOL-style approach, i.e. we'd request on Bugzilla to retest with
the latest RDO Icehouse bits and re-open if the bug still persists.
To borrow pixelb's wording: once N+2 is released (Icehouse), we EOL N
(Grizzly).
Maybe we can organize community bug triage days after the next test
day, or earlier depending on the bug stream.
* ask.openstack.org
We're doing an awesome job keeping up with questions on the
ask.openstack site so far.
Last night's count was 19, which is the lowest it's ever been.
I've started looking at other keywords, too, and we're doing pretty well
there, too.
* CentOS Cloud SIG
MBurns pinged on it a week or two ago but didn't hear anything back;
he'll follow up again this week.
* Social Media
Rikki Endsley has been pushing the @RDOCommunity Twitter account recently, and
we almost doubled our followers in the last two weeks. (431 followers
this morning)
There is also a Google Plus group that we have recently started paying
more attention to, which is at http://tm3.org/rdogplus
* End Meeting
--
Rich Bowen - rbowen@rcbowen.com - @rbowen
http://apachecon.com/ - @apachecon
[Rdo-list] [package announce] Icehouse GA
by Pádraig Brady
The full Icehouse package set is now available in the RDO repos
for el6, el7, and Fedora 20 distros and derivatives.
Instructions to get started with these repos are at:
http://openstack.redhat.com/QuickStart
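(For the impatient: the QuickStart essentially boils down to enabling
the RDO repo and running Packstack, e.g.

yum install -y openstack-packstack
packstack --allinone

but see the page above for the current, exact steps.)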
In this release we have:
openstack-ceilometer-2014.1
openstack-cinder-2014.1
openstack-glance-2014.1
openstack-heat-2014.1
openstack-keystone-2014.1
openstack-neutron-2014.1
openstack-nova-2014.1
openstack-sahara-2014.1
openstack-trove-2014.1
openstack-utils-2014.1
python-django-horizon-2014.1
python-django-sahara-2014.1
also this set of client packages:
python-ceilometerclient-1.0.8-1
python-cinderclient-1.0.8-1
python-glanceclient-0.12.0-1
python-heatclient-0.2.9-1
python-keystoneclient-0.8.0-1
python-neutronclient-2.3.4-1
python-novaclient-2.17.0-1
python-openstackclient-0.3.1-1
python-saharaclient-0.7.0-1
python-swiftclient-2.0.3-1
python-troveclient-1.0.3-3
In the Fedora 20 repo (initially) we also have these newer incubated projects:
openstack-tuskar-0.3.0
openstack-tuskar-ui-0.1.0
python-tuskarclient-0.1.4
openstack-tripleo-0.0.2
openstack-ironic-2014.1
python-ironicclient-0.1.2
thanks,
Pádraig.