[Rdo-list] Decrease the memory of the undercloud
by Ashraf Hassan
Hi All,
My undercloud has 48GB of memory, and I am thinking of moving some of that memory to a compute node, leaving the undercloud with 30GB. I plan to do the following:
1. Delete the discovered compute node.
2. Shut down the undercloud as a normal Linux node.
3. Swap the memory.
4. Boot the undercloud.
5. Rediscover the node.
Is that correct? And is 30GB enough for the undercloud?
Thanks,
Ashraf
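The node-lifecycle steps above might look roughly like the following on a Liberty-era undercloud (a sketch only: the node UUID and the instackenv.json path are placeholders, and the exact client commands depend on the installed release):

```shell
# 1. Remove the discovered compute node from Ironic (UUID is a placeholder)
ironic node-delete <compute-node-uuid>

# 2-4. Power the undercloud off, swap the DIMMs, then boot it again
sudo shutdown -h now

# 5. After reboot, re-register and re-introspect the node
source ~/stackrc
openstack baremetal import --json ~/instackenv.json
openstack baremetal introspection bulk start
```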
[Rdo-list] Mitaka 1 test day stats
by Rich Bowen
Many thanks to those that participated in the Mitaka 1 test day.
Several people have written up their experiences in the etherpad at
https://etherpad.openstack.org/p/rdo-test-days-mitaka-m1 If you have
notes that you have not added there, please do so before you forget what
happened.
Last test day I posted some stats from the event. There were fewer
tickets opened this time, during the "official" test window, but people
have mentioned that they were testing the entire week. We had 9 tickets
opened in the week including the test day, with 5 of those on Thursday.
There are also several other tickets referenced in the etherpad that are
either older than this, or are in other ticket trackers upstream.
https://goo.gl/51Dz97
61 names were seen on the IRC channel (I don't believe I collected this
statistic last time), contributing 931 lines of conversation.
--
Rich Bowen - rbowen(a)redhat.com
OpenStack Community Liaison
http://rdoproject.org/
[Rdo-list] Mitaka 2 test day (Please VOTE)
by Rich Bowen
Thanks to everyone that participated in the Mitaka 1 test day.
TLDR; VOTE for the Mitaka 2 test day at
http://doodle.com/poll/f2vm4cxs4nfuxdg2
The date for the Mitaka 2 test day is currently on the calendar as
January 27th, 28th. That is the week of FOSDEM, and some of us will be
in Brussels for the RDO Community Day on January 29th. That makes that
date difficult for those people.
The next week, some of us will be in Brno, Czech Republic, for DevConf,
making that week also difficult.
If we go much later than that, we're getting into Mitaka 3 time.
However, as both events are weekend events, not weekday events, we can
still do either one of these dates, depending on how many people this
works for.
So, to choose the best of two bad options, I'd appreciate it if people
would vote for one of the two dates by going to
http://doodle.com/poll/f2vm4cxs4nfuxdg2 and indicating their preference.
Thanks!
--
Rich Bowen - rbowen(a)redhat.com
OpenStack Community Liaison
http://rdoproject.org/
[Rdo-list] Enabling a Neutron Service Plugins using NeutronServicePlugins
by syed muhammad
Folks,
I am passing parameter_defaults to enable Neutron service plugins via an
environment file as follows:
parameter_defaults:
  NeutronCorePlugin: 'ml2'
  NeutronServicePlugins: "firewall,lbaas"
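An environment file like this is normally passed on top of the stock templates with -e (a sketch; the file name is hypothetical):

```shell
# Deploy the overcloud with the custom Neutron plugin settings layered on
openstack overcloud deploy --templates \
  -e ~/neutron-plugins.yaml
```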
For some reason, after deployment neutron.conf ends up with 'router' (the
default value from puppet/controller.yaml) as the Neutron service plugin.
What am I missing here?
Regards,
Syed Muhammad
[Rdo-list] RDO/OpenStack meetups, week of Dec 14
by Rich Bowen
The following are the meetups I'm aware of in the coming week where
OpenStack and/or RDO enthusiasts are likely to be present. If you know
of others, please let me know, and/or add them to
http://rdoproject.org/Events
If there's a meetup in your area, please consider attending. If you
attend, please consider taking a few photos, and possibly even writing
up a brief summary of what was covered.
--Rich
* Monday December 14 in Guadalajara, MX: Software Defined Storage - What
makes Ceph special - http://www.meetup.com/OpenStack-GDL/events/224852574/
* Tuesday December 15 in Seattle, WA, US: Openstack Seattle Meetup: A
Deep Dive of the OpenStack Security Project -
http://www.meetup.com/OpenStack-Seattle/events/226702941/
* Thursday December 17 in Los Angeles, CA, US: Webinar - Red Hat and
Cisco: Making OpenStack work for the Enterprise -
http://www.meetup.com/Southern-California-Red-Hat-User-Group-RHUG/events/...
* Thursday December 17 in Phoenix, AZ, US: Whiteboarding OpenStack
Architecture - http://www.meetup.com/OpenStack-Phoenix/events/227367393/
* Thursday December 17 in San Francisco, CA, US: SFBay OpenStack
Advanced Track #OSSFO - http://www.meetup.com/openstack/events/224424928/
* Thursday December 17 in Atlanta, GA, US: OpenStack Meetup (Topic TBD)
- http://www.meetup.com/openstack-atlanta/events/226994564/
* Saturday December 19 in Bangalore, IN: OpenStack awareness camp at
Indore - http://www.meetup.com/Indian-OpenStack-User-Group/events/226561165/
* Saturday December 19 in Xian, CN: Docker&OpenStack Meetup -
http://www.meetup.com/Docker-Xian/events/227336534/
* Monday December 21 in Moscow, RU: OpenStack and Containers Meetup +
RCCPA New Year - http://www.meetup.com/OpenStack-Russia/events/227183795/
--
Rich Bowen - rbowen(a)redhat.com
OpenStack Community Liaison
http://rdoproject.org/
[Rdo-list] RDO bloggers: December 14
by Rich Bowen
Here's what RDO enthusiasts were blogging about last week.
Asking Me Questions about Keystone, by Adam Young
As many of you have found out, I am relatively willing to help people
out with Keystone related questions. Here are a couple guidelines.
… read more at http://tm3.org/47
Setting up a local caching proxy for Fedora YUM repositories, by Daniel
Berrange
For my day-to-day development work I currently have four separate
physical servers, one old x86_64 server for file storage, two new x86_64
servers and one new aarch64 server. Even with a fast fibre internet
connection, downloading the never ending stream of Fedora RPM updates
takes non-negligible time. I also have cause to install distro chroots
on a reasonably frequent basis for testing various things related to
containers & virtualization, which involves yet more RPM downloads. So I
decided it was time to investigate the setup of a local caching proxy
for Fedora YUM repositories. I could have figured this out myself, but I
fortunately knew that Matthew Booth had already setup exactly the kind
of system I wanted, and he shared the necessary config steps that are
outlined below.
… read more at http://tm3.org/48
Why cloud-native depends on modernization, by Gordon Haff
This is the fourth in a series of posts that delves deeper into the
questions that IDC’s Mary Johnston Turner and Gary Chen considered in a
recent IDC Analyst Connection. The fourth question asked: What about
existing conventional applications and infrastructure? Is it worth the
time and effort to continue to modernize and upgrade conventional systems?
… read more at http://tm3.org/49
HA for Tripleo, by Adam Young
Juan Antonio Osorio Robles was instrumental in getting TripleO up and
running for me. He sent me the following response, which he’s graciously
allowed me to share with you.
… read more at http://tm3.org/4a
New Neutron testing guidelines!, by Assaf Muller
Yesterday we merged https://review.openstack.org/#/c/245984/ which adds
content to the Neutron testing guidelines:
… read more at http://tm3.org/4b
Rippowam, by Adam Young
Ossipee started off as OS-IPA. As it morphed into a tool for building
development clusters, I realized it was more useful to split the building
of the cluster from the install and configuration of the application on
that cluster. To install IPA and OpenStack, and integrate them together,
we now use an ansible-playbook called Rippowam.
… read more at http://tm3.org/4c
In-use Volume Backups in Cinder, by Gorka Eguileor
Prior to the Liberty release of OpenStack, Cinder backup functionality
was limited to available volumes; but in the latest L release, the
possibility to create backups of in-use volumes was added, so let’s have
a look into how this is done inside Cinder.
… read more at http://tm3.org/4d
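As the post describes, since Liberty an in-use volume can be backed up by passing the --force flag to the backup call (a sketch; the volume ID is a placeholder):

```shell
# Back up a volume that is currently attached ('in-use');
# without --force, only 'available' volumes are accepted.
cinder backup-create --force --name my-backup <volume-id>
```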
--
Rich Bowen - rbowen(a)redhat.com
OpenStack Community Liaison
http://rdoproject.org/
[Rdo-list] OS Liberty + Ceph Hammer: Block Device Mapping is Invalid.
by Kevin
Can someone help me?
Help would be highly appreciated ;-)
Last message on OpenStack mailing list:
Dear OpenStack-users,
I just installed my first multi-node OpenStack setup with Ceph as my storage backend.
After configuring cinder, nova and glance as described in the Ceph-HowTo (http://docs.ceph.com/docs/master/rbd/rbd-openstack/), there remains one blocker for me:
When creating a new instance based on a bootable glance image (same ceph cluster), it fails with:
Dashboard:
> Block Device Mapping is Invalid.
nova-compute.log (http://pastebin.com/bKfEijDu):
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] Traceback (most recent call last):
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1738, in _prep_block_device
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] wait_func=self._await_block_device_map_created)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 476, in attach_block_devices
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] map(_log_and_attach, block_device_mapping)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 474, in _log_and_attach
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] bdm.attach(*attach_args, **attach_kwargs)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 385, in attach
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] self._call_wait_func(context, wait_func, volume_api, vol['id'])
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 344, in _call_wait_func
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] {'volume_id': volume_id, 'exc': exc})
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] six.reraise(self.type_, self.value, self.tb)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 335, in _call_wait_func
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] wait_func(context, volume_id)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1426, in _await_block_device_map_created
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] volume_status=volume_status)
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] VolumeNotCreated: Volume eba9ed20-09b1-44fe-920e-de8b6044500d did not finish being created even after we waited 0 seconds or 1 attempts. And its status is error.
> 2015-12-06 16:44:15.991 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [req-8a7e1c2c-09ea-4c10-acb3-2716e04fe214 051f7eb0c4df40dda84a69d40ee86a48 3c297aff8cb44e618fb88356a2dd836b - - -] [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] Build of instance 83677788-eafc-4d9c-9f38-3cad8030ecd3 aborted: Block Device Mapping is Invalid.
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] Traceback (most recent call last):
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1905, in _do_build_and_run_instance
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] filter_properties)
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2025, in _build_and_run_instance
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] 'create.error', fault=e)
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] six.reraise(self.type_, self.value, self.tb)
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1996, in _build_and_run_instance
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] block_device_mapping) as resources:
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] return self.gen.next()
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2143, in _build_resources
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] reason=e.format_message())
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3] BuildAbortException: Build of instance 83677788-eafc-4d9c-9f38-3cad8030ecd3 aborted: Block Device Mapping is Invalid.
> 2015-12-06 16:44:16.034 2333 ERROR nova.compute.manager [instance: 83677788-eafc-4d9c-9f38-3cad8030ecd3]
Glance seems to work well; I was able to upload images.
Creating non-bootable volumes seems to work, but as soon as I try to create a bootable one, it fails.
This seems related to known threads I found online, but the fix mentioned there was merged long before Liberty, so I am now stuck at this point.
How can I fix this problem?
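Since the traceback shows the volume itself ending up in 'error' state, the cinder-volume log and the Ceph client credentials seem like the first things to check (a sketch; the log path and the pool/user names assume the defaults from the Ceph HowTo linked above):

```shell
# Look for the underlying error on the cinder-volume host
grep ERROR /var/log/cinder/volume.log | tail -n 20

# Verify the 'cinder' Ceph user can reach both pools:
# creating a bootable volume copies the image from the 'images' pool
rbd ls volumes --id cinder
rbd ls images --id cinder
```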
Thanks.
Kind regards
Kevin