[Rdo-list] Physical and virtual resources matching
by Pauline Phaure
Hey,
I would like to know, for OpenStack, what matching should exist between
virtual resources and physical ones. Any ideas? Should we stick to
1 vCPU = 1 physical CPU? What about memory and disk space?
Pauline
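For context, Nova does not enforce a fixed 1:1 mapping; the virtual-to-physical
ratio is set by configurable overcommit options. A minimal nova.conf excerpt
showing the project defaults (assuming Kilo-era option names): CPU can be
oversubscribed 16:1, RAM 1.5:1, and disk 1:1.

[DEFAULT]
cpu_allocation_ratio=16.0
ram_allocation_ratio=1.5
disk_allocation_ratio=1.0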
9 years, 6 months
[Rdo-list] rdo-manager - instack-install-undercloud fail
by Mohammed Arafa
Hi,
I just did a reinstall and it failed: rabbitmq again.
I am attaching the screen grabs. If someone wants the logs, I can send them,
but I expect to get rid of the instack VM later this afternoon if I cannot
get instack to run after fiddling with rabbitmq.
If you want logs again, please specify which logs and the locations you need.
My hosts file, which worked yesterday:
[stack@instack ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 instack.marafa.vm
Pertinent output of openstack-status:
== Support services ==
openvswitch: active
dbus: active
rabbitmq-server: failed (disabled on boot)
memcached: active
== Keystone users ==
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65:
DeprecationWarning: The keystone CLI is deprecated in favor of
python-openstackclient. For a Python library, continue using
python-keystoneclient.
'python-keystoneclient.', DeprecationWarning)
Could not find user: admin (Disable debug mode to suppress these details.)
(HTTP 401) (Request-ID: req-0037d194-7b75-4c9e-a48f-c8ac122a99ff)
== Glance images ==
Could not find user: admin (Disable debug mode to suppress these details.)
(HTTP 401) (Request-ID: req-3fed983d-76b2-43cb-b9ff-b0445e470773)
== Nova managed services ==
ERROR (Unauthorized): Could not find user: admin (Disable debug mode to
suppress these details.) (HTTP 401) (Request-ID:
req-1cafa2c3-fa56-4ea5-804b-4910c0fde9ba)
== Nova networks ==
ERROR (Unauthorized): Could not find user: admin (Disable debug mode to
suppress these details.) (HTTP 401) (Request-ID:
req-3de8f916-c7c6-4e9e-a309-73359261366c)
== Nova instance flavors ==
ERROR (Unauthorized): Could not find user: admin (Disable debug mode to
suppress these details.) (HTTP 401) (Request-ID:
req-6f360f3f-8f0f-4d65-b489-e7050df1fd49)
== Nova instances ==
ERROR (Unauthorized): Could not find user: admin (Disable debug mode to
suppress these details.) (HTTP 401) (Request-ID:
req-37f006f0-d121-4ba5-9663-717e1457d54d)
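One thing worth checking when rabbitmq-server fails like this: RabbitMQ
derives its node name (rabbit@<shortname>) from the machine's short hostname,
so a hosts file that only maps the FQDN can break it. A quick sanity check (a
sketch, assuming systemd and the default node naming):

sudo systemctl status rabbitmq-server
sudo rabbitmqctl status   # fails if rabbit@<shortname> cannot be reached
hostname -s               # the short name RabbitMQ will use; it must resolve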
9 years, 6 months
[Rdo-list] rdo-manager python?
by Mohammed Arafa
So... I edited instack's hosts file to read:
127.0.0.1 instack.domain.tld instack #shortname added
On this pass rabbitmq worked, but then I got to the neutron setup and it hung
at:
+ setup-neutron -n /tmp/tmp.miEe7xK1qL
/usr/lib/python2.7/site-packages/novaclient/v1_1/__init__.py:30:
UserWarning: Module novaclient.v1_1 is deprecated (taken as a basis for
novaclient.v2). The preferable way to get client class or object you can
find in novaclient.client module.
warnings.warn("Module novaclient.v1_1 is deprecated (taken as a basis for
"
The neutron logs were full of this:
2015-04-22 17:14:37.666 10981 DEBUG oslo_messaging._drivers.impl_rabbit [-] Received recoverable error from kombu: on_error /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:789
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit Traceback (most recent call last):
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/utils/__init__.py", line 217, in retry_over_time
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit     return fun(*args, **kwargs)
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 246, in connect
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit     return self.connection
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 761, in connection
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit     self._connection = self._establish_connection()
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 720, in _establish_connection
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit     conn = self.transport.establish_connection()
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 115, in establish_connection
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit     conn = self.Connection(**opts)
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 180, in __init__
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit     (10, 30), # tune
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 67, in wait
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit     self.channel_id, allowed_methods)
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 240, in _wait_method
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit     self.method_reader.read_method()
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/site-packages/amqp/method_framing.py", line 189, in read_method
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit     raise m
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit IOError: Socket closed
2015-04-22 17:14:37.666 10981 TRACE oslo_messaging._drivers.impl_rabbit
2015-04-22 17:14:37.667 10981 ERROR oslo_messaging._drivers.impl_rabbit [-] AMQP server 192.0.2.1:5672 closed the connection. Check login credentials: Socket closed
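The "Check login credentials" part usually means the credentials neutron is
sending do not match what the (freshly reinstalled) broker has configured.
Two hedged checks, assuming default RDO paths:

sudo rabbitmqctl list_users                        # accounts the broker knows
sudo grep -n 'rabbit_' /etc/neutron/neutron.conf   # what neutron is sending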
9 years, 6 months
[Rdo-list] [RDO-Manager] [AHC] allow matching without re-sending to ironic-discoverd
by John Trowbridge
The current AHC workflow[1] requires us to send already-introspected nodes
back to ironic-discoverd if we change the matching rules after the initial
introspection step.
This is problematic because, if we want to match on the benchmark data, the
benchmarks need to be re-run. Currently, the edeploy plugin[2] for
ironic-discoverd does the matching, and it only handles data posted by the
discovery ramdisk. Running the benchmarks can be very time-consuming on a
typical production server, and we already store the results in the Ironic DB.
Since the results should not vary much between runs, re-running the
benchmarks wastes that time.
One solution would be to add a feature to the benchmark analysis tool,
ironic-cardiff,[3] to do the subsequent rounds of matching. This would be
straightforward, as that tool already obtains an Ironic client and already
requires the hardware library, which contains the matching logic.
I would like to gather feedback on whether this approach seems reasonable, or
whether anyone has a better suggestion for solving this problem.
[1]
https://repos.fedorapeople.org/repos/openstack-m/instack-undercloud/html/...
[2]
https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discove...
[3]
https://github.com/rdo-management/rdo-ramdisk-tools/blob/master/rdo_ramdi...
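As a rough sketch of the proposed re-matching flow (illustrative only: the
node field holding the stored facts and the matcher entry point are
assumptions, not confirmed APIs):

from ironicclient import client as ironic_client
from hardware import matcher  # matching logic from the hardware library

# Illustrative undercloud credentials.
ironic = ironic_client.get_client(
    1, os_username='admin', os_password='secret',
    os_tenant_name='admin', os_auth_url='http://192.0.2.1:5000/v2.0')

# An eDeploy-style rule: match nodes with a disk larger than 100 GB.
spec = [('disk', '$disk', 'size', 'gt(100)')]

for node in ironic.node.list(detail=True):
    # Assumes introspection stored the hardware facts on the node;
    # the 'edeploy_facts' field name is hypothetical.
    facts = node.extra.get('edeploy_facts', [])
    if matcher.match_all(facts, spec, {}, {}):
        print('%s matches' % node.uuid)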
9 years, 6 months
[Rdo-list] rdo-manager error running instack-deploy-overcloud --tuskar
by Pedro Sousa
Hi,
I'm testing a virtual deployment. After registering and discovering my
overcloud nodes, I get this error while deploying them:
$ instack-deploy-overcloud --tuskar
The following templates will be written:
tuskar_templates/puppet/manifests/overcloud_volume.pp
tuskar_templates/hieradata/object.yaml
tuskar_templates/puppet/manifests/overcloud_controller.pp
tuskar_templates/puppet/hieradata/common.yaml
tuskar_templates/provider-Swift-Storage-1.yaml
tuskar_templates/provider-Cinder-Storage-1.yaml
tuskar_templates/provider-Compute-1.yaml
tuskar_templates/puppet/bootstrap-config.yaml
tuskar_templates/net-config-bridge.yaml
tuskar_templates/provider-Ceph-Storage-1.yaml
tuskar_templates/puppet/controller-post-puppet.yaml
tuskar_templates/puppet/cinder-storage-puppet.yaml
tuskar_templates/puppet/manifests/overcloud_cephstorage.pp
tuskar_templates/puppet/hieradata/object.yaml
tuskar_templates/puppet/controller-puppet.yaml
tuskar_templates/puppet/cinder-storage-post.yaml
tuskar_templates/puppet/swift-storage-post.yaml
tuskar_templates/provider-Controller-1.yaml
tuskar_templates/puppet/manifests/overcloud_object.pp
tuskar_templates/hieradata/controller.yaml
tuskar_templates/hieradata/volume.yaml
tuskar_templates/puppet/compute-post-puppet.yaml
tuskar_templates/puppet/swift-storage-puppet.yaml
tuskar_templates/puppet/swift-devices-and-proxy-config.yaml
tuskar_templates/puppet/compute-puppet.yaml
tuskar_templates/puppet/hieradata/volume.yaml
tuskar_templates/puppet/ceph-storage-post-puppet.yaml
tuskar_templates/puppet/ceph-storage-puppet.yaml
tuskar_templates/puppet/hieradata/ceph.yaml
tuskar_templates/puppet/hieradata/controller.yaml
tuskar_templates/plan.yaml
tuskar_templates/environment.yaml
tuskar_templates/puppet/all-nodes-config.yaml
tuskar_templates/hieradata/compute.yaml
tuskar_templates/puppet/hieradata/compute.yaml
tuskar_templates/hieradata/ceph.yaml
tuskar_templates/puppet/manifests/overcloud_compute.pp
tuskar_templates/hieradata/common.yaml
tuskar_templates/puppet/manifests/ringbuilder.pp
tuskar_templates/firstboot/userdata_default.yaml
tuskar_templates/net-config-noop.yaml
tuskar_templates/puppet/ceph-cluster-config.yaml
tuskar_templates/extraconfig/post_deploy/default.yaml
+ OVERCLOUD_YAML_PATH=tuskar_templates/plan.yaml
+ ENVIROMENT_YAML_PATH=tuskar_templates/environment.yaml
+ heat stack-create -t 240 -f tuskar_templates/plan.yaml -e tuskar_templates/environment.yaml overcloud
ERROR: Timed out waiting for a reply to message ID
16477b6b6ee04c7fa9f8d7cc45461d8f
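A timeout like this usually means heat-engine never answered the RPC call on
the message bus. A few checks worth running first (a sketch, assuming a stock
RDO undercloud):

sudo systemctl status openstack-heat-engine rabbitmq-server
sudo tail -n 50 /var/log/heat/heat-engine.log   # look for AMQP/startup errors
heat stack-list                                 # was the stack created at all?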
Any hint?
Thanks,
Pedro Sousa
9 years, 6 months
Re: [Rdo-list] [rhos-dev] [RDO-Manager] Rewriting instack scripts into python? Why?
by James Slagle
On Thu, Apr 23, 2015 at 09:30:13AM +0200, Dmitry Tantsur wrote:
> On 04/23/2015 08:06 AM, Jaromir Coufal wrote:
>
> I don't see it as a plus, tbh; there's no point in showing a person details
> he/she doesn't care about.
>
> One particular problem is that the flavor should match the Ironic node data
> (i.e., something introspected automagically), not the real data. And yes,
> they differ: due to Ironic partitioning limitations we have to subtract 1
> from the actual disk size.
>
> That's why I proposed an RFE to create flavors automatically:
> https://bugzilla.redhat.com/show_bug.cgi?id=1214343
os-cloud-config already has the functionality to create flavors based on a
nodes definition. It takes the node definitions from a JSON file, though,
instead of querying Ironic. However, there is also existing code that uses
that same JSON file to register the nodes in Ironic, so I'd think it would be
a simple enhancement to make it query Ironic for the node definitions.
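For reference, the kind of JSON nodes definition in question looks roughly
like this (an abridged, illustrative entry; field names follow the
instackenv.json convention):

{
  "nodes": [
    {
      "pm_type": "pxe_ssh",
      "pm_addr": "192.0.2.2",
      "pm_user": "stack",
      "pm_password": "secret",
      "mac": ["52:54:00:aa:bb:cc"],
      "cpu": "2",
      "memory": "4096",
      "disk": "40",
      "arch": "x86_64"
    }
  ]
}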
Can you take a look and see if this is what you had in mind? I'd propose
driving this feature request in os-cloud-config directly.
--
-- James Slagle
9 years, 6 months
Re: [Rdo-list] [rhos-dev] [RDO-Manager] Rewriting instack scripts into python? Why?
by James Slagle
On Thu, Apr 23, 2015 at 08:06:47AM +0200, Jaromir Coufal wrote:
>
>
> On 23/04/15 04:07, James Slagle wrote:
> >On Wed, Apr 22, 2015 at 01:20:25PM -0500, Jacob Liberman wrote:
> >>
> >>
> >>On 4/22/15 11:46 AM, Ben Nemec wrote:
> >>>>I am very concerned about this single-call action which does all the
> >>>>magic in the background but gives the user zero flexibility. It will
> >>>>neither help nor educate users about the project.
> >>>Our job is not to educate the users about all the implementation details
> >>>of the deployment process. Our job is to write a deployment tool that
> >>>simplifies that process to the point where an ordinary human can
> >>>actually complete it. In theory you could implement the entire
> >>>deployment process in documentation without any code whatsoever, and in
> >>>fact upstream devtest tries to do exactly that:
> >>>http://docs.openstack.org/developer/tripleo-incubator/devtest_overcloud.html
> >>>
> >>>And let me tell you - as someone who has tried to follow those docs -
> >>>it's a horrible user experience. Fortunately we have a tool in
> >>>instack-undercloud that rolls up those 100+ steps from devtest into
> >>>maybe a dozen or so steps that combine the logically related bits into
> >>>single commands. Moving back toward the devtest style is heading in the
> >>>wrong direction IMNSHO.
> >>>
> >>>Does instack-undercloud/rdo-manager need to be more flexible?
> >>>Absolutely. Does that mean we should throw out our existing code and
> >>>convert it to documentation? I don't believe so.
> >>>
> >>
> >>Definitely get input from our field folks on this topic.
> >>
> >>You are basically recapitulating the entire argument against Packstack
> >>(not flexible enough, obscured details) that brought about the Foreman
> >>installer (too many parameters and steps) and then StayPuft (difficult to
> >>automate, etc.).
> >>
> >>Up to this point our field folks have mostly ignored our installers for all
> >>but the simplest POCs.
> >>
> >>My personal belief is that a well-documented and complete CLI is an
> >>absolute necessity to tackle and automate real-world deployments. A big
> >>reason Foreman failed, IMO, is that it did not have Hammer out of the box.
> >>I submitted the original RFE to include Hammer, but it didn't make it in
> >>until Foreman was dead.
> >>
> >>A GUI and unified deployment scripts are nice to have but are not
> >>replacements for complete CLIs + docs.
> >
> >I'm just replying to the thread; I'm not picking on your specific point :-).
> >Your feedback is really good and something that we need to keep in mind.
> >
> >However, this discussion isn't actually about POC vs. production vs.
> >flexibility. That's pretty much a strawman in this discussion, given that
> >the 'openstack flavor' command is already there and will always be there.
> >No one is saying you shouldn't use it, or that we shouldn't document why
> >and how we use flavors, or, for that matter, that any amount of
> >customization via flavors wouldn't eventually be possible.
>
> But this is our primary focus: a production-ready flow, which we should
> test as soon as we can, and it is still not out there. So this discussion
> actually is about that, and also about the user experience of people doing
> production deployments.
>
>
> >There's also going to be a big difference between our end-user
> >documentation that just gets you a repeatable process (what we have now),
> >and advanced workflows we might document for the field or consultants, or
> >hide behind a "not officially supported without RH consultants" banner.
>
> End-user documentation is not what we have now. What we have now is a very
> narrow, restricted flow for people to get started with 1 controller and 1
> compute -- which is just a POC -- with zero knowledge of what is happening
> in the background.
>
>
> >Moreover, the point is that:
> >
> >The shell script we currently have, the proposed Python code, and the
> >proposed documentation change are all 100% equivalent in terms of
> >functionality and flexibility. The proposed documentation change doesn't
> >even offer anything other than replacing 1 command with 6, with no
> >explanation of why or how you might customize them (that's why I say it's
> >worse).
>
> First of all -- the reason why it *has* to run 6 commands is how the
> instack deployment script was written: it requires 3 flavors with a very
> specific name for each role, despite the fact that one of the roles is not
> deployed.
>
> If the user follows the regular way (once we get out of the deployment
> scripts), he would have to create *one* single flavor (2 commands), and
> those commands specifically list which features are being registered with
> the flavor (RAM, disk, vCPUs). So it is not hidden from the user.
>
> This is very important. If we want to improve this flow at all, we should
> suggest flavors to the user and improve the unified CLI.
>
> >Here's really my main point I guess:
> >
> >If it takes 12 (eventually) CLI commands to create flavors, do we expect people
> >to *type* those into their shell? I hope not.
> >
> >Let's suppose we document it that way...in my experience the most likely thing
> >someone would do (especially if they're repeating the process) would be to
> >copy/paste those 12 commands out of the documentation, and write their own
> >shell script/deployment tool to execute them, or just copy/paste them
> >straight into their shell, while perhaps customizing them along the way.
> >
> >That would certainly be a totally valid way to do it.
>
> If the user has a homogeneous environment, he can have just one simple
> flavor. If he has a heterogeneous one, or he wants to get more specific, he
> will take additional actions (create more flavors or deal with eDeploy
> matching).
>
> You are overstating; at the moment the problem with those 6 commands is
> really how the instack scripts are written. To get us out of the scripts
> and replace the flavors script, I had to create three flavors instead of
> one.
>
> >So...why don't we just give them that shell script to start off with?
> >Better yet, let's write something we'd actually like to support long term
> >(in Python) that is just as flexible, perhaps taking JSON (or more likely
> >YAML, tbh) as input, with a nice Python program to log stuff and offer
> >--help along the way. Something that's actually supportable, so we could
> >ask: "What input did you provide to this tool and what was the output?"
> >vs. "How did you customize these 12 commands? Now please go run these
> >other commands so we can figure out what happened."
>
> Yes, let's write something that we support longer term. Didn't we agree it
> is the unified CLI? Didn't we agree we should support and improve it?
> Feeding in another YAML file to create a single flavor? I disagree that
> this is a better user experience.
>
>
> >I'm already seeing developers and external people saying that it's too
> >much as-is and that they're going to write custom tooling to help out. Why
> >wouldn't we just offer that already (especially when 90% of the code is
> >already written upstream), and still keep all the flexibility, as long as
> >we actually document how it works and what commands you'd run to customize
> >it?
>
> Are these developers real-world users in production environments? Ask the
> field guys. The feedback you are getting is from people who are running
> these commands 20 times a day. For them it is a valid point that this is
> too many steps. But that is a *completely* different use case from
> production environments.
>
> And we will not get away from the use case of people writing automation
> scripts. But instack will not help them with that, because they will write
> very specific automation scripts on top of their production environments,
> and they will use our CLI as the basis.
>
> We should focus on production environments first and then simplify for
> POCs, not vice versa. And not on improving the scripts.
This is why this is not a POC vs. production discussion:
- What you've proposed is in no way more flexible or production-ready than
what we have today. You've proposed documentation changes that offer no
explanation of how anything works, why you would change it, or what effect
that might have, and that use only hardcoded values. That is functionally
equivalent to what we already have.
- Rewriting our existing shell script into a more supportable and flexible
tool is what we're getting with the proposed Python change. If you would go
look at the posted code review and at os-cloud-config, you'd in fact see it's
obviously not just for one flavor at a time. Honestly, we're going to do this
regardless; it will be done as upstream improvements (because the code
already exists there). If the choice is made not to consume such a tool in
rdo-manager, that would certainly be a fair choice.
Both of the above points move us neither toward nor away from POC vs.
production. If you'd like to see us move towards what you consider a
production environment, your documentation changes should include the
what/why/how and show the flexibility. I agree this is the right direction to
go, but I disagree that anything currently proposed has a tangible effect on
that.
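For reference, the flavor-creation commands being debated look like the
following (values illustrative, using the Kilo-era unified CLI):

openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 baremetal
openstack flavor set --property "cpu_arch"="x86_64" \
  --property "capabilities:boot_option"="local" baremetal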
--
-- James Slagle
9 years, 6 months