[Rdo-list] Fwd: 50% discount on passes to Cloud Connect China in Shanghai, September 16-18
by Rich Bowen
For anyone in the Shanghai area ...
-------- Original Message --------
Subject: [openstack-community] 50% discount on passes to Cloud Connect
China in Shanghai, September 16-18
Date: Mon, 7 Jul 2014 09:22:16 -0500 (CDT)
From: Kathy Cacciatore <kathyc(a)openstack.org>
To: community(a)lists.openstack.org, marketing(a)lists.openstack.org
Cloud Connect China <http://www.cloudconnectevent.cn/> is offering
OpenStack community members and their clients and prospects a 50%
discount on conference passes. It is limited to the first 50 people and
must be used by July 31. Feel free to pass this on to other OpenStack
people who may be interested in attending.
OpenStack is sponsoring a half-day workshop on Monday, September 16,
given by leading community members in China. Tom Fifield, OpenStack
Community Manager, is a conference advisor and will also be attending.
Visit www.cloudconnectevent.cn/registration/registration_en.php
<http://www.cloudconnectevent.cn/registration/registration_en.php>, and
register for the desired package using registration code *CLOU14XP8ND*.
Here are the packages with pre-discount prices. Note that a VIP Pass
will be under $400! Thank you.
--
Regards,
Kathy Cacciatore
OpenStack Industry Event Planner
1-512-970-2807 (mobile)
Part time: Monday - Thursday, 9am - 2pm US CT
kathyc(a)openstack.org
[Rdo-list] Red Hat OpenStack Evaluation installation
by Lodgen, Brad
I have a quick question, as I may be misunderstanding the intention of the RHOS product and the installation/configuration guide. I'm using an evaluation, so I can't open tickets or I'll get forwarded to this mailing list. I've had a considerable number of issues installing and getting RHOS running in an initial "let's get started doing actual OpenStack tasks" kind of state.
Is the RHOS product meant to be installable and running without going through the manual installation section of the installation/configuration guide? Or are you still expected to work through the entire manual installation section? I ask because there are integral parts that are not discussed anywhere else; storage implementation, for example, is only covered in the manual installation section. And even there, the documentation basically says you can't rely solely on the Foreman host groups to set up storage, as some manual steps remain. Can someone shed some light on the product's intentions and how far it goes in setting up OpenStack for you?
[Rdo-list] [Rdo-newsletter] July 2014 RDO Community Newsletter
by Rich Bowen
With the first milestone behind us this month, and the second
one coming up fast - https://wiki.openstack.org/wiki/Juno_Release_Schedule -
the Juno cycle seems to be speeding past. Here's some
of what's happened in June, and what's coming in July.
Hangouts:
On June 6, Hugh Brock and the TripleO team talked about what's
planned for OpenStack TripleO (the OpenStack deployment tool) in a
Google Hangout. You can watch that at
https://www.youtube.com/watch?v=ol5LuedIWBw
On July 9, 15:00 UTC (11 am Eastern US time) Eoghan Glynn will be
leading a Google Hangout in which he'll discuss what's new in
Ceilometer in OpenStack Icehouse, and what's coming in Juno. Sign up to
attend that event at
https://plus.google.com/events/c6e8vjjn8klrf78ruhkr95j4tas
Conferences:
In July, RDO will have a presence at OSCON, July 20-24, in
Portland, Oregon, both in the Red Hat booth and in the Cloud track -
http://www.oscon.com/oscon2014/public/schedule/topic/1113 If you're
going to be at OSCON, drop by to say hi.
In early August, the Flock conference will be held in Prague, Czech
Republic - http://flocktofedora.com/ (August 6-9). In addition to all of
the great Fedora content, Kashyap Chamarthy will be speaking about
deploying OpenStack on Fedora. - http://sched.co/1kI1BWf
Although the OpenStack Summit is still a few months away, be sure it's
on your calendar. The summit will be held in Paris, November 3-7. More
information and registration will be available in the next month or two.
Blog posts:
This month's blog posts from the RDO community range from the technical to the
philosophical. If you want to see the latest posts from the RDO
community, you can follow at http://planet.rdoproject.org/
* Mark McLoughlin - An ideal openstack developer -
http://blogs.gnome.org/markmc/2014/06/06/an-ideal-openstack-developer/
* Liz Blanchard - Moving forward as a User Experience Team in the
OpenStack Juno release cycle -
http://uxd-stackabledesign.rhcloud.com/moving-forward-user-experience-tea...
* Rich Bowen - Red Hat at the OpenStack Summit (recordings) -
http://drbacchus.com/red-hat-at-the-openstack-summit
* Adam Young - Why POpen for OpenSSL calls -
http://adam.younglogic.com/2014/06/why-popen-for-openssl-calls/
* Flavio Percoco - Marconi to AMQP: See you later -
http://blog.flaper87.com/post/53a09586d987d23f49c777bf/
* Kashyap Chamarthy - On bug reporting. . .
http://kashyapc.com/2014/06/22/on-bug-reporting/
eNovance Acquisition:
The biggest news in the RDO world this month was Red Hat's
acquisition of eNovance:
http://www.redhat.com/about/news/press-archive/2014/6/red-hat-to-acquire-...
eNovance's engineers are prolific contributors to the OpenStack
upstream and respected names in the OpenStack community. eNovance
ranks 9th, by number of contributions, on the list of
organizations contributing to the OpenStack code:
http://activity.openstack.org/dash/browser/scm-companies.html
Stay in Touch:
The best ways to keep up with what's going on in the RDO community
are:
* Follow us on Twitter - http://twitter.com/rdocommunity
* Google+ - http://tm3.org/rdogplus
* rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list
* This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter
* RDO Q&A - http://ask.openstack.org/
Thanks again for being part of the RDO community!
--
Rich Bowen, OpenStack Community Liaison
rbowen(a)redhat.com
http://openstack.redhat.com
_______________________________________________
Rdo-newsletter mailing list
Rdo-newsletter(a)redhat.com
https://www.redhat.com/mailman/listinfo/rdo-newsletter
Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List
by Lodgen, Brad
I think I figured it out.
I started looking into the saslauthd configuration, and it looked right, so I checked the service status: it was off. I then checked the chkconfig status of saslauthd, and it was off for all init levels. I ran "/etc/init.d/saslauthd start" and it came up, so I changed "auth=no" back to "auth=yes" in /etc/qpidd.conf and restarted the qpidd service while tailing /var/log/nova/compute.log on my compute node. It logged two failure notices immediately, but right after that reported successful communication. All my hypervisors now show in the dashboard, and communication in the logs still looks good.
I guess for some reason the Foreman controller host group doesn't start the saslauthd service or enable it via chkconfig?
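For anyone hitting the same symptoms, the recovery described above can be sketched as shell commands. This is a sketch based on the thread, not a verified procedure; the service-management commands are shown as comments (they need root on a RHEL 6 controller), and only the qpidd.conf edit is demonstrated live, on a scratch copy:

```shell
# Steps from the message above (run as root on the controller):
#   chkconfig saslauthd on && service saslauthd start   # enable + start the SASL auth daemon
#   service qpidd restart                               # restart the broker with auth on
#
# The /etc/qpidd.conf edit itself, demonstrated on a scratch copy:
printf 'port=5672\nauth=no\nrealm=QPID\n' > /tmp/qpidd.conf.demo
sed -i 's/^auth=no$/auth=yes/' /tmp/qpidd.conf.demo
grep '^auth=' /tmp/qpidd.conf.demo   # prints: auth=yes
```

After the restart, tailing /var/log/nova/compute.log on a compute node (as described above) should show a couple of failures followed by a successful reconnect.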
-----Original Message-----
From: Lodgen, Brad
Sent: Thursday, July 03, 2014 11:52 AM
To: 'Rhys Oxenham'
Cc: 'rdo-list(a)redhat.com'
Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List
Well, it had the same result with the second compute node I brought up which was a fresh system with RHEL6.5/RHOS package updates.
I checked the nova.conf on controller and both compute nodes. All the same configuration, username, passwords, everything.
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname={controller node private IP}
qpid_port=5672
#qpid_hosts=$qpid_hostname:$qpid_port
qpid_username={same username}
qpid_password={same password}
#qpid_sasl_mechanisms=
qpid_heartbeat=60
qpid_protocol=tcp
qpid_tcp_nodelay=True
#qpid_topology_version=1
Should the qpid client be installed on the compute nodes? This page notes that when doing it manually, it should be (https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_Op...), but as you can see below, with Foreman host group deployment it IS installed on the controller and IS NOT on the compute nodes.
[root@ctlr ~]# yum list installed | grep qpid
This system is receiving updates from Red Hat Subscription Management.
python-qpid.noarch 0.14-11.el6_3 @rhel-6-server-rpms
qpid-cpp-client.x86_64 0.14-22.el6_3 @rhel-6-server-rpms
qpid-cpp-server.x86_64 0.14-22.el6_3 @rhel-6-server-rpms
[root@comp1 ~]# yum list installed | grep qpid
This system is receiving updates from Red Hat Subscription Management.
python-qpid.noarch 0.14-11.el6_3 @rhel-6-server-rpms
[root@comp2 ~]# yum list installed | grep qpid
This system is receiving updates from Red Hat Subscription Management.
python-qpid.noarch 0.14-11.el6_3 @rhel-6-server-rpms
-----Original Message-----
From: Rhys Oxenham [mailto:roxenham@redhat.com]
Sent: Thursday, July 03, 2014 11:22 AM
To: Lodgen, Brad
Cc: rdo-list(a)redhat.com
Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List
OK looks like a bug somewhere... if qpid auth is enabled it requires the authentication mechanism to be completed properly.
See: http://qpid.apache.org/releases/qpid-0.14/books/AMQP-Messaging-Broker-CPP...
From looking at puppet-qpid it should have done this for you.
Have you been able to reproduce this issue on a clean system?
Cheers
Rhys
On 3 Jul 2014, at 17:16, Lodgen, Brad <Brad.Lodgen(a)centurylink.com> wrote:
> No worries. I understand you're busy and thank you for the assistance. To answer your question: yes, following the change and qpidd restart, the logs showed successful communication and the initial compute node showed up as a hypervisor in the dashboard. I also successfully added a second compute node. Success here relies upon disabling the puppet agent so it doesn't change auth back to yes; otherwise, communication fails.
>
>
> -----Original Message-----
> From: Rhys Oxenham [mailto:roxenham@redhat.com]
> Sent: Thursday, July 03, 2014 11:13 AM
> To: Lodgen, Brad
> Cc: rdo-list(a)redhat.com
> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup:
> First Compute Node Doesn't Show Up in Hypervisor List
>
> Sorry I didn't respond to this... I have auth set to no in my environment, but that's just for testing. Do things work when auth is set to no and the service is restarted?
>
> On 3 Jul 2014, at 17:11, Lodgen, Brad <Brad.Lodgen(a)centurylink.com> wrote:
>
>> Follow-up from yesterday... is this the same default in RDO, to have qpidd.conf default to auth=yes?
>>
>> Does that mean I have something on the compute side misconfigured? It looks to me like the username/password is the same on the controller/compute.
>>
>> For now, I've had to disable puppet agent on the controller, as it keeps resetting "auth=no" back to "auth=yes", and I don't see a host group parameter that would change that.
>>
>>
>>
>>
>> -----Original Message-----
>> From: Lodgen, Brad
>> Sent: Wednesday, July 02, 2014 2:01 PM
>> To: 'Rhys Oxenham'
>> Cc: 'rdo-list(a)redhat.com'
>> Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup:
>> First Compute Node Doesn't Show Up in Hypervisor List
>>
>> So even though the default controller puppet module configures qpidd.conf to say "auth=yes", I changed it to "auth=no" and restarted qpidd service. I now see my compute host in the dashboard.
>>
>> Is that a misconfiguration in RHOSv4 that I should submit for a change somewhere?
>>
>>
>>
>> -----Original Message-----
>> From: Lodgen, Brad
>> Sent: Wednesday, July 02, 2014 12:53 PM
>> To: 'Rhys Oxenham'
>> Cc: 'rdo-list(a)redhat.com'
>> Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup:
>> First Compute Node Doesn't Show Up in Hypervisor List
>>
>> The settings in the controller/compute host group parameters regarding qpidd are all default, except for the host, which is the private IP of the controller.
>>
>> I haven't made any changes outside of the Foreman host group parameters and I don't see any compute host group parameters that would allow me to specify whether a service uses Qpid authentication or not.
>>
>> I did change the compute host group parameter "auth_host" and "nova_host" (originally by default set to 127.0.0.1) to the private IP of the controller. Would that have any effect?
>>
>>
>> -----Original Message-----
>> From: Rhys Oxenham [mailto:roxenham@redhat.com]
>> Sent: Wednesday, July 02, 2014 12:35 PM
>> To: Lodgen, Brad
>> Cc: rdo-list(a)redhat.com
>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup:
>> First Compute Node Doesn't Show Up in Hypervisor List
>>
>> Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are.
>>
>> On 2 Jul 2014, at 18:30, Lodgen, Brad <Brad.Lodgen(a)centurylink.com> wrote:
>>
>>> # GENERATED BY PUPPET
>>> #
>>> # Configuration file for qpidd. Entries are of the form:
>>> # name=value
>>> #
>>> # (Note: no spaces on either side of '='). Using default settings:
>>> # "qpidd --help" or "man qpidd" for more details.
>>> port=5672
>>> max-connections=65535
>>> worker-threads=17
>>> connection-backlog=10
>>> auth=yes
>>> realm=QPID
>>>
>>>
>>> -----Original Message-----
>>> From: Rhys Oxenham [mailto:roxenham@redhat.com]
>>> Sent: Wednesday, July 02, 2014 12:27 PM
>>> To: Lodgen, Brad
>>> Cc: rdo-list(a)redhat.com
>>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup:
>>> First Compute Node Doesn't Show Up in Hypervisor List
>>>
>>> No worries!
>>>
>>> Can you paste out your /etc/qpidd.conf file from the controller?
>>> (Make sure you sanitise the output)
>>>
>>> Cheers
>>> Rhys
>>>
>>>
>>> On 2 Jul 2014, at 18:23, Lodgen, Brad <Brad.Lodgen(a)centurylink.com> wrote:
>>>
>>>> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"?
>>>>
>>>>
>>>>
>>>> On the compute node, I'm seeing this over and over in the compute log:
>>>>
>>>> Unable to connect to AMQP server: Error in sasl_client_start (-1)
>>>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.
>>>> Minor code may provide more information (Cannot determine realm for
>>>> numeric host address). Sleeping 5 seconds
>>>>
>>>> On the controller conductor log:
>>>>
>>>> Unable to connect to AMQP server: Error in sasl_client_start (-1)
>>>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.
>>>> Minor code may provide more information (Cannot determine realm for
>>>> numeric host address). Sleeping 5 seconds
>>>>
>>>> In the controller messages file:
>>>>
>>>> python: GSSAPI Error: Unspecified GSS failure. Minor code may
>>>> provide more information (Cannot determine realm for numeric host
>>>> address)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Rhys Oxenham [mailto:roxenham@redhat.com]
>>>> Sent: Wednesday, July 02, 2014 12:14 PM
>>>> To: Lodgen, Brad
>>>> Cc: rdo-list(a)redhat.com
>>>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup:
>>>> First Compute Node Doesn't Show Up in Hypervisor List
>>>>
>>>> Hi Brad,
>>>>
>>>> Have you checked the nova-compute logs in /var/log/nova/compute.log
>>>> (on your new compute node?)
>>>>
>>>> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor.
>>>>
>>>> Many thanks
>>>> Rhys
>>>>
>>>> On 2 Jul 2014, at 18:05, Lodgen, Brad <Brad.Lodgen(a)centurylink.com> wrote:
>>>>
>>>>> Hi folks,
>>>>>
>>>>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman.
>>>>>
>>>>> -Foreman host (purely for Foreman) -Controller host (applied
>>>>> Controller(Nova) host group) -Compute Host (applied Compute(Nova)
>>>>> host group)
>>>>> -2 other hosts (not host group applied, but one will be compute
>>>>> and one will be storage)
>>>>>
>>>>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard?
>>>>> _______________________________________________
>>>>> Rdo-list mailing list
>>>>> Rdo-list(a)redhat.com
>>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>
>>>
>>
>
Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List
by Lodgen, Brad
Follow-up from yesterday... is this the same default in RDO, to have qpidd.conf default to auth=yes?
Does that mean I have something on the compute side misconfigured? It looks to me like the username/password is the same on the controller/compute.
For now, I've had to disable puppet agent on the controller, as it keeps resetting "auth=no" back to "auth=yes", and I don't see a host group parameter that would change that.
-----Original Message-----
From: Lodgen, Brad
Sent: Wednesday, July 02, 2014 2:01 PM
To: 'Rhys Oxenham'
Cc: 'rdo-list(a)redhat.com'
Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List
So even though the default controller puppet module configures qpidd.conf to say "auth=yes", I changed it to "auth=no" and restarted qpidd service. I now see my compute host in the dashboard.
Is that a misconfiguration in RHOSv4 that I should submit for a change somewhere?
-----Original Message-----
From: Lodgen, Brad
Sent: Wednesday, July 02, 2014 12:53 PM
To: 'Rhys Oxenham'
Cc: 'rdo-list(a)redhat.com'
Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List
The settings in the controller/compute host group parameters regarding qpidd are all default, except for the host, which is the private IP of the controller.
I haven't made any changes outside of the Foreman host group parameters and I don't see any compute host group parameters that would allow me to specify whether a service uses Qpid authentication or not.
I did change the compute host group parameter "auth_host" and "nova_host" (originally by default set to 127.0.0.1) to the private IP of the controller. Would that have any effect?
-----Original Message-----
From: Rhys Oxenham [mailto:roxenham@redhat.com]
Sent: Wednesday, July 02, 2014 12:35 PM
To: Lodgen, Brad
Cc: rdo-list(a)redhat.com
Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List
Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are.
On 2 Jul 2014, at 18:30, Lodgen, Brad <Brad.Lodgen(a)centurylink.com> wrote:
> # GENERATED BY PUPPET
> #
> # Configuration file for qpidd. Entries are of the form:
> # name=value
> #
> # (Note: no spaces on either side of '='). Using default settings:
> # "qpidd --help" or "man qpidd" for more details.
> port=5672
> max-connections=65535
> worker-threads=17
> connection-backlog=10
> auth=yes
> realm=QPID
>
>
> -----Original Message-----
> From: Rhys Oxenham [mailto:roxenham@redhat.com]
> Sent: Wednesday, July 02, 2014 12:27 PM
> To: Lodgen, Brad
> Cc: rdo-list(a)redhat.com
> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup:
> First Compute Node Doesn't Show Up in Hypervisor List
>
> No worries!
>
> Can you paste out your /etc/qpidd.conf file from the controller? (Make
> sure you sanitise the output)
>
> Cheers
> Rhys
>
>
> On 2 Jul 2014, at 18:23, Lodgen, Brad <Brad.Lodgen(a)centurylink.com> wrote:
>
>> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"?
>>
>>
>>
>> On the compute node, I'm seeing this over and over in the compute log:
>>
>> Unable to connect to AMQP server: Error in sasl_client_start (-1)
>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.
>> Minor code may provide more information (Cannot determine realm for
>> numeric host address). Sleeping 5 seconds
>>
>> On the controller conductor log:
>>
>> Unable to connect to AMQP server: Error in sasl_client_start (-1)
>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.
>> Minor code may provide more information (Cannot determine realm for
>> numeric host address). Sleeping 5 seconds
>>
>> In the controller messages file:
>>
>> python: GSSAPI Error: Unspecified GSS failure. Minor code may
>> provide more information (Cannot determine realm for numeric host
>> address)
>>
>>
>>
>>
>>
>> -----Original Message-----
>> From: Rhys Oxenham [mailto:roxenham@redhat.com]
>> Sent: Wednesday, July 02, 2014 12:14 PM
>> To: Lodgen, Brad
>> Cc: rdo-list(a)redhat.com
>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup:
>> First Compute Node Doesn't Show Up in Hypervisor List
>>
>> Hi Brad,
>>
>> Have you checked the nova-compute logs in /var/log/nova/compute.log
>> (on your new compute node?)
>>
>> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor.
>>
>> Many thanks
>> Rhys
>>
>> On 2 Jul 2014, at 18:05, Lodgen, Brad <Brad.Lodgen(a)centurylink.com> wrote:
>>
>>> Hi folks,
>>>
>>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman.
>>>
>>> -Foreman host (purely for Foreman)
>>> -Controller host (applied Controller(Nova) host group) -Compute Host
>>> (applied Compute(Nova) host group)
>>> -2 other hosts (not host group applied, but one will be compute and
>>> one will be storage)
>>>
>>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard?
>>> _______________________________________________
>>> Rdo-list mailing list
>>> Rdo-list(a)redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>
Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List
by Lodgen, Brad
So even though the default controller puppet module configures qpidd.conf to say "auth=yes", I changed it to "auth=no" and restarted qpidd service. I now see my compute host in the dashboard.
Is that a misconfiguration in RHOSv4 that I should submit for a change somewhere?
-----Original Message-----
From: Lodgen, Brad
Sent: Wednesday, July 02, 2014 12:53 PM
To: 'Rhys Oxenham'
Cc: 'rdo-list(a)redhat.com'
Subject: RE: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List
The settings in the controller/compute host group parameters regarding qpidd are all default, except for the host, which is the private IP of the controller.
I haven't made any changes outside of the Foreman host group parameters and I don't see any compute host group parameters that would allow me to specify whether a service uses Qpid authentication or not.
I did change the compute host group parameter "auth_host" and "nova_host" (originally by default set to 127.0.0.1) to the private IP of the controller. Would that have any effect?
-----Original Message-----
From: Rhys Oxenham [mailto:roxenham@redhat.com]
Sent: Wednesday, July 02, 2014 12:35 PM
To: Lodgen, Brad
Cc: rdo-list(a)redhat.com
Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List
Have you specified Qpid authentication in any of the rest of the services? I suspect that Qpid is set up to use authentication but none of the other services are.
On 2 Jul 2014, at 18:30, Lodgen, Brad <Brad.Lodgen(a)centurylink.com> wrote:
> # GENERATED BY PUPPET
> #
> # Configuration file for qpidd. Entries are of the form:
> # name=value
> #
> # (Note: no spaces on either side of '='). Using default settings:
> # "qpidd --help" or "man qpidd" for more details.
> port=5672
> max-connections=65535
> worker-threads=17
> connection-backlog=10
> auth=yes
> realm=QPID
>
>
> -----Original Message-----
> From: Rhys Oxenham [mailto:roxenham@redhat.com]
> Sent: Wednesday, July 02, 2014 12:27 PM
> To: Lodgen, Brad
> Cc: rdo-list(a)redhat.com
> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup:
> First Compute Node Doesn't Show Up in Hypervisor List
>
> No worries!
>
> Can you paste out your /etc/qpidd.conf file from the controller? (Make
> sure you sanitise the output)
>
> Cheers
> Rhys
>
>
> On 2 Jul 2014, at 18:23, Lodgen, Brad <Brad.Lodgen(a)centurylink.com> wrote:
>
>> Thanks for the quick response! Based on the below log findings and what I just found searching, is this caused by the controller host group parameter "freeipa" being set to the default "false"? Change it to "true"?
>>
>>
>>
>> On the compute node, I'm seeing this over and over in the compute log:
>>
>> Unable to connect to AMQP server: Error in sasl_client_start (-1)
>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.
>> Minor code may provide more information (Cannot determine realm for
>> numeric host address). Sleeping 5 seconds
>>
>> On the controller conductor log:
>>
>> Unable to connect to AMQP server: Error in sasl_client_start (-1)
>> SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.
>> Minor code may provide more information (Cannot determine realm for
>> numeric host address). Sleeping 5 seconds
>>
>> In the controller messages file:
>>
>> python: GSSAPI Error: Unspecified GSS failure. Minor code may
>> provide more information (Cannot determine realm for numeric host
>> address)
>>
>>
>>
>>
>>
>> -----Original Message-----
>> From: Rhys Oxenham [mailto:roxenham@redhat.com]
>> Sent: Wednesday, July 02, 2014 12:14 PM
>> To: Lodgen, Brad
>> Cc: rdo-list(a)redhat.com
>> Subject: Re: [Rdo-list] RH OpenStack v4 Evaluation: Initial Setup:
>> First Compute Node Doesn't Show Up in Hypervisor List
>>
>> Hi Brad,
>>
>> Have you checked the nova-compute logs in /var/log/nova/compute.log
>> (on your new compute node?)
>>
>> This should point towards why it's unable to connect/start etc. I suspect that it's unable to join the message queue, and hence show up as an available hypervisor.
>>
>> Many thanks
>> Rhys
>>
>> On 2 Jul 2014, at 18:05, Lodgen, Brad <Brad.Lodgen(a)centurylink.com> wrote:
>>
>>> Hi folks,
>>>
>>> I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman.
>>>
>>> -Foreman host (purely for Foreman)
>>> -Controller host (applied Controller(Nova) host group) -Compute Host
>>> (applied Compute(Nova) host group)
>>> -2 other hosts (not host group applied, but one will be compute and
>>> one will be storage)
>>>
>>> Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard?
>>> _______________________________________________
>>> Rdo-list mailing list
>>> Rdo-list(a)redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>
[Rdo-list] RH OpenStack v4 Evaluation: Initial Setup: First Compute Node Doesn't Show Up in Hypervisor List
by Lodgen, Brad
Hi folks,
I have an issue where I've just done the initial setup and added a controller node, it finished, then I added a compute node, it finished, but I don't see the compute node in the hypervisor list on the dashboard. I'm using the RH OpenStack evaluation version 4. I have five hosts present in Foreman.
-Foreman host (purely for Foreman)
-Controller host (applied Controller(Nova) host group)
-Compute Host (applied Compute(Nova) host group)
-2 other hosts (not host group applied, but one will be compute and one will be storage)
Did I miss something on the controller/compute host group parameters that would cause it to not show up in the dashboard?
[Rdo-list] Problem regarding mysql.pp
by sharad aggarwal
Dear Admin,
I am trying to install the latest RDO release, i.e. OpenStack Icehouse, on CentOS
6.5 (64-bit), but I am getting the following error:
Applying 192.168.11.6_prescript.pp
192.168.11.6_prescript.pp: [ DONE ]
Applying 192.168.11.6_mysql.pp
Applying 192.168.11.6_amqp.pp
192.168.11.6_mysql.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.11.6_mysql.pp
Package mariadb-galera-server has not been found in enabled Yum repos.
You will find full trace in log /var/tmp/packstack/20140702-152003-VVOe1r/manifests/192.168.11.6_mysql.pp.log
Please check log file /var/tmp/packstack/20140702-152003-VVOe1r/openstack-setup.log for more information
I would like to inform you that I have installed MariaDB-Galera-server and
removed mysql-server. Earlier I was getting an error with prescript.pp, but
I resolved that by making a timeout change in the netns.pp file. I have also
executed "yum install iproute iputils".
The attached file holds the output of
/var/tmp/packstack/20140702-152003-VVOe1r/manifests/192.168.11.6_mysql.pp.log
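A quick diagnostic for this class of packstack failure (these commands are not from the original mail; they only query yum, so they should be safe to run on the affected host):

```shell
# Is mariadb-galera-server visible to yum at all?
yum list available mariadb-galera-server

# Which repos are enabled? An RDO/OpenStack repo should appear here
# (e.g. after installing the rdo-release RPM for the target release).
yum repolist enabled
```

If the first command reports no matching package, the error usually means the RDO repository is missing or disabled, rather than a problem with a locally installed MariaDB.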
Please help ASAP. Thanks
--
Regards,
Sharad Aggarwal
+91 9999 197 992
[Rdo-list] Nested RDO Icehouse nova-compute KVM / QEMU issues due to -cpu host
by Steven Ellis
So I'm having issues nesting RDO on my T440s laptop (Intel(R) Core(TM)
i7-4600U CPU @ 2.10GHz), and I'm hoping someone on the list can help.
My Physical Host (L0) is Fedora 19 running 3.14.4-100.fc19.x86_64 with
nesting turned on
My OpenStack Host is RHEL 6.5 or RHEL 7 (L1)
My Guest is Cirros (L2)
I'm installing RDO Icehouse under RHEL via
packstack --allinone --os-neutron-install=n
I then try to start up a Cirros guest (L2), and the guest never spawns.
Taking a look at the qemu command line, it looks as follows:
/usr/libexec/qemu-kvm \
-global virtio-blk-pci.scsi=off \
-nodefconfig \
-nodefaults \
-nographic \
-machine accel=kvm:tcg \
-cpu host,+kvmclock \
-m 500 \
-no-reboot \
-kernel /var/tmp/.guestfs-497/kernel.2647 \
-initrd /var/tmp/.guestfs-497/initrd.2647 \
-device virtio-scsi-pci,id=scsi \
-drive file=/var/lib/nova/instances/3ae072b4-f4bf-42cf-b3ea-27d9768bc4df/disk,cache=none,format=qcow2,id=hd0,if=none \
-device scsi-hd,drive=hd0 \
-drive file=/var/tmp/.guestfs-497/root.2647,snapshot=on,id=appliance,if=none,cache=unsafe \
-device scsi-hd,drive=appliance \
-device virtio-serial \
-serial stdio \
-device sga \
-chardev socket,path=/tmp/libguestfsKGbB3D/guestfsd.sock,id=channel0 \
-device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
-append panic=1 console=ttyS0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 TERM=linux
The issue appears to be caused by running with "-cpu host" in this
nesting combination.
Now if I run the qemu command directly on RHEL7 (L1) I get this error
KVM: entry failed, hardware error 0x7
Under RHEL 6.5 (L1) it is similar but not identical
kvm: unhandled exit 7
In both cases on my Fedora physical host (L0) I see
nested_vmx_run: VMCS MSR_{LOAD,STORE} unsupported
There does appear to be a Red Hat bugzilla for RHEL7 relating to this
but not for RHEL6
- https://bugzilla.redhat.com/show_bug.cgi?id=1038427
I can reproduce this issue using both RHEL 6.5 and RHEL 7 as my
OpenStack Host (L1). Has anyone else hit this issue?
Next I tried a workaround: editing the /etc/nova/nova.conf file and
forcing the CPU type for my guests under OpenStack
#cpu_mode=none
cpu_mode=custom
# Set to a named libvirt CPU model (see names listed in
# /usr/share/libvirt/cpu_map.xml). Only has effect if
# cpu_mode="custom" and virt_type="kvm|qemu" (string value)
# Deprecated group;name - DEFAULT;libvirt_cpu_model
#cpu_model=<None>
cpu_model=Conroe
The problem is that qemu is still run with "-cpu host,+kvmclock".
So am I hitting a secondary bug with nova-compute or is there another
way to force OpenStack to select a particular CPU subset for Nova?
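One way to check whether the cpu_mode/cpu_model settings are being honored at all is to inspect the running instance's libvirt domain XML (e.g. via "virsh dumpxml <instance>"). When cpu_mode=custom takes effect, the domain should contain a <cpu> element along these lines (a sketch of the expected result, not taken from this thread):

```xml
<cpu mode='custom' match='exact'>
  <model fallback='allow'>Conroe</model>
</cpu>
```

Note that nova.conf changes only take effect after restarting the openstack-nova-compute service; if qemu still launches with "-cpu host" after a restart, the setting is being ignored and the secondary-bug theory looks more plausible.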
Steve
--
Steven Ellis
Solution Architect - Red Hat New Zealand <http://www.redhat.co.nz/>
*E:* sellis(a)redhat.com <mailto:sellis@redhat.com>