Hi,
Can you check /var/log/nova/nova-scheduler.log to see whether it gives any
indication of why it's failing to start?
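For example (the log path and unit name below are the defaults, adjust if yours differ):

# tail -n 200 /var/log/nova/nova-scheduler.log
# journalctl -u openstack-nova-scheduler --no-pager | tail -n 200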
Thanks
On Fri, Aug 5, 2016 at 4:27 PM, Gunjan, Milind [CTO]
<Milind.Gunjan(a)sprint.com> wrote:
Hi Marius,
This is what I see when running the puppet script in debug mode:
Debug: Executing '/bin/systemctl start neutron-server'
Error: Could not start Service[neutron-server]: Execution of '/bin/systemctl start
neutron-server' returned 1: Job for neutron-server.service failed because a timeout
was exceeded. See "systemctl status neutron-server.service" and "journalctl
-xe" for details.
Wrapped exception:
Execution of '/bin/systemctl start neutron-server' returned 1: Job for
neutron-server.service failed because a timeout was exceeded. See "systemctl status
neutron-server.service" and "journalctl -xe" for details.
Error: /Stage[main]/Neutron::Server/Service[neutron-server]/ensure: change from stopped
to running failed: Could not start Service[neutron-server]: Execution of
'/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service
failed because a timeout was exceeded. See "systemctl status
neutron-server.service" and "journalctl -xe" for details.
Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]:
Dependency Service[neutron-server] has failures: true
Warning: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]:
Skipping because of failed dependencies
Notice: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Dependency
Service[neutron-server] has failures: true
Warning: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]: Skipping
because of failed dependencies
Notice: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: Dependency
Service[neutron-server] has failures: true
Warning: /Stage[main]/Neutron::Agents::L3/Service[neutron-l3]: Skipping because of failed
dependencies
Notice: /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: Dependency
Service[neutron-server] has failures: true
Warning: /Stage[main]/Neutron::Agents::Metadata/Service[neutron-metadata]: Skipping
because of failed dependencies
Also, the script is currently stuck at this step:
Debug: Executing '/bin/systemctl start openstack-nova-scheduler'
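While it hangs, the unit can be checked from a second shell, for example:

systemctl status openstack-nova-scheduler -l
journalctl -u openstack-nova-scheduler --no-pager | tail -n 100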
Best Regards,
Milind
-----Original Message-----
From: Marius Cornea [mailto:marius@remote-lab.net]
Sent: Thursday, August 04, 2016 4:26 AM
To: Gunjan, Milind [CTO] <Milind.Gunjan(a)sprint.com>
Cc: rdo-list(a)redhat.com
Subject: Re: [rdo-list] RDO TripleO Mitaka Overcloud Failing
OK, I don't actually see an error in the logs; the last thing that shows up is:
on controller-0:
[DEBUG] Running /var/lib/heat-config/hooks/puppet <
/var/lib/heat-config/deployed/c989f58d-cd38-4813-a174-7e42c82bcb6f.json
on compute-0:
[DEBUG] Running /var/lib/heat-config/hooks/puppet <
/var/lib/heat-config/deployed/c5265c58-96ae-49d5-9c1e-a38041e2b130.json
I suspect these steps are timing out, so let's try running them manually to figure out
what's going on. Running the commands manually will output a puppet apply command; the
one below is from my environment as an example:
# /var/lib/heat-config/hooks/puppet <
/var/lib/heat-config/deployed/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.json
[2016-08-04 08:12:21,609] (heat-config) [DEBUG] Running
FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff"
FACTER_fqdn="overcloud-controller-0.localdomain"
FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4"
puppet apply --detailed-exitcodes
/var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.pp
The next step is to stop it (Ctrl+C), copy the puppet apply command, add --debug, and run it:
#
FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff"
FACTER_fqdn="overcloud-controller-0.localdomain"
FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4"
puppet apply --detailed-exitcodes
/var/lib/heat-config/heat-config-puppet/d41cefd9-b70e-4e22-9e86-9a5cf5de5bff.pp
--debug
This should output puppet debug info that might lead us to where it gets stuck. Please
paste the output so we can investigate further.
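Capturing it to a file makes it easier to paste, e.g. by appending something like this to
the puppet apply command above (the log path is just an example):

# <puppet apply command from above> --debug 2>&1 | tee /tmp/puppet-debug.log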
Thanks
On Thu, Aug 4, 2016 at 3:22 AM, Gunjan, Milind [CTO] <Milind.Gunjan(a)sprint.com>
wrote:
> Thanks a lot Christopher for the suggestions.
>
> Marius: Thanks a lot for helping me out. I am attaching the requested logs.
>
> I tried to redeploy the overcloud with 3 controllers, but the issue remains the same.
> The overcloud stack deployment is failing at the post-deployment configuration steps as
> before. While going through /var/log/messages for the different services, it seems there
> is an issue with the haproxy service. The Neutron service is failing too, and the service
> endpoints configured through puppet are not reachable for any of the failed services. I
> have attached the os-collect-config journals from all four nodes.
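> For example, on the controller haproxy and one of the failing endpoints can be checked
> with something like this (the VIP placeholder and the default neutron-server port 9696
> are assumptions on my side):
>
> systemctl status haproxy -l
> curl -v http://<internal_api_vip>:9696/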
>
>
> Please let me know if there are any other logs I can provide or any other
> troubleshooting steps I can try.
>
> Best Regards,
> Milind
>
> -----Original Message-----
> From: Marius Cornea [mailto:marius@remote-lab.net]
> Sent: Wednesday, August 03, 2016 4:00 PM
> To: Gunjan, Milind [CTO] <Milind.Gunjan(a)sprint.com>
> Cc: rdo-list(a)redhat.com
> Subject: Re: [rdo-list] RDO TripleO Mitaka Non-HA Overcloud Failing
>
> Hi,
>
> Could you please ssh to the nodes, gather the os-collect-config journals (journalctl
-l -u os-collect-config) and attach them here?
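> For example, from the undercloud (using the ctlplane IPs from your nova list output):
>
> ssh heat-admin@192.168.149.9 'journalctl -l -u os-collect-config' > controller-0-os-collect-config.log
> ssh heat-admin@192.168.149.8 'journalctl -l -u os-collect-config' > compute-0-os-collect-config.log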
>
> Thank you,
> Marius
>
> On Wed, Aug 3, 2016 at 8:40 PM, Gunjan, Milind [CTO] <Milind.Gunjan(a)sprint.com>
wrote:
>> Hi All,
>>
>>
>>
>> I am currently working on a TripleO Mitaka OpenStack deployment on bare-metal
>> servers:
>>
>> Undercloud – 1 bare-metal server with 2 NICs (one for provisioning, one for
>> external network connectivity)
>>
>> Controller – 1 bare-metal server (6 NICs, with each OpenStack VLAN on a
>> separate NIC)
>>
>> Compute – 1 bare-metal server
>>
>>
>>
>> I followed Graeme's instructions here:
>> https://www.redhat.com/archives/rdo-list/2016-June/msg00049.html
>> to set up the undercloud. The undercloud deployment was successful, and all the
>> images required for overcloud deployment were properly built as per the instructions.
>> I would like to mention that I used libvirt tools to modify the root
>> password on overcloud-full.qcow2, and we also modified the grub file
>> to include “net.ifnames=0 biosdevname=0” to restore the old interface naming.
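>> (For reference, one way to do both tweaks is with virt-customize; the password below is
>> a placeholder and the sed line assumes the image uses /etc/default/grub:)
>>
>> virt-customize -a overcloud-full.qcow2 \
>>   --root-password password:<new-password> \
>>   --run-command 'sed -i "s/^GRUB_CMDLINE_LINUX=\"/&net.ifnames=0 biosdevname=0 /" /etc/default/grub'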
>>
>>
>>
>> I was able to successfully introspect the 2 servers to be used for the
>> controller and compute nodes. Also, we set the disk serial number
>> discovered during introspection as the root device hint:
>>
>> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add properties/root_device='{"serial": "618e728372833010c79bead9066f0f9e"}'
>>
>> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add properties/root_device='{"serial": "618e7283728347101f2107b511603adc"}'
>>
>>
>>
>> Next, we added the compute and control profile tags to the respective
>> introspected nodes, with the local boot option:
>>
>>
>>
>> ironic node-update 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 add properties/capabilities='profile:control,boot_option:local'
>>
>> ironic node-update afcfbee3-3108-48da-a6da-aba8f422642c add properties/capabilities='profile:compute,boot_option:local'
>>
>>
>>
>> We used separate NIC templates for the controller and compute nodes, which are
>> attached along with the network-environment.yaml file. The default
>> network isolation template file has been used.
>>
>>
>>
>>
>>
>> The deployment script looks like this:
>>
>> #!/bin/bash
>>
>> DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
>>
>> template_base_dir="$DIR"
>>
>> ntpserver=<sprint.ntp.server.ip> #Sprint LAB
>>
>> openstack overcloud deploy --templates \
>>   -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
>>   -e ${template_base_dir}/environments/network-environment.yaml \
>>   --control-flavor control --compute-flavor compute \
>>   --control-scale 1 --compute-scale 1 \
>>   --ntp-server $ntpserver \
>>   --neutron-network-type vxlan --neutron-tunnel-types vxlan --debug
>>
>>
>>
>> Heat stack deployment runs for a really long time (more than 4 hours) and gets
>> stuck at the post-deployment configuration steps. Please find below a capture
>> taken during the install:
>>
>>
>>
>>
>>
>> Every 2.0s: ironic node-list && nova list && heat stack-list && heat resource-list -n5 overcloud | grep -vi complete        Wed Aug  3 17:33:37 2016
>>
>>
>>
>>
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
>>
>> | UUID | Name | Instance UUID
>> | Power State | Provisioning State | Maintenance |
>>
>>
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
>>
>> | 604f7dfc-38af-4fe0-8986-4c8ac5f956e2 | None |
>> 9e7aae15-cabc-4489-a1b2-778915a78df2 | power on | active |
>> False |
>>
>> | afcfbee3-3108-48da-a6da-aba8f422642c | None |
>> c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | power on | active |
>> False |
>>
>>
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
>>
>>
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
>>
>> | ID | Name | Status |
>> Task State | Power State | Networks |
>>
>>
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
>>
>> | 9e7aae15-cabc-4489-a1b2-778915a78df2 | overcloud-controller-0 |
>> | ACTIVE |
>> - | Running | ctlplane=192.168.149.9 |
>>
>> | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a | overcloud-novacompute-0 |
>> | ACTIVE |
>> - | Running | ctlplane=192.168.149.8 |
>>
>>
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
>>
>>
+--------------------------------------+------------+---------------+---------------------+--------------+
>>
>> | id | stack_name | stack_status |
>> creation_time | updated_time |
>>
>>
+--------------------------------------+------------+---------------+---------------------+--------------+
>>
>> | 26ee0150-4cfa-4268-9107-8bfbf6712913 | overcloud | CREATE_FAILED |
>> 2016-08-03T08:11:34 | None |
>>
>>
+--------------------------------------+------------+---------------+---------------------+--------------+
>>
>>
+---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------
>>
>>
---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+
>>
>> | resource_name | physical_resource_id
>> | resource_type
>>
>> | resource_status | updated_time | stack_name
>> |
>>
>>
+---------------------------------------------+-----------------------------------------------+------------------------------------------------------------------------
>>
>>
---------+--------------------+---------------------+---------------------------------------------------------------------------------------------------------------+
>>
>> | ComputeNodesPostDeployment |
>> 3797aec6-e543-4dda-9cd1-c7261e827a64 |
>> OS::TripleO::ComputePostDeployment
>>
>> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud
>> |
>>
>> | ControllerNodesPostDeployment |
>> 6ad9f88c-5c55-4125-97f1-eb0e33329d16 |
>> OS::TripleO::ControllerPostDeployment
>>
>> | CREATE_FAILED | 2016-08-03T08:11:35 | overcloud
>> |
>>
>> | ComputePuppetDeployment |
>> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f |
>> OS::Heat::StructuredDeployments
>>
>> | CREATE_FAILED | 2016-08-03T08:29:19 |
>> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy
>> |
>>
>> | ControllerOvercloudServicesDeployment_Step4 |
>> 15509f59-ff28-43af-95dd-6247a6a32c2d |
>> OS::Heat::StructuredDeployments
>>
>> | CREATE_FAILED | 2016-08-03T08:29:20 |
>> overcloud-ControllerNodesPostDeployment-35y7uafngfwj
>> |
>>
>> | 0 |
>> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e |
>> OS::Heat::StructuredDeployment
>>
>> | CREATE_IN_PROGRESS | 2016-08-03T08:30:04 |
>> overcloud-ComputeNodesPostDeployment-6vxfu2g2qucy-ComputePuppetDeploy
>> m
>> ent-cpahcct3tfw3
>> |
>>
>> | 0 |
>> 5e9308f7-c3a9-4a94-a017-e1acb694c036 |
>> OS::Heat::StructuredDeployment
>>
>>
>>
>>
>>
>> [stack@mitaka-uc ~]$ openstack software deployment show
>> 5e9308f7-c3a9-4a94-a017-e1acb694c036
>>
>> +---------------+--------------------------------------+
>>
>> | Field | Value |
>>
>> +---------------+--------------------------------------+
>>
>> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 |
>>
>> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 |
>>
>> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 |
>>
>> | creation_time | 2016-08-03T08:32:10 |
>>
>> | updated_time | |
>>
>> | status | IN_PROGRESS |
>>
>> | status_reason | Deploy data available |
>>
>> | input_values | {} |
>>
>> | action | CREATE |
>>
>> +---------------+--------------------------------------+
>>
>>
>>
>> [stack@mitaka-uc ~]$ openstack software deployment show --long
>> 5e9308f7-c3a9-4a94-a017-e1acb694c036
>>
>> +---------------+--------------------------------------+
>>
>> | Field | Value |
>>
>> +---------------+--------------------------------------+
>>
>> | id | 5e9308f7-c3a9-4a94-a017-e1acb694c036 |
>>
>> | server_id | 9e7aae15-cabc-4489-a1b2-778915a78df2 |
>>
>> | config_id | 86d49e66-2f25-4cb1-b623-5ae87b01bb64 |
>>
>> | creation_time | 2016-08-03T08:32:10 |
>>
>> | updated_time | |
>>
>> | status | IN_PROGRESS |
>>
>> | status_reason | Deploy data available |
>>
>> | input_values | {} |
>>
>> | action | CREATE |
>>
>> | output_values | None |
>>
>> +---------------+--------------------------------------+
>>
>>
>>
>> [stack@mitaka-uc ~]$ openstack stack resource list
>> 3797aec6-e543-4dda-9cd1-c7261e827a64
>>
>>
+-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+
>>
>> | resource_name | physical_resource_id |
>> resource_type | resource_status |
>> updated_time |
>>
>>
+-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+
>>
>> | ComputeArtifactsConfig | a33cd04d-61ab-4429-8565-182409c2b97f |
>> file:///usr/share/openstack-tripleo-heat- | CREATE_COMPLETE |
>> 2016-08-03T08:29:19 |
>>
>> | | |
>> templates/puppet/deploy-artifacts.yaml | |
>> |
>>
>> | ComputePuppetConfig | 5bb712b0-5358-46c7-a444-f9adedfedd50 |
>> OS::Heat::SoftwareConfig | CREATE_COMPLETE |
>> 2016-08-03T08:29:19 |
>>
>> | ComputePuppetDeployment | 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f |
>> OS::Heat::StructuredDeployments | CREATE_FAILED |
>> 2016-08-03T08:29:19 |
>>
>> | ComputeArtifactsDeploy | 1d13bf34-fc66-4bf1-a3b7-1dd815f58f5a |
>> OS::Heat::StructuredDeployments | CREATE_COMPLETE |
>> 2016-08-03T08:29:19 |
>>
>> | ExtraConfig | |
>> OS::TripleO::NodeExtraConfigPost | INIT_COMPLETE |
>> 2016-08-03T08:29:19 |
>>
>>
+-------------------------+--------------------------------------+-------------------------------------------------+-----------------+---------------------+
>>
>>
>>
>> [stack@mitaka-uc ~]$ openstack stack resource list
>> 8b199f85-e4f9-48ad-9aee-b1cdf4900b9f
>>
>>
+---------------+--------------------------------------+--------------------------------+--------------------+---------------------+
>>
>> | resource_name | physical_resource_id | resource_type
>> | resource_status | updated_time |
>>
>>
+---------------+--------------------------------------+--------------------------------+--------------------+---------------------+
>>
>> | 0 | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e |
>> OS::Heat::StructuredDeployment | CREATE_IN_PROGRESS |
>> 2016-08-03T08:30:04 |
>>
>>
+---------------+--------------------------------------+--------------------------------+--------------------+---------------------+
>>
>> [stack@mitaka-uc ~]$ openstack software deployment show
>> 7cd0aa3d-742f-4e78-99ca-b2a575913f8e
>>
>> +---------------+--------------------------------------+
>>
>> | Field | Value |
>>
>> +---------------+--------------------------------------+
>>
>> | id | 7cd0aa3d-742f-4e78-99ca-b2a575913f8e |
>>
>> | server_id | c1ab52a9-461a-4a11-a13e-e57ff0a3ae2a |
>>
>> | config_id | 24e5c0db-f84f-4a94-8f8e-8e38e73ccc86 |
>>
>> | creation_time | 2016-08-03T08:30:05 |
>>
>> | updated_time | |
>>
>> | status | IN_PROGRESS |
>>
>> | status_reason | Deploy data available |
>>
>> | input_values | {} |
>>
>> | action | CREATE |
>>
>> +---------------+--------------------------------------+
>>
>>
>>
>> The keystonerc file was not generated. Please find below the openstack-status
>> command output on the controller and compute nodes.
>>
>>
>>
>> [heat-admin@overcloud-controller-0 ~]$ openstack-status
>>
>> == Nova services ==
>>
>> openstack-nova-api: active
>>
>> openstack-nova-compute: inactive (disabled on boot)
>>
>> openstack-nova-network: inactive (disabled on boot)
>>
>> openstack-nova-scheduler: activating(disabled on boot)
>>
>> openstack-nova-cert: active
>>
>> openstack-nova-conductor: active
>>
>> openstack-nova-console: inactive (disabled on boot)
>>
>> openstack-nova-consoleauth: active
>>
>> openstack-nova-xvpvncproxy: inactive (disabled on boot)
>>
>> == Glance services ==
>>
>> openstack-glance-api: active
>>
>> openstack-glance-registry: active
>>
>> == Keystone service ==
>>
>> openstack-keystone: inactive (disabled on boot)
>>
>> == Horizon service ==
>>
>> openstack-dashboard: uncontactable
>>
>> == neutron services ==
>>
>> neutron-server: failed (disabled on boot)
>>
>> neutron-dhcp-agent: inactive (disabled on boot)
>>
>> neutron-l3-agent: inactive (disabled on boot)
>>
>> neutron-metadata-agent: inactive (disabled on boot)
>>
>> neutron-lbaas-agent: inactive (disabled on boot)
>>
>> neutron-openvswitch-agent: inactive (disabled on boot)
>>
>> neutron-metering-agent: inactive (disabled on boot)
>>
>> == Swift services ==
>>
>> openstack-swift-proxy: active
>>
>> openstack-swift-account: active
>>
>> openstack-swift-container: active
>>
>> openstack-swift-object: active
>>
>> == Cinder services ==
>>
>> openstack-cinder-api: active
>>
>> openstack-cinder-scheduler: active
>>
>> openstack-cinder-volume: active
>>
>> openstack-cinder-backup: inactive (disabled on boot)
>>
>> == Ceilometer services ==
>>
>> openstack-ceilometer-api: active
>>
>> openstack-ceilometer-central: active
>>
>> openstack-ceilometer-compute: inactive (disabled on boot)
>>
>> openstack-ceilometer-collector: active
>>
>> openstack-ceilometer-notification: active
>>
>> == Heat services ==
>>
>> openstack-heat-api: inactive (disabled on boot)
>>
>> openstack-heat-api-cfn: active
>>
>> openstack-heat-api-cloudwatch: inactive (disabled on boot)
>>
>> openstack-heat-engine: inactive (disabled on boot)
>>
>> == Sahara services ==
>>
>> openstack-sahara-api: active
>>
>> openstack-sahara-engine: active
>>
>> == Support services ==
>>
>> libvirtd: active
>>
>> openvswitch: active
>>
>> dbus: active
>>
>> target: active
>>
>> rabbitmq-server: active
>>
>> memcached: active
>>
>>
>>
>>
>>
>> [heat-admin@overcloud-novacompute-0 ~]$ openstack-status
>>
>> == Nova services ==
>>
>> openstack-nova-api: inactive (disabled on boot)
>>
>> openstack-nova-compute: activating(disabled on boot)
>>
>> openstack-nova-network: inactive (disabled on boot)
>>
>> openstack-nova-scheduler: inactive (disabled on boot)
>>
>> openstack-nova-cert: inactive (disabled on boot)
>>
>> openstack-nova-conductor: inactive (disabled on boot)
>>
>> openstack-nova-console: inactive (disabled on boot)
>>
>> openstack-nova-consoleauth: inactive (disabled on boot)
>>
>> openstack-nova-xvpvncproxy: inactive (disabled on boot)
>>
>> == Glance services ==
>>
>> openstack-glance-api: inactive (disabled on boot)
>>
>> openstack-glance-registry: inactive (disabled on boot)
>>
>> == Keystone service ==
>>
>> openstack-keystone: inactive (disabled on boot)
>>
>> == Horizon service ==
>>
>> openstack-dashboard: uncontactable
>>
>> == neutron services ==
>>
>> neutron-server: inactive (disabled on boot)
>>
>> neutron-dhcp-agent: inactive (disabled on boot)
>>
>> neutron-l3-agent: inactive (disabled on boot)
>>
>> neutron-metadata-agent: inactive (disabled on boot)
>>
>> neutron-lbaas-agent: inactive (disabled on boot)
>>
>> neutron-openvswitch-agent: active
>>
>> neutron-metering-agent: inactive (disabled on boot)
>>
>> == Swift services ==
>>
>> openstack-swift-proxy: inactive (disabled on boot)
>>
>> openstack-swift-account: inactive (disabled on boot)
>>
>> openstack-swift-container: inactive (disabled on boot)
>>
>> openstack-swift-object: inactive (disabled on boot)
>>
>> == Cinder services ==
>>
>> openstack-cinder-api: inactive (disabled on boot)
>>
>> openstack-cinder-scheduler: inactive (disabled on boot)
>>
>> openstack-cinder-volume: inactive (disabled on boot)
>>
>> openstack-cinder-backup: inactive (disabled on boot)
>>
>> == Ceilometer services ==
>>
>> openstack-ceilometer-api: inactive (disabled on boot)
>>
>> openstack-ceilometer-central: inactive (disabled on boot)
>>
>> openstack-ceilometer-compute: inactive (disabled on boot)
>>
>> openstack-ceilometer-collector: inactive (disabled on boot)
>>
>> openstack-ceilometer-notification: inactive (disabled on boot)
>>
>> == Heat services ==
>>
>> openstack-heat-api: inactive (disabled on boot)
>>
>> openstack-heat-api-cfn: inactive (disabled on boot)
>>
>> openstack-heat-api-cloudwatch: inactive (disabled on boot)
>>
>> openstack-heat-engine: inactive (disabled on boot)
>>
>> == Sahara services ==
>>
>> openstack-sahara-all: inactive (disabled on boot)
>>
>> == Support services ==
>>
>> libvirtd: active
>>
>> openvswitch: active
>>
>> dbus: active
>>
>> rabbitmq-server: inactive (disabled on boot)
>>
>> memcached: inactive (disabled on boot)
>>
>>
>>
>>
>>
>>
>>
>> Please let me know if there are any other logs I can provide
>> that would help with troubleshooting.
>>
>>
>>
>>
>>
>> Thanks a lot in advance for your help and support.
>>
>>
>>
>> Best Regards,
>>
>> Milind Gunjan
>>
>>
>>
>>