Hi James,
Here are the details of the failed instance. Interestingly, when I re-ran the overcloud deployment, Swift didn't fail this time; it was the notCompute instance that did.
Here are the details:
[stack@localhost ~]$ nova list
+--------------------------------------+--------------------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                                 | Status | Task State | Power State | Networks            |
+--------------------------------------+--------------------------------------+--------+------------+-------------+---------------------+
| b815d299-4817-4491-b352-d09ab618bd77 | overcloud-BlockStorage0-zwfiycn67hpc | ACTIVE | -          | Running     | ctlplane=192.0.2.15 |
| 9b646a2a-d4b3-438b-94de-4e14bdbe1432 | overcloud-NovaCompute0-ubp6vlfjjepu  | ACTIVE | -          | Running     | ctlplane=192.0.2.14 |
| c0ad2f8f-ef91-4a6d-b229-c52b4a89bedd | overcloud-SwiftStorage0-xns634un3z7k | ACTIVE | -          | Running     | ctlplane=192.0.2.16 |
| 11d91a42-4244-48d7-8c4b-92db3a9c43b6 | overcloud-notCompute0-bw4mq7v2sh5y   | ERROR  | -          | NOSTATE     |                     |
+--------------------------------------+--------------------------------------+--------+------------+-------------+---------------------+
[stack@localhost ~]$ nova show 11d91a42-4244-48d7-8c4b-92db3a9c43b6
+--------------------------------------+---------------------------------------------------------------------------------------+
| Property                             | Value                                                                                 |
+--------------------------------------+---------------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                                |
| OS-EXT-AZ:availability_zone          | nova                                                                                  |
| OS-EXT-SRV-ATTR:host                 | -                                                                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                                     |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000016                                                                     |
| OS-EXT-STS:power_state               | 0                                                                                     |
| OS-EXT-STS:task_state                | -                                                                                     |
| OS-EXT-STS:vm_state                  | error                                                                                 |
| OS-SRV-USG:launched_at               | -                                                                                     |
| OS-SRV-USG:terminated_at             | -                                                                                     |
| accessIPv4                           |                                                                                       |
| accessIPv6                           |                                                                                       |
| config_drive                         |                                                                                       |
| created                              | 2014-07-21T11:18:32Z                                                                  |
| fault                                | {"message": "No valid host was found. ", "code": 500, "details": " File              |
|                                      | \"/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py\", line 108,    |
|                                      | in schedule_run_instance     raise exception.NoValidHost(reason=\"\")                 |
|                                      | ", "created": "2014-07-21T11:18:32Z"}                                                 |
| flavor                               | baremetal (b564fd03-bc8d-42f4-8c5b-264cfa62a655)                                      |
| hostId                               |                                                                                       |
| id                                   | 11d91a42-4244-48d7-8c4b-92db3a9c43b6                                                  |
| image                                | overcloud-control (cb938db4-2f1a-44d6-96f9-016c8cc7b406)                              |
| key_name                             | default                                                                               |
| metadata                             | {}                                                                                    |
| name                                 | overcloud-notCompute0-bw4mq7v2sh5y                                                    |
| os-extended-volumes:volumes_attached | []                                                                                    |
| status                               | ERROR                                                                                 |
| tenant_id                            | ae8b85d781ad443792f2a3516f38ed88                                                      |
| updated                              | 2014-07-21T11:18:32Z                                                                  |
| user_id                              | 921abf3732ce40d0b1502e9aa13c6c2a                                                      |
+--------------------------------------+---------------------------------------------------------------------------------------+
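The fault column above is the interesting part: it is a JSON blob reporting a NoValidHost error from the scheduler. As a side note, such blobs are plain JSON and easy to pick apart when the table wraps them awkwardly; a minimal sketch (plain Python, not part of any OpenStack tool, with the blob inlined from the table above, whitespace condensed):

```python
import json

# The fault value as shown by `nova show` above; the \" escapes are
# part of the JSON string itself, so they are doubled here in source.
raw = (
    '{"message": "No valid host was found. ", "code": 500, '
    '"details": " File \\"/usr/lib/python2.7/site-packages/nova/'
    'scheduler/filter_scheduler.py\\", line 108, in schedule_run_instance '
    'raise exception.NoValidHost(reason=\\"\\") ", '
    '"created": "2014-07-21T11:18:32Z"}'
)

fault = json.loads(raw)
print(fault["code"], fault["message"].strip())  # -> 500 No valid host was found.
```

"No valid host" means the scheduler found no node matching the requested baremetal flavor, which fits the shut-off baremetal_0 VM noted below.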
The only relevant log I could find is from the nova scheduler:
2014-07-21 04:25:41.473 31078 WARNING nova.scheduler.driver [req-a00295d2-7308-4bb9-a40d-0740fe852bf5 921abf3732ce40d0b1502e9aa13c6c2a ae8b85d781ad443792f2a3516f38ed88] [instance: b3b3f6f9-25fc-45db-b14e-f7daae1f3216] Setting instance to ERROR state.
I would also like to point out that I am seeing 5 VMs in my setup instead of 4:
[stack@devstack ~]$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     instack                        running
 9     baremetal_1                    running
 10    baremetal_2                    running
 11    baremetal_3                    running
 -     baremetal_0                    shut off
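The count is consistent with the virsh output itself: four baremetal domains plus the instack VM, with baremetal_0 shut off. A small sketch (plain Python, with the output above inlined) that tallies the domain states, for anyone scripting a similar check:

```python
# `virsh list --all` output from above, inlined for illustration.
virsh_out = """\
 2     instack        running
 9     baremetal_1    running
 10    baremetal_2    running
 11    baremetal_3    running
 -     baremetal_0    shut off"""

states = {}
for line in virsh_out.splitlines():
    # maxsplit=2 keeps multi-word states like "shut off" intact
    dom_id, name, state = line.split(None, 2)
    states[name] = state

running = sum(1 for s in states.values() if s == "running")
print(len(states), "domains,", running, "running")  # -> 5 domains, 4 running
```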
Regards,
Peeyush Gupta
From: James Slagle <jslagle(a)redhat.com>
To: Pradeep Kumar Surisetty <psuriset(a)linux.vnet.ibm.com>
Cc: rdo-list(a)redhat.com, deepthi(a)linux.vnet.ibm.com, Peeyush Gupta/India/IBM@IBMIN, Pradeep K Surisetty/India/IBM@IBMIN, anantyog(a)linux.vnet.ibm.com
Date: 07/21/2014 09:50 PM
Subject: Re: [Rdo-list] [RDO][Instack] heat is not able to create stack with instack
On Mon, Jul 21, 2014 at 05:59:25PM +0530, Pradeep Kumar Surisetty wrote:
Hi All,
I have been trying to set up instack with RDO. I have successfully installed the undercloud and am moving on to the overcloud. Now, when I run "instack-deploy-overcloud", I get the following error:
+ OVERCLOUD_YAML_PATH=overcloud.yaml
+ heat stack-create -f overcloud.yaml -P AdminToken=b003d63242f5db3e1ad4864ae66911e02ba19bcb -P AdminPassword=7bfe4d4a18280752ad07f259a69a3ed00db2ab44 -P CinderPassword=df0893b4355f3511a6d67538dd592d02d1bc11d3 -P GlancePassword=066f65f878157b438a916ccbd44e0b7037ee118f -P HeatPassword=58fda0e4d6708e0164167b11fe6fca6ab6b35ec6 -P NeutronPassword=80853ad029feb77bb7c60d035542f21aa5c24177 -P NovaPassword=331474580be53b78e40c91dfdfc2323578a035e7 -P NeutronPublicInterface=eth0 -P SwiftPassword=b0eca57b45ebf3dd5cae071dc3880888fb1d4840 -P SwiftHashSuffix=a8d87f3952d6f91da589fbef801bb92141fd1461 -P NovaComputeLibvirtType=qemu -P 'GlanceLogFile='\'''\''' -P NeutronDnsmasqOptions=dhcp-option-force=26,1400 overcloud
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| 0ca028e7-682b-41ef-8af0-b2eb67bee272 | overcloud  | CREATE_IN_PROGRESS | 2014-07-18T10:50:48Z |
+--------------------------------------+------------+--------------------+----------------------+
+ tripleo wait_for_stack_ready 220 10 overcloud
Command output matched 'CREATE_FAILED'. Exiting...
Now, I understand that the stack isn't being created, so I tried to check the state of the stack:
[stack@localhost ~]$ heat stack-list
+--------------------------------------+------------+---------------+----------------------+
| id                                   | stack_name | stack_status  | creation_time        |
+--------------------------------------+------------+---------------+----------------------+
| 0ca028e7-682b-41ef-8af0-b2eb67bee272 | overcloud  | CREATE_FAILED | 2014-07-18T10:50:48Z |
+--------------------------------------+------------+---------------+----------------------+
I even tried to create the stack manually, but ended up getting the same error.
Update: Here is the heat log:
2014-07-18 06:51:11.884 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:12.921 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:16.058 30750 ERROR heat.engine.resource [-] CREATE : Server "SwiftStorage0" [07e42c3d-0f1b-4bb9-b980-ffbb74ac770d] Stack "overcloud" [0ca028e7-682b-41ef-8af0-b2eb67bee272]
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Traceback (most recent call last):
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 420, in _do_action
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource     while not check(handle_data):
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 545, in check_create_complete
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource     return self._check_active(server)
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource   File "/usr/lib/python2.7/site-packages/heat/engine/resources/server.py", line 561, in _check_active
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource     raise exc
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource Error: Creation of server overcloud-SwiftStorage0-qdjqbif6peva failed.
2014-07-18 06:51:16.058 30750 TRACE heat.engine.resource
2014-07-18 06:51:16.255 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:16.939 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:17.368 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:17.638 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:18.158 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back to using default
2014-07-18 06:51:18.613 30750 WARNING heat.common.keystoneclient [-] stack_user_domain ID not set in heat.conf falling back ...
Hi Pradeep,
Can you run a "nova show <instance-id>" on the failed instance? And also
provide any tracebacks or errors from the nova compute log
under /var/log/nova?
--
-- James Slagle