Can you run ironic node-show for your Ironic nodes and post the results? Also check the
following suggestion in case you're experiencing the same issue:
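Something along these lines should show the relevant details (the UUIDs are taken from the
node-list output further down in this thread, so adjust them to your environment):

[stack@undercloud ~]$ source stackrc
[stack@undercloud ~]$ ironic node-list
[stack@undercloud ~]$ ironic node-show acfc1bb4-469d-479a-af70-c0bdd669b32d
[stack@undercloud ~]$ ironic node-show b5811c06-d5d1-41f1-87b3-2fd55ae63553

The driver_info, properties, power_state and provision_state fields are the interesting bits.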
From: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
To: "Marius Cornea" <mcornea(a)redhat.com>
Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
Sent: Wednesday, October 14, 2015 3:22:20 PM
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was
found"
Well in the early stage of the introspection I can see Client IP of nodes
(screenshot attached). But then I see continuous ironic-python-agent errors
(screenshot-2 attached). Errors repeat after time out.. And the nodes are
not powered off.
Seems like I am stuck in introspection stage..
I can use ipmitool command to successfully power on/off the nodes
[stack@undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U
root -R 3 -N 5 -P <password> power status
Chassis Power is on
[stack@undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P
<password> chassis power status
Chassis Power is on
[stack@undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P
<password> chassis power off
Chassis Power Control: Down/Off
[stack@undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P
<password> chassis power status
Chassis Power is off
[stack@undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P
<password> chassis power on
Chassis Power Control: Up/On
[stack@undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P
<password> chassis power status
Chassis Power is on
Esra ÇELİK
TÜBİTAK BİLGEM
www.bilgem.tubitak.gov.tr
celik.esra(a)tubitak.gov.tr
----- Original Message -----
From: "Marius Cornea" <mcornea(a)redhat.com>
To: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
Sent: Wednesday, October 14, 2015 14:59:30
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
----- Original Message -----
> From: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> To: "Marius Cornea" <mcornea(a)redhat.com>
> Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
> Sent: Wednesday, October 14, 2015 10:49:01 AM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> was found"
>
>
> Well, today I started by re-installing the OS and nothing seems wrong with the
> undercloud installation, then:
>
>
>
>
>
>
> I see an error during image build
>
>
> [stack@undercloud ~]$ openstack overcloud image build --all
> ...
> a lot of log
> ...
> ++ cat /etc/dib_dracut_drivers
> + dracut -N --install ' curl partprobe lsblk targetcli tail head awk
> ifconfig
> cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug
> rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ /
> --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net
> virtio_blk target_core_mod iscsi_target_mod target_core_iblock
> target_core_file target_core_pscsi configfs' -o 'dash plymouth'
> /tmp/ramdisk
> cat: write error: Broken pipe
> + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> + chmod o+r /tmp/kernel
> + trap EXIT
> + target_tag=99-build-dracut-ramdisk
> + date +%s.%N
> + output '99-build-dracut-ramdisk completed'
> ...
> a lot of log
> ...
You can ignore that AFAIK; if you end up having all the required images it
should be OK.
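If the build did finish, something like the following can confirm you have the images and get
them into Glance (the exact file names are my assumption about what a Liberty image build
produces, so adjust them to what you actually have):

[stack@undercloud ~]$ ls -lh overcloud-full.qcow2 overcloud-full.vmlinuz overcloud-full.initrd \
                            ironic-python-agent.kernel ironic-python-agent.initramfs
[stack@undercloud ~]$ openstack overcloud image upload
[stack@undercloud ~]$ glance image-list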
>
> Then, during introspection stage I see ironic-python-agent errors on nodes
> (screenshot attached) and the following warnings
>
That looks odd. Is it showing up in the early stage of the introspection? At
some point the nodes should receive an address via DHCP and the "Network is
unreachable" error should disappear. Does the introspection complete, and do
the nodes get turned off afterwards?
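To see where introspection stands from the undercloud side, something along these lines should
help (assuming the Liberty-era inspector client and service names, so treat it as a sketch):

[stack@undercloud ~]$ openstack baremetal introspection bulk status
[stack@undercloud ~]$ sudo journalctl -fl -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq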
>
>
> [root@localhost ~]# journalctl -fl -u openstack-ironic-conductor.service |
> grep -i "warning\|error"
> Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619
> WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_url"
> from group "pxe" is deprecated. Use option "http_url" from group "deploy".
> Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119 619
> WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ] Option "http_root"
> from group "pxe" is deprecated. Use option "http_root" from group "deploy".
>
>
> Before deployment ironic node-list:
>
This is odd too, as I'd expect the nodes to be powered off before running the
deployment.
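If the nodes stay powered on after introspection, you can try turning them off through Ironic
before deploying, e.g. (UUIDs from the node-list below):

[stack@undercloud ~]$ ironic node-set-power-state acfc1bb4-469d-479a-af70-c0bdd669b32d off
[stack@undercloud ~]$ ironic node-set-power-state b5811c06-d5d1-41f1-87b3-2fd55ae63553 off
[stack@undercloud ~]$ ironic node-list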
>
>
> [stack@undercloud ~]$ ironic node-list
>
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
>
> During deployment I get the following errors:
>
> [root@localhost ~]# journalctl -fl -u openstack-ironic-conductor.service |
> grep -i "warning\|error"
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619
> ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting "ipmitool -I
> lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f /tmp/tmpSCKHIv power
> status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553. Error: Unexpected error while
> running command.
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739 619
> WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for node
> b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while running
> command.
> Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740 619
> WARNING ironic.conductor.manager [-] During sync_power_state, could not get power
> state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of 3. Error: IPMI
> call failed: power status.
>
This looks like an IPMI error. Can you try to manually run commands using
ipmitool and see if you get any success? It's also worth filing a bug with
details such as the ipmitool version, server model, and DRAC firmware version.
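For the bug report, something like this should collect most of it (mc info output varies per
BMC, so take the exact fields as an assumption):

[stack@undercloud ~]$ rpm -q ipmitool
[stack@undercloud ~]$ ipmitool -V
[stack@undercloud ~]$ ipmitool -I lanplus -H 192.168.0.18 -U root -P <password> mc info
[stack@undercloud ~]$ ipmitool -I lanplus -H 192.168.0.18 -U root -P <password> chassis status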
>
>
>
>
>
> Thanks a lot
>
>
>
> ----- Original Message -----
>
> From: "Marius Cornea" <mcornea(a)redhat.com>
> To: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
> Sent: Tuesday, October 13, 2015 21:16:14
> Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> host was found"
>
>
> ----- Original Message -----
> > From: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> > To: "Marius Cornea" <mcornea(a)redhat.com>
> > Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>,
rdo-list(a)redhat.com
> > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > host was found"
> >
> > During deployment they are powering on and deploying the images. I see a
> > lot of connection error messages about ironic-python-agent but ignore them
> > as mentioned here
> > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
>
> That was referring to the introspection stage. From what I can tell you are
> experiencing issues during deployment, as it fails to provision the nova
> instances. Can you check whether the nodes get powered on during that stage?
>
> Make sure that before the overcloud deploy the ironic nodes are available for
> provisioning (run ironic node-list and check the provisioning state column).
> Also check that you didn't miss any step in the docs in regards to kernel
> and ramdisk assignment, introspection, and flavor creation (so it matches the
> nodes' resources):
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty...
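A quick pre-deploy sanity check could look roughly like this (command and flavor names are as I
recall them from the Liberty docs, so treat this as a sketch):

[stack@undercloud ~]$ source stackrc
[stack@undercloud ~]$ ironic node-list                      # Provisioning State should be "available"
[stack@undercloud ~]$ openstack baremetal configure boot    # re-assign the deploy kernel/ramdisk to the nodes
[stack@undercloud ~]$ glance image-list                     # bm-deploy-kernel and bm-deploy-ramdisk present?
[stack@undercloud ~]$ openstack flavor show baremetal       # vcpus/ram/disk must not exceed the nodes' properties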
>
>
> > In the instackenv.json file I do not need to add the undercloud node, or do
> > I?
>
> No, the nodes' details should be enough.
>
> > And which log files should I watch during deployment?
>
> You can check the openstack-ironic-conductor logs (journalctl -fl -u
> openstack-ironic-conductor.service) and the logs in /var/log/nova.
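For example, keeping these two open in separate terminals during the deploy should surface most
failures (the exact file names under /var/log/nova may differ; this is just a sketch):

[stack@undercloud ~]$ sudo journalctl -fl -u openstack-ironic-conductor.service
[stack@undercloud ~]$ sudo tail -f /var/log/nova/nova-scheduler.log /var/log/nova/nova-compute.log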
>
> > Thanks
> > Esra
> >
> >
> > ----- Original Message -----
> > From: Marius Cornea <mcornea(a)redhat.com>
> > To: Esra Celik <celik.esra(a)tubitak.gov.tr>
> > Cc: Ignacio Bravo <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
> > Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> >
> > ----- Original Message -----
> > > From: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> > > To: "Ignacio Bravo" <ibravo(a)ltgfederal.com>
> > > Cc: rdo-list(a)redhat.com
> > > Sent: Tuesday, October 13, 2015 3:47:57 PM
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > Actually I re-installed the OS for the Undercloud before deploying. However, I
> > > did not re-install the OS on the Compute and Controller nodes. I will reinstall
> > > the base OS for them too, and retry.
> >
> > You don't need to reinstall the OS on the controller and compute; they will
> > get the image served by the undercloud. I'd recommend that during deployment
> > you watch the servers' consoles and make sure they get powered on, PXE boot,
> > and actually get the image deployed.
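Alongside the console, something like this from the undercloud gives a rough view of the
provisioning progress (just a sketch):

[stack@undercloud ~]$ watch -n 10 "nova list; ironic node-list"
[stack@undercloud ~]$ heat resource-list -n 5 overcloud | grep -v COMPLETE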
> >
> > Thanks
> >
> > > Thanks
> > >
> > > Esra ÇELİK
> > > TÜBİTAK BİLGEM
> > > www.bilgem.tubitak.gov.tr
> > > celik.esra(a)tubitak.gov.tr
> > >
> > > From: "Ignacio Bravo" <ibravo(a)ltgfederal.com>
> > > To: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> > > Cc: rdo-list(a)redhat.com
> > > Sent: Tuesday, October 13, 2015 16:36:06
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > >
> > > > Esra,
> > > >
> > > > I encountered the same problem after deleting the stack and re-deploying.
> > > >
> > > > It turns out that 'heat stack-delete overcloud' does remove the nodes from
> > > > 'nova list' and one would assume that the baremetal servers are now ready to
> > > > be used for the next stack, but when redeploying, I get the same message of
> > > > not enough hosts available.
> > > >
> > > > You can look into the nova logs and it mentions something about 'node xxx is
> > > > already associated with UUID yyyy' and 'I tried 3 times and I'm giving up'.
> > > > The issue is that the UUID yyyy belonged to a prior unsuccessful deployment.
> > > >
> > > > I'm now redeploying the basic OS to start from scratch again.
> > > >
> > > > IB
> > > >
> > > > __
> > > > Ignacio Bravo
> > > > LTG Federal, Inc
> > > > www.ltgfederal.com
> > > > Office: (703) 951-7760
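For anyone hitting the stale-association problem Ignacio describes, a rough way to check for it
from the undercloud, and a last-resort workaround to clear it (assuming the Liberty-era CLIs;
not a supported procedure), is:

[stack@undercloud ~]$ nova list                                  # any leftover instances from the failed stack?
[stack@undercloud ~]$ ironic node-list                           # Instance UUID column should be empty (None)
[stack@undercloud ~]$ nova delete <stale-instance-uuid>          # placeholder UUID: remove a leftover instance, if any
[stack@undercloud ~]$ ironic node-set-maintenance <node-uuid> true
[stack@undercloud ~]$ ironic node-update <node-uuid> remove instance_uuid
[stack@undercloud ~]$ ironic node-set-maintenance <node-uuid> false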
> > > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra(a)tubitak.gov.tr > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > [stack@undercloud ~]$ openstack overcloud deploy --templates
> > > > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > > > Stack failed with status: Resource CREATE failed: resources.Compute:
> > > > > ResourceInError: resources[0].resources.NovaCompute: Went to status ERROR
> > > > > due to "Message: No valid host was found. There are not enough hosts
> > > > > available., Code: 500"
> > > > > Heat Stack create failed.
> > > > >
> > > > > Here are some logs:
> > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE    Tue Oct 13 16:18:17 2015
> > > > >
> > > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > > > | resource_name | physical_resource_id                 | resource_type            | resource_status    | updated_time        | stack_name                                       |
> > > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > > > | Compute       | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup  | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > > | Controller    | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup  | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > > | 0             | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller  | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs                |
> > > > > | 0             | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute     | CREATE_FAILED      | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r                   |
> > > > > | Controller    | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server         | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> > > > > | NovaCompute   | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server         | CREATE_FAILED      | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef    |
> > > > > +---------------+--------------------------------------+--------------------------+--------------------+---------------------+--------------------------------------------------+
> > > > > [stack@undercloud ~]$ heat resource-show overcloud Compute
> > > > > +------------------------+---------------------------------------------------------------------------------------+
> > > > > | Property               | Value                                                                                 |
> > > > > +------------------------+---------------------------------------------------------------------------------------+
> > > > > | attributes             | {                                                                                     |
> > > > > |                        |   "attributes": null,                                                                 |
> > > > > |                        |   "refs": null                                                                        |
> > > > > |                        | }                                                                                     |
> > > > > | creation_time          | 2015-10-13T10:20:36                                                                   |
> > > > > | description            |                                                                                       |
> > > > > | links                  | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overclou... (self)   |
> > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overclou... (stack)  |
> > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overclou... (nested) |
> > > > > | logical_resource_id    | Compute                                                                               |
> > > > > | physical_resource_id   | e33b6b1e-8740-4ded-ad7f-720617a03393                                                  |
> > > > > | required_by            | ComputeAllNodesDeployment                                                             |
> > > > > |                        | ComputeNodesPostDeployment                                                            |
> > > > > |                        | ComputeCephDeployment                                                                 |
> > > > > |                        | ComputeAllNodesValidationDeployment                                                   |
> > > > > |                        | AllNodesExtraConfig                                                                   |
> > > > > |                        | allNodesConfig                                                                        |
> > > > > | resource_name          | Compute                                                                               |
> > > > > | resource_status        | CREATE_FAILED                                                                         |
> > > > > | resource_status_reason | resources.Compute: ResourceInError:                                                   |
> > > > > |                        | resources[0].resources.NovaCompute: Went to status ERROR due to "Message:            |
> > > > > |                        | No valid host was found. There are not enough hosts available., Code: 500"           |
> > > > > | resource_type          | OS::Heat::ResourceGroup                                                               |
> > > > > | updated_time           | 2015-10-13T10:20:36                                                                   |
> > > > > +------------------------+---------------------------------------------------------------------------------------+
> > > > > This is my instackenv.json for 1 compute and 1 control node to be deployed:
> > > > >
> > > > > {
> > > > >   "nodes": [
> > > > >     {
> > > > >       "pm_type": "pxe_ipmitool",
> > > > >       "mac": [
> > > > >         "08:9E:01:58:CC:A1"
> > > > >       ],
> > > > >       "cpu": "4",
> > > > >       "memory": "8192",
> > > > >       "disk": "10",
> > > > >       "arch": "x86_64",
> > > > >       "pm_user": "root",
> > > > >       "pm_password": "calvin",
> > > > >       "pm_addr": "192.168.0.18"
> > > > >     },
> > > > >     {
> > > > >       "pm_type": "pxe_ipmitool",
> > > > >       "mac": [
> > > > >         "08:9E:01:58:D0:3D"
> > > > >       ],
> > > > >       "cpu": "4",
> > > > >       "memory": "8192",
> > > > >       "disk": "100",
> > > > >       "arch": "x86_64",
> > > > >       "pm_user": "root",
> > > > >       "pm_password": "calvin",
> > > > >       "pm_addr": "192.168.0.19"
> > > > >     }
> > > > >   ]
> > > > > }
> > > > > Any ideas? Thanks in advance
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra(a)tubitak.gov.tr
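Since the thread keeps coming back to node registration and flavor matching, here is a rough
recap of the Liberty-era rdo-manager registration flow for an instackenv.json like the one
above (a sketch from memory, not a verified procedure):

[stack@undercloud ~]$ source stackrc
[stack@undercloud ~]$ openstack baremetal import --json instackenv.json
[stack@undercloud ~]$ openstack baremetal configure boot
[stack@undercloud ~]$ openstack baremetal introspection bulk start
[stack@undercloud ~]$ openstack flavor list    # the deploy flavor must fit the cpu/memory/disk values registered above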
> >
>
>