Hi Sasha,
I have 3 nodes: 1 Undercloud, 1 Overcloud-Controller, 1 Overcloud-Compute.
This is my undercloud.conf file:
image_path = .
local_ip = 192.0.2.1/24
local_interface = em2
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
network_cidr = 192.0.2.0/24
network_gateway = 192.0.2.1
inspection_interface = br-ctlplane
inspection_iprange = 192.0.2.100,192.0.2.120
inspection_runbench = false
undercloud_debug = true
enable_tuskar = false
enable_tempest = false
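(As a side sanity check on a config like the one above: the dhcp_start/dhcp_end pool and the inspection_iprange must not overlap, since deployment DHCP and introspection DHCP are separate pools on the same ctlplane network. A minimal POSIX-sh sketch of that check, using the exact ranges from this file:)

```shell
# Check that two dotted-quad IP ranges are disjoint (pure shell, no deps).
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

ranges_overlap() {  # args: start1 end1 start2 end2
  s1=$(ip2int "$1"); e1=$(ip2int "$2"); s2=$(ip2int "$3"); e2=$(ip2int "$4")
  if [ "$s1" -le "$e2" ] && [ "$s2" -le "$e1" ]; then echo yes; else echo no; fi
}

# dhcp_start/dhcp_end vs inspection_iprange from the config above:
ranges_overlap 192.0.2.5 192.0.2.24 192.0.2.100 192.0.2.120   # prints "no" (good)
```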
IP configuration for the Undercloud is as follows:
[stack@undercloud ~]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 08:9e:01:50:8a:21 brd ff:ff:ff:ff:ff:ff
inet 10.1.34.81/24 brd 10.1.34.255 scope global em1
valid_lft forever preferred_lft forever
inet6 fe80::a9e:1ff:fe50:8a21/64 scope link
valid_lft forever preferred_lft forever
3: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether 9a:a8:0f:ec:42:15 brd ff:ff:ff:ff:ff:ff
5: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 08:9e:01:50:8a:22 brd ff:ff:ff:ff:ff:ff
inet 192.0.2.1/24 brd 192.0.2.255 scope global br-ctlplane
valid_lft forever preferred_lft forever
inet6 fe80::a9e:1ff:fe50:8a22/64 scope link
valid_lft forever preferred_lft forever
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether fa:85:ac:92:f5:41 brd ff:ff:ff:ff:ff:ff
And I attached two screenshots showing the boot stage for the overcloud nodes.
Is there a way to log in to the overcloud nodes to see their IP configuration?
Thanks
Esra ÇELİK
TÜBİTAK BİLGEM
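(One low-tech way to at least see which IPs the undercloud's DHCP handed out, short of a console login, is to read the dnsmasq-style lease file on the undercloud. The exact path varies by setup, so treat it as an assumption; the parsing itself is just awk over the standard lease format `epoch MAC IP hostname client-id`:)

```shell
# Sketch: list MAC -> IP pairs from a dnsmasq-style lease file.
# The real file might live under /var/lib/neutron/dhcp/<net-id>/leases or
# similar on the undercloud -- that path is an assumption, not verified here.
summarise_leases() {
  awk '{ printf "%s -> %s\n", $2, $3 }' "$1"
}

# Demo with a fabricated lease file (MACs are the ones from this thread):
cat > /tmp/leases.sample <<'EOF'
1444900000 08:9e:01:58:cc:a1 192.0.2.101 * *
1444900050 08:9e:01:58:d0:3d 192.0.2.102 * *
EOF
summarise_leases /tmp/leases.sample
```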
From: "Sasha Chuzhoy" <sasha(a)redhat.com>
To: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
Cc: "Marius Cornea" <mcornea(a)redhat.com>, rdo-list(a)redhat.com
Sent: Thursday, October 15, 2015 16:58:41
Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was
found"
Just my 2 cents.
Did you make sure that all the registered nodes are configured to boot off
the right NIC first?
Can you watch the console and see what happens on the problematic nodes upon
boot?
Best regards,
Sasha Chuzhoy.
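(To act on Sasha's NIC suggestion without touching the BIOS, IPMI can force PXE as the next boot device via `ipmitool chassis bootdev pxe`, a standard subcommand. A hedged sketch; the helper below only builds the command string, so nothing is sent to a BMC, and the addresses/credentials are the ones already used in this thread:)

```shell
# Build (not run) the ipmitool invocation that forces PXE on the next boot.
bootdev_cmd() {
  printf 'ipmitool -I lanplus -H %s -U root -P <password> chassis bootdev pxe' "$1"
}

# For both BMCs from this thread:
bootdev_cmd 192.168.0.18; echo
bootdev_cmd 192.168.0.19; echo
```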
----- Original Message -----
> From: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> To: "Marius Cornea" <mcornea(a)redhat.com>
> Cc: rdo-list(a)redhat.com
> Sent: Thursday, October 15, 2015 4:40:46 AM
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> was found"
>
>
> Sorry for the late reply
>
> The ironic node-show results are below. My nodes power on after
> introspection bulk start, and I get the following warning:
> Introspection didn't finish for nodes
> 5b28998f-4dc8-42aa-8a51-521e20b1e5ed,6f35ac24-135d-4b99-8a24-fa2b731bd218
>
> Doesn't seem to be the same issue with
> https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
>
>
>
>
> [stack@undercloud ~]$ ironic node-list
>
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed | None | None          | power on    | available          | False       |
> | 6f35ac24-135d-4b99-8a24-fa2b731bd218 | None | None          | power on    | available          | False       |
> +--------------------------------------+------+---------------+-------------+--------------------+-------------+
>
>
> [stack@undercloud ~]$ ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> +------------------------+-------------------------------------------------------------------------+
> | Property               | Value                                                                   |
> +------------------------+-------------------------------------------------------------------------+
> | target_power_state     | None                                                                    |
> | extra                  | {}                                                                      |
> | last_error             | None                                                                    |
> | updated_at             | 2015-10-15T08:26:42+00:00                                               |
> | maintenance_reason     | None                                                                    |
> | provision_state        | available                                                               |
> | clean_step             | {}                                                                      |
> | uuid                   | 5b28998f-4dc8-42aa-8a51-521e20b1e5ed                                    |
> | console_enabled        | False                                                                   |
> | target_provision_state | None                                                                    |
> | provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
> | maintenance            | False                                                                   |
> | inspection_started_at  | None                                                                    |
> | inspection_finished_at | None                                                                    |
> | power_state            | power on                                                                |
> | driver                 | pxe_ipmitool                                                            |
> | reservation            | None                                                                    |
> | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'10',     |
> |                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
> | instance_uuid          | None                                                                    |
> | name                   | None                                                                    |
> | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.18',         |
> |                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
> |                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-0d88-4632-af98-8defb05ca6e2'} |
> | created_at             | 2015-10-15T07:49:08+00:00                                               |
> | driver_internal_info   | {u'clean_steps': None}                                                  |
> | chassis_uuid           |                                                                         |
> | instance_info          | {}                                                                      |
> +------------------------+-------------------------------------------------------------------------+
>
>
> [stack@undercloud ~]$ ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> +------------------------+-------------------------------------------------------------------------+
> | Property               | Value                                                                   |
> +------------------------+-------------------------------------------------------------------------+
> | target_power_state     | None                                                                    |
> | extra                  | {}                                                                      |
> | last_error             | None                                                                    |
> | updated_at             | 2015-10-15T08:26:42+00:00                                               |
> | maintenance_reason     | None                                                                    |
> | provision_state        | available                                                               |
> | clean_step             | {}                                                                      |
> | uuid                   | 6f35ac24-135d-4b99-8a24-fa2b731bd218                                    |
> | console_enabled        | False                                                                   |
> | target_provision_state | None                                                                    |
> | provision_updated_at   | 2015-10-15T08:26:42+00:00                                               |
> | maintenance            | False                                                                   |
> | inspection_started_at  | None                                                                    |
> | inspection_finished_at | None                                                                    |
> | power_state            | power on                                                                |
> | driver                 | pxe_ipmitool                                                            |
> | reservation            | None                                                                    |
> | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'100',    |
> |                        | u'cpus': u'4', u'capabilities': u'boot_option:local'}                   |
> | instance_uuid          | None                                                                    |
> | name                   | None                                                                    |
> | driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'192.168.0.19',         |
> |                        | u'ipmi_username': u'root', u'deploy_kernel': u'49a2c8d4-a283-4bdf-8d6f- |
> |                        | e83ae28da047', u'deploy_ramdisk': u'3db3dbed-0d88-4632-af98-8defb05ca6e2'} |
> | created_at             | 2015-10-15T07:49:08+00:00                                               |
> | driver_internal_info   | {u'clean_steps': None}                                                  |
> | chassis_uuid           |                                                                         |
> | instance_info          | {}                                                                      |
> +------------------------+-------------------------------------------------------------------------+
> [stack@undercloud ~]$
>
>
>
>
>
>
>
>
>
> And below I added my shell history for the stack user. I don't think I am
> doing anything different from the
> https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty
> docs
>
>
>
>
>
>
>
> 1 vi instackenv.json
> 2 sudo yum -y install epel-release
> 3 sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7-liberty/current-passed-ci/delorean.repo
> 4 sudo curl -o /etc/yum.repos.d/delorean-current.repo http://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
> 5 sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo
> 6 sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo
> includepkgs=diskimage-builder,openstack-heat,instack,instack-undercloud,openstack-ironic,openstack-ironic-inspector,os-cloud-config,os-net-config,python-ironic-inspector-client,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tuskar-ui-extras,openstack-puppet-modules
> EOF"
> 7 sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
> 8 sudo yum -y install yum-plugin-priorities
> 9 sudo yum install -y python-tripleoclient
> 10 cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
> 11 vi undercloud.conf
> 12 export DIB_INSTALLTYPE_puppet_modules=source
> 13 openstack undercloud install
> 14 source stackrc
> 15 export NODE_DIST=centos7
> 16 export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"
> 17 export DIB_INSTALLTYPE_puppet_modules=source
> 18 openstack overcloud image build --all
> 19 ls
> 20 openstack overcloud image upload
> 21 openstack baremetal import --json instackenv.json
> 22 openstack baremetal configure boot
> 23 ironic node-list
> 24 openstack baremetal introspection bulk start
> 25 ironic node-list
> 26 ironic node-show 5b28998f-4dc8-42aa-8a51-521e20b1e5ed
> 27 ironic node-show 6f35ac24-135d-4b99-8a24-fa2b731bd218
> 28 history
>
>
>
>
>
>
>
> Thanks
>
>
>
> Esra ÇELİK
> TÜBİTAK BİLGEM
>
> www.bilgem.tubitak.gov.tr
> celik.esra(a)tubitak.gov.tr
>
>
> From: "Marius Cornea" <mcornea(a)redhat.com>
> To: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
> Sent: Wednesday, October 14, 2015 19:40:07
> Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was
> found"
>
> Can you do ironic node-show for your ironic nodes and post the results?
> Also
> check the following suggestion if you're experiencing the same issue:
> https://www.redhat.com/archives/rdo-list/2015-October/msg00174.html
>
> ----- Original Message -----
> > From: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> > To: "Marius Cornea" <mcornea(a)redhat.com>
> > Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
> > Sent: Wednesday, October 14, 2015 3:22:20 PM
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host
> > was found"
> >
> >
> >
> > Well, in the early stage of the introspection I can see the Client IP of
> > the nodes (screenshot attached). But then I see continuous
> > ironic-python-agent errors (screenshot-2 attached). The errors repeat
> > after each timeout, and the nodes are not powered off.
> >
> > Seems like I am stuck in the introspection stage.
> >
> > I can use the ipmitool command to successfully power the nodes on/off:
> >
> >
> >
> > [stack@undercloud ~]$ ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -P <password> power status
> > Chassis Power is on
> >
> >
> > [stack@undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P
> > <password> chassis power status
> > Chassis Power is on
> > [stack@undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P
> > <password> chassis power off
> > Chassis Power Control: Down/Off
> > [stack@undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P
> > <password> chassis power status
> > Chassis Power is off
> > [stack@undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P
> > <password> chassis power on
> > Chassis Power Control: Up/On
> > [stack@undercloud ~]$ ipmitool -H 192.168.0.18 -I lanplus -U root -P
> > <password> chassis power status
> > Chassis Power is on
> >
> >
> > Esra ÇELİK
> > TÜBİTAK BİLGEM
> >
> > www.bilgem.tubitak.gov.tr
> > celik.esra(a)tubitak.gov.tr
> >
> >
> > ----- Original Message -----
> >
> > From: "Marius Cornea" <mcornea(a)redhat.com>
> > To: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> > Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
> > Sent: Wednesday, October 14, 2015 14:59:30
> > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was
> > found"
> >
> >
> > ----- Original Message -----
> > > From: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> > > To: "Marius Cornea" <mcornea(a)redhat.com>
> > > Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
> > > Sent: Wednesday, October 14, 2015 10:49:01 AM
> > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > host
> > > was found"
> > >
> > >
> > > Well, today I started by re-installing the OS, and nothing seemed wrong
> > > with the undercloud installation. Then:
> > >
> > >
> > >
> > >
> > >
> > >
> > > I see an error during image build
> > >
> > >
> > > [stack@undercloud ~]$ openstack overcloud image build --all
> > > ...
> > > a lot of log
> > > ...
> > > ++ cat /etc/dib_dracut_drivers
> > > + dracut -N --install ' curl partprobe lsblk targetcli tail head awk ifconfig
> > > cut expr route ping nc wget tftp grep' --kernel-cmdline 'rd.shell rd.debug
> > > rd.neednet=1 rd.driver.pre=ahci' --include /var/tmp/image.YVhwuArQ/mnt/ /
> > > --kver 3.10.0-229.14.1.el7.x86_64 --add-drivers ' virtio virtio_net
> > > virtio_blk target_core_mod iscsi_target_mod target_core_iblock
> > > target_core_file target_core_pscsi configfs' -o 'dash plymouth'
> > > /tmp/ramdisk
> > > cat: write error: Broken pipe
> > > + cp /boot/vmlinuz-3.10.0-229.14.1.el7.x86_64 /tmp/kernel
> > > + chmod o+r /tmp/kernel
> > > + trap EXIT
> > > + target_tag=99-build-dracut-ramdisk
> > > + date +%s.%N
> > > + output '99-build-dracut-ramdisk completed'
> > > ...
> > > a lot of log
> > > ...
> >
> > You can ignore that AFAIK; if you end up having all the required images it
> > should be OK.
> >
> > >
> > > Then, during introspection stage I see ironic-python-agent errors on
> > > nodes
> > > (screenshot attached) and the following warnings
> > >
> >
> > That looks odd. Is it showing up in the early stage of the introspection?
> > At some point it should receive an address by DHCP and the "Network is
> > unreachable" error should disappear. Does the introspection complete, and
> > are the nodes turned off afterwards?
> >
> > >
> > >
> > > [root@localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119
> > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ]
> > > Option "http_url" from group "pxe" is deprecated. Use option "http_url"
> > > from group "deploy".
> > > Oct 14 10:30:12 undercloud.rdo ironic-conductor[619]: 2015-10-14 10:30:12.119
> > > 619 WARNING oslo_config.cfg [req-eccf8cb5-6e93-4d8f-9a05-0e8c8d2aab7b ]
> > > Option "http_root" from group "pxe" is deprecated. Use option "http_root"
> > > from group "deploy".
> > >
> > >
> > > Before deployment ironic node-list:
> > >
> >
> > This is odd too, as I'm expecting the nodes to be powered off before
> > running the deployment.
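(A quick way to spot this from saved `ironic node-list` output is to count rows whose Power State column still says "power on". A minimal grep sketch against a fabricated sample of that table:)

```shell
# Count nodes whose Power State column still reports "power on".
count_powered_on() {
  grep -c '| power on ' "$1"
}

# Fabricated sample rows in ironic node-list layout (UUIDs from this thread):
cat > /tmp/node-list.sample <<'EOF'
| acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None | power on  | available | False |
| b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None | power off | available | False |
EOF
count_powered_on /tmp/node-list.sample   # prints 1
```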
> >
> > >
> > >
> > > [stack@undercloud ~]$ ironic node-list
> > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > | UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
> > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > > | acfc1bb4-469d-479a-af70-c0bdd669b32d | None | None          | power on    | available          | False       |
> > > | b5811c06-d5d1-41f1-87b3-2fd55ae63553 | None | None          | power on    | available          | False       |
> > > +--------------------------------------+------+---------------+-------------+--------------------+-------------+
> > >
> > > During deployment I get the following errors:
> > >
> > > [root@localhost ~]# journalctl -fl -u openstack-ironic-conductor.service | grep -i "warning\|error"
> > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> > > 619 ERROR ironic.drivers.modules.ipmitool [-] IPMI Error while attempting
> > > "ipmitool -I lanplus -H 192.168.0.19 -L ADMINISTRATOR -U root -R 3 -N 5 -f
> > > /tmp/tmpSCKHIv power status" for node b5811c06-d5d1-41f1-87b3-2fd55ae63553.
> > > Error: Unexpected error while running command.
> > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.739
> > > 619 WARNING ironic.drivers.modules.ipmitool [-] IPMI power status failed for
> > > node b5811c06-d5d1-41f1-87b3-2fd55ae63553 with error: Unexpected error while
> > > running command.
> > > Oct 14 11:29:01 undercloud.rdo ironic-conductor[619]: 2015-10-14 11:29:01.740
> > > 619 WARNING ironic.conductor.manager [-] During sync_power_state, could not
> > > get power state for node b5811c06-d5d1-41f1-87b3-2fd55ae63553, attempt 1 of
> > > 3. Error: IPMI call failed: power status.
> > >
> >
> > This looks like an IPMI error. Can you try to manually run commands using
> > ipmitool and see if you have any success? It's also worth filing a bug
> > with details such as the ipmitool version, server model, and DRAC firmware
> > version.
> >
> > >
> > >
> > >
> > >
> > >
> > > Thanks a lot
> > >
> > >
> > >
> > > ----- Original Message -----
> > >
> > > From: "Marius Cornea" <mcornea(a)redhat.com>
> > > To: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> > > Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
> > > Sent: Tuesday, October 13, 2015 21:16:14
> > > Subject: Re: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No
> > > valid host was found"
> > >
> > >
> > > ----- Original Message -----
> > > > From: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> > > > To: "Marius Cornea" <mcornea(a)redhat.com>
> > > > Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
> > > > Sent: Tuesday, October 13, 2015 5:02:09 PM
> > > > Subject: Yan: Re: [Rdo-list] OverCloud deploy fails with error "No valid
> > > > host was found"
> > > >
> > > > During deployment they are powering on and deploying the images. I see
> > > > a lot of connection error messages about ironic-python-agent, but I
> > > > ignore them as mentioned here
> > > > (https://www.redhat.com/archives/rdo-list/2015-October/msg00052.html)
> > >
> > > That was referring to the introspection stage. From what I can tell you
> > > are experiencing issues during deployment, as it fails to provision the
> > > nova instances. Can you check whether the nodes get powered on during
> > > that stage?
> > >
> > > Make sure that before the overcloud deploy the ironic nodes are available
> > > for provisioning (run ironic node-list and check the provisioning state
> > > column). Also check that you didn't miss any step in the docs in regards
> > > to kernel and ramdisk assignment, introspection, and flavor creation (so
> > > it matches the nodes' resources):
> > > https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty...
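(The flavor-vs-node check Marius mentions is worth making explicit: Nova schedules onto a baremetal node only if the flavor's vcpus, RAM, and disk all fit within the node's properties. A hedged sketch of that comparison; the 1 vCPU / 4096 MB / 40 GB flavor values below are an assumption, while the node values are the ones ironic node-show reported earlier in this thread, including the node with local_gb=10:)

```shell
# Does a flavor fit a node? All three resources must fit.
flavor_fits_node() {  # args: f_vcpus f_ram_mb f_disk_gb n_cpus n_ram_mb n_disk_gb
  if [ "$1" -le "$4" ] && [ "$2" -le "$5" ] && [ "$3" -le "$6" ]; then
    echo fits
  else
    echo "does not fit"
  fi
}

# Assumed flavor (1 vCPU / 4096 MB / 40 GB) vs the two nodes in this thread
# (local_gb 10 and 100 respectively):
flavor_fits_node 1 4096 40 4 8192 10    # prints "does not fit"
flavor_fits_node 1 4096 40 4 8192 100   # prints "fits"
```

If the flavor's disk really is larger than a node's local_gb, that node can never satisfy the request, which would produce exactly the "No valid host was found" error from this thread.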
> > >
> > >
> > > > In the instackenv.json file I do not need to add the undercloud node,
> > > > do I?
> > >
> > > No, the nodes' details should be enough.
> > >
> > > > And which log files should I watch during deployment?
> > >
> > > You can check the openstack-ironic-conductor logs(journalctl -fl -u
> > > openstack-ironic-conductor.service) and the logs in /var/log/nova.
> > >
> > > > Thanks
> > > > Esra
> > > >
> > > >
> > > > ----- Original Message -----
> > > > From: "Marius Cornea" <mcornea(a)redhat.com>
> > > > To: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> > > > Cc: "Ignacio Bravo" <ibravo(a)ltgfederal.com>, rdo-list(a)redhat.com
> > > > Sent: Tue, 13 Oct 2015 17:25:00 +0300 (EEST)
> > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > >
> > > > ----- Original Message -----
> > > > > From: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> > > > > To: "Ignacio Bravo" <ibravo(a)ltgfederal.com>
> > > > > Cc: rdo-list(a)redhat.com
> > > > > Sent: Tuesday, October 13, 2015 3:47:57 PM
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Actually I re-installed the OS for the Undercloud before deploying.
> > > > > However I did not re-install the OS on the Compute and Controller
> > > > > nodes. I will reinstall the basic OS for them too, and retry.
> > > >
> > > > You don't need to reinstall the OS on the controller and compute; they
> > > > will get the image served by the undercloud. I'd recommend that during
> > > > deployment you watch the servers' console and make sure they get powered
> > > > on, PXE boot, and actually get the image deployed.
> > > >
> > > > Thanks
> > > >
> > > > > Thanks
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra(a)tubitak.gov.tr
> > > > >
> > > > > From: "Ignacio Bravo" <ibravo(a)ltgfederal.com>
> > > > > To: "Esra Celik" <celik.esra(a)tubitak.gov.tr>
> > > > > Cc: rdo-list(a)redhat.com
> > > > > Sent: Tuesday, October 13, 2015 16:36:06
> > > > > Subject: Re: [Rdo-list] OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > Esra,
> > > > >
> > > > > I encountered the same problem after deleting the stack and
> > > > > re-deploying.
> > > > >
> > > > > It turns out that 'heat stack-delete overcloud' does remove the nodes
> > > > > from 'nova list', and one would assume that the baremetal servers are
> > > > > now ready to be used for the next stack, but when redeploying, I get
> > > > > the same message of not enough hosts available.
> > > > >
> > > > > You can look into the nova logs and it mentions something about 'node
> > > > > xxx is already associated with UUID yyyy' and 'I tried 3 times and I'm
> > > > > giving up'. The issue is that the UUID yyyy belonged to a prior
> > > > > unsuccessful deployment.
> > > > >
> > > > > I'm now redeploying the basic OS to start from scratch again.
> > > > >
> > > > > IB
> > > > >
> > > > > __
> > > > > Ignacio Bravo
> > > > > LTG Federal, Inc
> > > > > www.ltgfederal.com
> > > > > Office: (703) 951-7760
> > > > >
> > > > > On Oct 13, 2015, at 9:25 AM, Esra Celik < celik.esra(a)tubitak.gov.tr > wrote:
> > > > >
> > > > > Hi all,
> > > > >
> > > > > OverCloud deploy fails with error "No valid host was found"
> > > > >
> > > > > [stack@undercloud ~]$ openstack overcloud deploy --templates
> > > > > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
> > > > > Stack failed with status: Resource CREATE failed: resources.Compute:
> > > > > ResourceInError: resources[0].resources.NovaCompute: Went to status
> > > > > ERROR due to "Message: No valid host was found. There are not enough
> > > > > hosts available., Code: 500"
> > > > > Heat Stack create failed.
> > > > >
> > > > > Here are some logs:
> > > > >
> > > > > Every 2.0s: heat resource-list -n 5 overcloud | grep -v COMPLETE   Tue Oct 13 16:18:17 2015
> > > > >
> > > > > | resource_name | physical_resource_id                 | resource_type           | resource_status    | updated_time        | stack_name                                       |
> > > > > | Compute       | e33b6b1e-8740-4ded-ad7f-720617a03393 | OS::Heat::ResourceGroup | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > > | Controller    | 116c57ff-debb-4c12-92e1-e4163b67dc17 | OS::Heat::ResourceGroup | CREATE_FAILED      | 2015-10-13T10:20:36 | overcloud                                        |
> > > > > | 0             | 342a9023-de8f-4b5b-b3ec-498d99b56dc4 | OS::TripleO::Controller | CREATE_IN_PROGRESS | 2015-10-13T10:20:52 | overcloud-Controller-45bbw24xxhxs                |
> > > > > | 0             | e420a7bd-86f8-4cc1-b6a0-5ba8d1412453 | OS::TripleO::Compute    | CREATE_FAILED      | 2015-10-13T10:20:54 | overcloud-Compute-vqk632ysg64r                   |
> > > > > | Controller    | 2e9ac712-0566-49b5-958f-c3e151bb24d7 | OS::Nova::Server        | CREATE_IN_PROGRESS | 2015-10-13T10:20:54 | overcloud-Controller-45bbw24xxhxs-0-3vyhjiak2rsk |
> > > > > | NovaCompute   | 96efee56-81cb-46af-beef-84f4a3af761a | OS::Nova::Server        | CREATE_FAILED      | 2015-10-13T10:20:56 | overcloud-Compute-vqk632ysg64r-0-32nalzkofmef    |
> > > > >
> > > > > [stack@undercloud ~]$ heat resource-show overcloud Compute
> > > > > | Property               | Value                                                                 |
> > > > > | attributes             | { "attributes": null, "refs": null }                                  |
> > > > > | creation_time          | 2015-10-13T10:20:36                                                   |
> > > > > | description            |                                                                       |
> > > > > | links                  | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overclou... (self) |
> > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overclou... (stack) |
> > > > > |                        | http://192.0.2.1:8004/v1/3c5942d315bc445c8f1f7b32bd445de5/stacks/overclou... (nested) |
> > > > > | logical_resource_id    | Compute                                                               |
> > > > > | physical_resource_id   | e33b6b1e-8740-4ded-ad7f-720617a03393                                  |
> > > > > | required_by            | ComputeAllNodesDeployment                                             |
> > > > > |                        | ComputeNodesPostDeployment                                            |
> > > > > |                        | ComputeCephDeployment                                                 |
> > > > > |                        | ComputeAllNodesValidationDeployment                                   |
> > > > > |                        | AllNodesExtraConfig                                                   |
> > > > > |                        | allNodesConfig                                                        |
> > > > > | resource_name          | Compute                                                               |
> > > > > | resource_status        | CREATE_FAILED                                                         |
> > > > > | resource_status_reason | resources.Compute: ResourceInError:                                   |
> > > > > |                        | resources[0].resources.NovaCompute: Went to status ERROR due to       |
> > > > > |                        | "Message: No valid host was found. There are not enough hosts         |
> > > > > |                        | available., Code: 500"                                                |
> > > > > | resource_type          | OS::Heat::ResourceGroup                                               |
> > > > > | updated_time           | 2015-10-13T10:20:36                                                   |
> > > > >
> > > > > This is my instackenv.json for 1 compute and 1 control node to be deployed.
> > > > >
> > > > > {
> > > > >   "nodes": [
> > > > >     {
> > > > >       "pm_type": "pxe_ipmitool",
> > > > >       "mac": [ "08:9E:01:58:CC:A1" ],
> > > > >       "cpu": "4",
> > > > >       "memory": "8192",
> > > > >       "disk": "10",
> > > > >       "arch": "x86_64",
> > > > >       "pm_user": "root",
> > > > >       "pm_password": "calvin",
> > > > >       "pm_addr": "192.168.0.18"
> > > > >     },
> > > > >     {
> > > > >       "pm_type": "pxe_ipmitool",
> > > > >       "mac": [ "08:9E:01:58:D0:3D" ],
> > > > >       "cpu": "4",
> > > > >       "memory": "8192",
> > > > >       "disk": "100",
> > > > >       "arch": "x86_64",
> > > > >       "pm_user": "root",
> > > > >       "pm_password": "calvin",
> > > > >       "pm_addr": "192.168.0.19"
> > > > >     }
> > > > >   ]
> > > > > }
> > > > >
> > > > > Any ideas? Thanks in advance
> > > > >
> > > > > Esra ÇELİK
> > > > > TÜBİTAK BİLGEM
> > > > > www.bilgem.tubitak.gov.tr
> > > > > celik.esra(a)tubitak.gov.tr
> > > > >
> > > > > _______________________________________________
> > > > > Rdo-list mailing list
> > > > > Rdo-list(a)redhat.com
> > > > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > > >
> > > > > To unsubscribe: rdo-list-unsubscribe(a)redhat.com
> > > >
> > >
> > >
> >
> >
>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list(a)redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe(a)redhat.com