TripleO Deployment in KVM Virtual Lab
by Pradeep Antil
Hi Folks,
I am trying to deploy TripleO in my KVM virtual environment. While
importing the nodes in the undercloud (director), I am getting the
following error:
(undercloud) [stack@director ~]$ openstack baremetal import --json
antil/instak.json
This command is deprecated. Please use "openstack overcloud node import" to
register nodes instead.
Started Mistral Workflow tripleo.baremetal.v1.register_or_update. Execution
ID: dff34178-905a-4865-9895-69d1d371d888
Waiting for messages on queue 'tripleo' with no timeout.
*Invalid node data: unknown pm_type (ironic driver to use): pxe_ssh*
{u'status': u'FAILED', u'message': u'Invalid node data: unknown pm_type
(ironic driver to use): pxe_ssh', u'result': None}
Exception registering nodes: {u'status': u'FAILED', u'message': u'Invalid
node data: unknown pm_type (ironic driver to use): pxe_ssh', u'result':
None}
(undercloud) [stack@director ~]$
It seems like the pxe_ssh Ironic driver is not enabled. Can anyone tell me
how to enable this driver? My deployment is stuck because of this error.
*Below is my setup:*
- KVM hypervisor = CentOS 7 (152 GB RAM, 72 CPUs and 1 TB disk), with
nested virtualization enabled
- Director (undercloud) VM = CentOS 7 (40 GB RAM, 12 vCPUs and 200 GB HDD)
- Three other VMs created for the overcloud: compute1, compute2 and
controller
- Two VLANs are used - External (NAT) and Provision (host-only network)
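For what it's worth, the pxe_ssh driver was removed from Ironic in recent releases, so newer TripleO versions reject it rather than it simply being "not enabled". The usual replacement in KVM labs is VirtualBMC, which exposes an emulated IPMI endpoint per VM that the pxe_ipmitool driver can talk to. A hedged sketch follows; all names, addresses, ports and MACs are examples, not values from this environment:

```shell
# Sketch only: the pxe_ssh driver is gone from newer Ironic releases, so a
# KVM lab typically emulates IPMI with VirtualBMC instead.
#
# On the hypervisor (assumed commands; the vbmc CLI comes from the
# python-virtualbmc package):
#   vbmc add compute1 --port 6231 --username admin --password password
#   vbmc start compute1
#
# instackenv.json then points Ironic at the emulated BMC instead of pxe_ssh.
# Every value below is an illustrative placeholder:
cat > instackenv.json <<'EOF'
{
  "nodes": [
    {
      "name": "compute1",
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "password",
      "pm_addr": "192.168.122.1",
      "pm_port": "6231",
      "mac": ["52:54:00:aa:bb:cc"],
      "cpu": "4",
      "memory": "8192",
      "disk": "40",
      "arch": "x86_64"
    }
  ]
}
EOF

# Sanity-check that the file is valid JSON before importing it:
python3 -m json.tool instackenv.json > /dev/null && echo "instackenv.json OK"
```

After registering the BMC endpoints this way, `openstack overcloud node import instackenv.json` should accept the nodes, since pxe_ipmitool is a driver the undercloud still ships.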
Thanks in advance !!!!
--
Best Regards
Pradeep Kumar
6 years, 10 months
Rolling updates and reboots: Patching kernels with recent CPU CVEs
by David Moreau Simard
Hi,
The updated CentOS kernel packages with fixes for the recent CPU CVEs have
been made available and as such we will proceed with updating all the
servers in RDO's infrastructure.
There might be brief moments of unavailability as certain servers reboot.
Please reach out to us in #rdo if you notice any problems.
Thanks,
David Moreau Simard
Senior Software Engineer | OpenStack RDO
dmsimard = [irc, github, twitter]

ppc64le and erlang-sd_notify
by Tony Breeds
Hi All,
When trying to install/test RDO on a ppc64le system we hit an issue
that with
---
Error: Package: erlang-sd_notify-0.1-9.el7.ppc64le (delorean-queens-testing)
Requires: erlang(erl_nif_version) = 2.11
Available: erlang-erts-R16B-03.18.el7.ppc64le (epel)
erlang(erl_nif_version) = 2.4
Available: erlang-erts-18.3.4.5-4.el7.ppc64le (delorean-queens-testing)
erlang(erl_nif_version) = 2.10
Error: Package: erlang-sd_notify-0.1-9.el7.ppc64le (delorean-queens-testing)
Requires: erlang(erl_nif_version) = 2.11
Available: erlang-erts-R16B-03.18.el7.ppc64le (epel)
erlang(erl_nif_version) = 2.4
Installing: erlang-erts-18.3.4.5-4.el7.ppc64le (delorean-queens-testing)
erlang(erl_nif_version) = 2.10
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
---
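To make the failure above concrete: yum needs a provider of `erlang(erl_nif_version) = 2.11`, but neither available erts build provides that version. A minimal sketch of that matching step, using only the versions from the error output:

```python
# Data taken from the yum error output above: erlang-sd_notify requires
# erl_nif_version 2.11, but the available erlang-erts builds provide 2.4
# and 2.10 respectively.
requires = "2.11"
available = {
    "erlang-erts-R16B-03.18.el7 (epel)": "2.4",
    "erlang-erts-18.3.4.5-4.el7 (delorean-queens-testing)": "2.10",
}

# Exact-version match, as the "=" in the Requires: line demands.
matches = [pkg for pkg, ver in available.items() if ver == requires]
print(matches)  # → [] : no provider satisfies the requirement
```

With no provider in any enabled repo, depsolving fails exactly as shown, which is why rebuilding erlang-sd_notify against a consistent nif_version fixes it.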
It seems that, at the time erlang-sd_notify was built, there was a
divergence in the nif_version provided on ppc64le. You can see that here:
erlang-sd_notify-0.1-9.el7.aarch64.rpm erlang(erl_nif_version) = 2.10
erlang-sd_notify-0.1-9.el7.ppc64le.rpm erlang(erl_nif_version) = 2.11
erlang-sd_notify-0.1-9.el7.x86_64.rpm erlang(erl_nif_version) = 2.10
I've rebuilt erlang-sd_notify (https://cbs.centos.org/koji/taskinfo?taskID=270238)
and now the nif_version is consistent:
erlang-sd_notify-0.1-10.el7.aarch64.rpm erlang(erl_nif_version) = 2.10
erlang-sd_notify-0.1-10.el7.ppc64le.rpm erlang(erl_nif_version) = 2.10
erlang-sd_notify-0.1-10.el7.x86_64.rpm erlang(erl_nif_version) = 2.10
Here's where I continue to be confused about how I get this into
queens-testing.
I think the right thing to do is to update the
cloud7-openstack-queens-testing version in deps.yaml, which I think I've
done with https://review.rdoproject.org/r/11040
So, am I on the right track to fix this? If not, let me know where I've
gone wrong.
Yours Tony.
[infra][outage] Nodepool outage on review.rdoproject.org, December 2
by Javier Pena
Hi all,
We had another nodepool outage this morning. Around 9:00 UTC, amoralej noticed that no new jobs were being processed. He restarted nodepool, and I helped him later with some stale node cleanup. Nodepool started creating VMs successfully around 10:00 UTC.
On a first look at the logs, we see no new messages after 7:30 (not even DEBUG logs), but I was unable to run more troubleshooting steps because the service was already restarted.
We will go through the logs on Monday to investigate what happened during the outage.
Regards,
Javier