Re: [rdo-list] [Arm-dev] next steps for altarch with ceph and openstack
by Marcin Juszkiewicz
On 05.07.2016 at 16:28, Jim Perrin wrote:
> With the work that Marcin has done for openstack on aarch64, as well
> as the ceph builds, we need to begin working a plan for publishing
> these builds to a mirror. I'd like to identify the steps needed, and
> the ownership for those steps.
>
> Thomas please correct/update these as needed
Added a few people to Cc: from the pre-RH Summit threads.
> 1. new tag in cbs for aarch64 build, with modified dist for mitaka.
> Will ceph need this as well?
> 2. If we wish to re-use the centos-release-openstack-mitaka noarch,
> dependencies of centos-release-ceph-hammer, centos-release-qemu-ev,
> centos-release-storage-common, and centos-release-virt-common need
> to be addressed.
It would be good to populate all those repositories. I opened bugs for
all tags:
https://bugs.centos.org/view.php?id=11084 (openstack)
https://bugs.centos.org/view.php?id=11085 (ceph)
https://bugs.centos.org/view.php?id=11086 (virt/kvm-common)
I looked at the requirements of the OpenStack Mitaka packages, and it turns
out that none of them require ceph directly. But I could be wrong.
The only requirement comes from the centos-release-openstack-mitaka package,
which depends on the centos-release-ceph one.
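A quick way to double-check this (just a sketch, assuming yum-utils and the
Mitaka repos are available on the build host):
$ repoquery --requires centos-release-openstack-mitaka
# should list centos-release-ceph-hammer among the dependencies
$ repoquery --requires openstack-nova
# the openstack-* packages themselves should not pull in ceph directly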
> Alternately we could create a
> centos-release-openstack-mitaka-experimental for now if it can work
> stand-alone.
> 3. Mirror locations on buildlogs and mirrors need to be created and
> tested.
> Possible adjustments to existing -release files due to the /altarch/
> vs centos path differences.
centos-release-* files would need changes to cover altarch paths.
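For example, the repo file shipped by centos-release-openstack-mitaka would
need an aarch64 variant along these lines (the altarch path here is only
illustrative, not confirmed):
# /etc/yum.repos.d/CentOS-OpenStack-mitaka.repo (aarch64 sketch)
[centos-openstack-mitaka]
name=CentOS-7 - OpenStack mitaka
baseurl=http://mirror.centos.org/altarch/$releasever/cloud/$basearch/openstack-mitaka/
gpgcheck=1
enabled=1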
> 4. Package signing. This should already work with existing infra, but
> need to test and validate.
[rdo-list] Unanswered 'RDO' questions on ask.openstack.org
by Rich Bowen
41 unanswered questions:
RDO all in one memory usage and slowness issue
https://ask.openstack.org/en/question/94279/rdo-all-in-one-memory-usage-a...
Tags: slowrdo
How to set quota for domain and have it shared with all the
projects/tenants in domain
https://ask.openstack.org/en/question/94105/how-to-set-quota-for-domain-a...
Tags: domainquotadriver
rdo tripleO liberty undercloud install failing
https://ask.openstack.org/en/question/94023/rdo-tripleo-liberty-underclou...
Tags: rdo, rdo-manager, liberty, undercloud, instack
Are following links considered by RH as official guide lines in meantime
for TripleO instack-virt-setup RDO Liberty Stable?
https://ask.openstack.org/en/question/93751/are-following-links-considere...
Tags: rdo, tripleo, instack
Add new compute node for TripleO deployment in virtual environment
https://ask.openstack.org/en/question/93703/add-new-compute-node-for-trip...
Tags: compute, tripleo, liberty, virtual, baremetal
Unable to start Ceilometer services
https://ask.openstack.org/en/question/93600/unable-to-start-ceilometer-se...
Tags: ceilometer, ceilometer-api
Adding hard drive space to RDO installation
https://ask.openstack.org/en/question/93412/adding-hard-drive-space-to-rd...
Tags: cinder, openstack, space, add
AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack
https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-i...
Tags: openstack, networking, aws
ceilometer: I've installed openstack mitaka. but swift stops working
when i configured the pipeline and ceilometer filter
https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-open...
Tags: ceilometer, openstack-swift, mitaka
Fail on installing the controller on Cent OS 7
https://ask.openstack.org/en/question/92025/fail-on-installing-the-contro...
Tags: installation, centos7, controller
the error of service entity and API endpoints
https://ask.openstack.org/en/question/91702/the-error-of-service-entity-a...
Tags: service, entity, and, api, endpoints
Running delorean fails: Git won't fetch sources
https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wo...
Tags: delorean, rdo
Liberty RDO: stack resource topology icons are pink
https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-to...
Tags: stack, resource, topology, dashboard
Build of instance aborted: Block Device Mapping is Invalid.
https://ask.openstack.org/en/question/91205/build-of-instance-aborted-blo...
Tags: cinder, lvm, centos7
No handlers could be found for logger "oslo_config.cfg" while syncing
the glance database
https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-fo...
Tags: liberty, glance, install-openstack
how to use chef auto manage openstack in RDO?
https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-o...
Tags: chef, rdo
Separate Cinder storage traffic from management
https://ask.openstack.org/en/question/90405/separate-cinder-storage-traff...
Tags: cinder, separate, nic, iscsi
Openstack installation fails using packstack, failure is in installation
of openstack-nova-compute. Error: Dependency Package[nova-compute] has
failures
https://ask.openstack.org/en/question/88993/openstack-installation-fails-...
Tags: novacompute, rdo, packstack, dependency, failure
CentOS OpenStack - compute node can't talk
https://ask.openstack.org/en/question/88989/centos-openstack-compute-node...
Tags: rdo
How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on
RDO Liberty ?
https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node...
Tags: rdo, liberty, swift, ha
VM and container can't download anything from internet
https://ask.openstack.org/en/question/88338/vm-and-container-cant-downloa...
Tags: rdo, neutron, network, connectivity
Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/
https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-...
Tags: keyboard, map, keymap, vncproxy, novnc
OpenStack-Docker driver failed
https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/
Tags: docker, openstack, liberty
Can't create volume with cinder
https://ask.openstack.org/en/question/86670/cant-create-volume-with-cinder/
Tags: cinder, glusterfs, nfs
Sahara SSHException: Error reading SSH protocol banner
https://ask.openstack.org/en/question/84710/sahara-sshexception-error-rea...
Tags: sahara, icehouse, ssh, vanila
Error Sahara create cluster: 'Error attach volume to instance
https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-e...
Tags: sahara, attach-volume, vanila, icehouse
Creating Sahara cluster: Error attach volume to instance
https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error...
Tags: sahara, attach-volume, hadoop, icehouse, vanilla
Routing between two tenants
https://ask.openstack.org/en/question/84645/routing-between-two-tenants/
Tags: kilo, fuel, rdo, routing
redhat RDO enable access to swift via S3
https://ask.openstack.org/en/question/83607/redhat-rdo-enable-access-to-s...
Tags: swift, s3
--
Rich Bowen - rbowen(a)redhat.com
RDO Community Liaison
http://rdocommunity.org
@RDOCommunity
[rdo-list] RDO AiO on Fedora 24
by Daniel Messer
Hi rdo-list,
I was wondering what the best way is to get an all-in-one install of RDO
Mitaka stood up on Fedora 24. I am looking to set up RDO on my workstation
as an automation interface instead of plain libvirt, so it will stay there
for a while.
So far I have pursued the following routes:
1. packstack - currently appears broken on Fedora because there is
no CI/CD testing for it (it only exists for RHEL/CentOS) - the answer file
is interpreted differently (CONFIG_AMQP_ENABLE_SSL=y is needed for proper
self-signed certificates, there are errors because of puppet4
incompatibilities, etc.); a rough sketch of this route follows the list.
2. openstack-ansible - quite elaborate and not a true AiO install - it
always seems to come with the 3-way galera cluster, which does not easily
survive a system reboot.
3. openstack-kolla - looks promising, but apparently not currently supported
on anything newer than F22 because of supermin deficiencies with
compressed kernel modules in the CentOS containers.
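For reference, the packstack route was roughly the following (only a sketch;
the answer-file setting is the one mentioned above, and this is what runs
into the puppet4 issues on Fedora):
$ sudo dnf install -y openstack-packstack   # assuming the package is available on F24
$ packstack --gen-answer-file=aio-answers.txt
# edit aio-answers.txt and set CONFIG_AMQP_ENABLE_SSL=y
$ packstack --answer-file=aio-answers.txt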
Any thoughts?
Regards,
Daniel
[rdo-list] Testing 3xNode Controller (HA) + 2xCompute as of 07/02/16
by Boris Derzhavets
************************
Status on 07/02/16
************************
Overcloud deployment :-
Error as of 06/30/16 is back again
CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
Overcloud deployment crashes again
***********************
Status on 07/01/16
***********************
Template for deployment by QuickStart
# Deploy an HA openstack environment.
#
control_memory: 6144
compute_memory: 6144
undercloud_memory: 8192
# Giving the undercloud additional CPUs can greatly improve heat's
# performance (and result in a shorter deploy time).
undercloud_vcpu: 4
# Create three controller nodes and two compute nodes.
overcloud_nodes:
  - name: control_0
    flavor: control
  - name: control_1
    flavor: control
  - name: control_2
    flavor: control
  - name: compute_0
    flavor: compute
  - name: compute_1
    flavor: compute
# We don't need introspection in a virtual environment (because we are
# creating all the "hardware", we already know the necessary
# information).
introspect: false
# Tell tripleo about our environment.
network_isolation: true
extra_args: >-
  --control-scale 3 --compute-scale 2 --neutron-network-type vxlan
  --neutron-tunnel-types vxlan
  --ntp-server pool.ntp.org
test_tempest: false
test_ping: true
enable_pacemaker: true
The VIRTHOST was under pressure during the business day, running one VM per
Compute Node (the VMs, F24 and U1604, were downloading 5 GB ISOs from the net).
Top and `nova list` snapshots are attached. The pressure might not have been high enough.
Thanks.
Boris
Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment
by Gunjan, Milind [CTO]
Hi Dan,
Thanks a lot for your response.
Even after properly updating the undercloud.conf file and checking the network configuration, the undercloud deployment still fails.
To recreate the issue, here are all the configuration steps:
1. Installed CentOS Linux release 7.2.1511 (Core) image on baremetal.
2. created the stack user and granted it the required permissions.
3. set the hostname
sudo hostnamectl set-hostname rdo-undercloud.mydomain
sudo hostnamectl set-hostname --transient rdo-undercloud.mydomain
[stack@rdo-undercloud etc]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.0.2.1 rdo-undercloud undercloud-rdo.mydomain
4. enabled the required repositories
sudo yum -y install epel-release
sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
5. installed the required packages
sudo yum -y install yum-plugin-priorities
sudo yum install -y python-tripleoclient
6. updated undercloud.conf
[stack@rdo-undercloud ~]$ cat undercloud.conf
[DEFAULT]
local_ip = 192.0.2.1/24
undercloud_public_vip = 192.0.2.2
undercloud_admin_vip = 192.0.2.3
local_interface = enp6s0
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.150
dhcp_end = 192.0.2.199
network_cidr = 192.0.2.0/24
network_gateway = 192.0.2.1
discovery_iprange = 192.0.2.200,192.0.2.230
discovery_runbench = false
[auth]
7. ran the undercloud install
openstack undercloud install
The install ends with errors:
Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504)
Error: Not managing Keystone_service[glance] due to earlier Keystone API failures.
Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: change from absent to present failed: Not managing Keystone_service[glance] due to earlier Keystone API failures.
Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504)
Error: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures.
Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: change from absent to present failed: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures.
Error: Not managing Keystone_service[ironic] due to earlier Keystone API failures.
Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: change from absent to present failed: Not managing Keystone_service[ironic] due to earlier Keystone API failures.
Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
Error: Not managing Keystone_service[novav3] due to earlier Keystone API failures.
Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change from absent to present failed: Not managing Keystone_service[novav3] due to earlier Keystone API failures.
Error: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures.
Error: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: change from absent to present failed: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures.
Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
Error: Not managing Keystone_service[nova] due to earlier Keystone API failures.
Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_service[nova::compute]/ensure: change from absent to present failed: Not managing Keystone_service[nova] due to earlier Keystone API failures.
Error: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures.
Error: /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: change from absent to present failed: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures.
Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user[ceilometer]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
Error: Not managing Keystone_service[neutron] due to earlier Keystone API failures.
Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: change from absent to present failed: Not managing Keystone_service[neutron] due to earlier Keystone API failures.
Error: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures.
Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_service[ceilometer::metering]/ensure: change from absent to present failed: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures.
Error: Not managing Keystone_service[swift] due to earlier Keystone API failures.
Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: change from absent to present failed: Not managing Keystone_service[swift] due to earlier Keystone API failures.
Error: Not managing Keystone_service[keystone] due to earlier Keystone API failures.
Error: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: change from absent to present failed: Not managing Keystone_service[keystone] due to earlier Keystone API failures.
Error: Not managing Keystone_service[heat] due to earlier Keystone API failures.
Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: change from absent to present failed: Not managing Keystone_service[heat] due to earlier Keystone API failures.
Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504)
Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
Error: Could not prefetch keystone_tenant provider 'openstack': Execution of '/bin/openstack project list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504)
Error: Not managing Keystone_tenant[service] due to earlier Keystone API failures.
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: change from absent to present failed: Not managing Keystone_tenant[service] due to earlier Keystone API failures.
Error: Not managing Keystone_tenant[admin] due to earlier Keystone API failures.
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/ensure: change from absent to present failed: Not managing Keystone_tenant[admin] due to earlier Keystone API failures.
Error: Not managing Keystone_role[admin] due to earlier Keystone API failures.
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]/ensure: change from absent to present failed: Not managing Keystone_role[admin] due to earlier Keystone API failures.
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
Error: Could not prefetch keystone_domain provider 'openstack': Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504)
Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)")
Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Failed to call refresh: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0]
Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0]
[2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]
Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 4 events
Notice: Finished catalog run in 5259.44 seconds
+ rc=6
+ set -e
+ echo 'puppet apply exited with exit code 6'
puppet apply exited with exit code 6
+ '[' 6 '!=' 2 -a 6 '!=' 0 ']'
+ exit 6
[2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]
[2016-06-27 18:54:04,093] (os-refresh-config) [ERROR] Aborting...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 815, in install
_run_orc(instack_env)
File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 699, in _run_orc
_run_live_command(args, instack_env, 'os-refresh-config')
File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 370, in _run_live_command
raise RuntimeError('%s failed. See log for details.' % name)
RuntimeError: os-refresh-config failed. See log for details.
Command 'instack-install-undercloud' returned non-zero exit status 1
I am not able to understand the exact cause of the undercloud install failure. It would be really helpful if you could point me in the right direction to understand the exact cause of the issue and any possible resolution.
Thanks a lot.
Best Regards,
Milind
-----Original Message-----
From: Dan Sneddon [mailto:dsneddon@redhat.com]
Sent: Monday, June 27, 2016 12:40 PM
To: Gunjan, Milind [CTO] <Milind.Gunjan(a)sprint.com>; rdo-list(a)redhat.com
Subject: Re: [rdo-list] Redeploying UnderCloud
On 06/27/2016 06:41 AM, Gunjan, Milind [CTO] wrote:
> Hi All,
>
> Greeting.
>
>
>
> This is my first post and I am fairly new to RDO OpenStack. I am
> working on an RDO TripleO deployment on baremetal. Due to incorrect
> values in the undercloud.conf file, my undercloud deployment failed. I
> would like to redeploy the undercloud and I am trying to understand what
> steps have to be taken before redeploying it. All the
> deployment was done under the stack user, so the first step would be to
> delete the stack user. I am not sure what has to be done regarding the
> networking configuration autogenerated by os-net-config during the older install.
>
> Please advise.
>
>
>
> Best Regards,
>
> Milind
No, definitely you don't want to delete the stack user, especially not as your first step! That would get rid of the configuration files, etc.
that are in ~stack, and generally make your life harder than it needs to be.
Anyway, it isn't necessary. You can do a procedure very much like what you do when upgrading the undercloud, with a couple of extra steps.
As the stack user, edit your undercloud.conf, and make sure there are no more typos.
If the typos were in the network configuration, you should delete the bridge and remove the ifcfg files:
$ sudo ifdown br-ctlplane
$ sudo ovs-vsctl del-br br-ctlplane
$ sudo rm /etc/sysconfig/network-scripts/*br-ctlplane
Next, run the undercloud installation again:
$ sudo yum update -y  # Reboot afterwards if kernel or core packages were updated
$ openstack undercloud install
Then proceed with the rest of the instructions. You may find that if you already uploaded disk images or imported nodes that they will still be in the database. That's OK, or you can delete and reimport.
--
Dan Sneddon | Principal OpenStack Engineer
dsneddon(a)redhat.com | redhat.com/openstack
650.254.4025 | dsneddon:irc @dxs:twitter
[rdo-list] Replacing the tripleo-quickstart HA job with a single controller pacemaker job
by John Trowbridge
Howdy folks,
Just wanted to give a heads-up that I plan to replace the
"high-availability" tripleo-quickstart job in the CI promotion
pipeline[1] with a lower-footprint job. In CI, we get a virthost
with 32G of RAM and a mediocre CPU. It is really hard to fit 5 really
active VMs on that, and we have never had the HA job stable enough to
use as a gate for that reason.
Instead, we will test the pacemaker code path in tripleo by using a
single controller setup with pacemaker enabled. We were never actually
testing HA (ie failover scenarios) in the current job, so this should be
a pretty minimal loss in coverage.
Since this allows us to drop two CPU-intensive nodes from the deploy, we
can add a ceph node to that job. This will end up with more code
coverage than the current HA job, and will hopefully end up being
stable enough to use as a gate as well.
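Roughly, the quickstart config for the new job would look something like this
(only a sketch; it reuses the usual tripleo-quickstart variables, and the
exact flavors and deploy flags are not final):
overcloud_nodes:
  - name: control_0
    flavor: control
  - name: compute_0
    flavor: compute
  - name: ceph_0
    flavor: ceph
enable_pacemaker: true
extra_args: >-
  --control-scale 1 --compute-scale 1 --ceph-storage-scale 1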
Longer term, it would be good to restore an actual HA job, maybe even
adding some failure scenario tests to the job. I have a couple of ideas
about how we could do this, but none are feasible in the short term.
1. Use pre-existing servers for deploying[2]
This would allow running the HA job against any cloud, where we could
size the nodes appropriately to make the job stable.
2. Use an OVB cloud for the HA job.
Soon we should have an OVB (openstack virtual baremetal) cloud to run
tests in. OVB would have all of the benefits of the solution above
(unrestricted VM size), and would also provide us a way to test Ironic
in a more realistic way since it mocks IPMI rather than our current
method of using a fake ironic driver (which just does virsh commands
over SSH).
3. Add a feature to tripleo-quickstart to bridge multiple virthosts
If we could deploy our virtual machines across 2 different hosts, we
would then have much more room to deploy the HA job.
If anyone has some better ideas, they are very welcome!
-- trown
[1] https://ci.centos.org/view/rdo/view/promotion-pipeline/
[2] https://review.openstack.org/#/c/324777/
[rdo-list] [tripleo] Baremetal introspection failing; missing required 'local_gb'
by Gerard Braad
Hi All,
Downloaded a new undercloud image for Mitaka, and performed a
quickstart install targeting two baremetal nodes. When performing
introspection, the step fails with:
+ openstack baremetal configure boot
+ openstack baremetal introspection bulk start
Setting nodes for introspection to manageable...
Starting introspection of node: 7499cdf4-0f89-4250-a930-d7f927683ea6
Starting introspection of node: 07b94ab4-e4eb-4ea9-8b7a-e11983063135
Waiting for introspection to finish...
Introspection for UUID 7499cdf4-0f89-4250-a930-d7f927683ea6 finished
with error: The following required parameters are missing:
['local_gb']
Introspection for UUID 07b94ab4-e4eb-4ea9-8b7a-e11983063135 finished
with error: The following required parameters are missing:
['local_gb']
Setting manageable nodes to available...
Introspection completed with errors:
7499cdf4-0f89-4250-a930-d7f927683ea6: The following required
parameters are missing: ['local_gb']
07b94ab4-e4eb-4ea9-8b7a-e11983063135: The following required
parameters are missing: ['local_gb']
This used to work previously...
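In the meantime, the only workaround I can think of (not verified, just a
sketch) would be to set local_gb on the nodes by hand from the undercloud,
with the disk size in GB adjusted to the actual hardware:
$ source stackrc
$ ironic node-update 7499cdf4-0f89-4250-a930-d7f927683ea6 add properties/local_gb=100
$ ironic node-update 07b94ab4-e4eb-4ea9-8b7a-e11983063135 add properties/local_gb=100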
I created a bug for this at launchpad [1].
regards,
Gerard
[1] https://bugs.launchpad.net/tripleo-quickstart/+bug/1597982
--
Gerard Braad | http://gbraad.nl
[ Doing Open Source Matters ]