[Rdo-list] What should be RDO Definition of Done?
by Haïkel
Hello,
In an effort to improve the RDO release process, we came across the idea
of establishing a definition of done.
What are the criteria to decide if a release of RDO is DONE?
* RDO installs w/ packstack
* RDO installs w/ RDO Manager
* Documentation is up to date
etc ....
I added the topic to the RDO meeting agenda, but I'd like to open the
discussion beyond the pool of people who attend the meetings, and even
beyond technical contributors.
Regards,
H.
[Rdo-list] Overcloud deploy stuck for a long time
by Tzach Shefi
Hi,
Server running CentOS 7.1; the VM running the undercloud got up to the
overcloud deploy stage.
It looks like it's stuck, with nothing advancing for a while.
Any ideas what to check?
[stack@instack ~]$ openstack overcloud deploy --templates
Deploying templates in the directory
/usr/share/openstack-tripleo-heat-templates
[91665.696658] device vnet2 entered promiscuous mode
[91665.781346] device vnet3 entered promiscuous mode
[91675.260324] kvm [71183]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff
[91675.291232] kvm [71200]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff
[91767.799404] kvm: zapping shadow pages for mmio generation wraparound
[91767.880480] kvm: zapping shadow pages for mmio generation wraparound
[91768.957761] device vnet2 left promiscuous mode
[91769.799446] device vnet3 left promiscuous mode
[91771.223273] device vnet3 entered promiscuous mode
[91771.232996] device vnet2 entered promiscuous mode
[91773.733967] kvm [72245]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff
[91801.270510] device vnet2 left promiscuous mode
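(The "promiscuous mode" and kvm perfctr messages above are normal host
dmesg noise when VMs attach to bridges, so they don't point at the failure
by themselves. A first step for a stuck deploy is usually to watch the Heat
stack and node provisioning from the undercloud; a minimal sketch, run as
the stack user:

source ~/stackrc
# Overall stack state; stays CREATE_IN_PROGRESS while deploying
heat stack-list
# Which nested resources are still in progress
heat resource-list --nested-depth 5 overcloud | grep -v COMPLETE
# Are the baremetal nodes actually provisioning?
ironic node-list
nova list
)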
Thanks
Tzach
[Rdo-list] Slow network performance on Kilo?
by Erich Weiler
Hi Y'all,
I've seen several folks on the net with this problem, but I'm still
flailing a bit as to what is really going on.
We are running RHEL 7 with RDO OpenStack Kilo.
We are still setting this environment up, not quite done yet. But in
our testing we are experiencing very slow network performance when
downloading from or uploading to VMs; we get only about 300 Kb/s.
We are using Neutron, with MTU 9000 everywhere. I've tried disabling GSO,
LRO, TSO, and GRO on the Neutron interfaces, as well as on the VM server
interfaces, with no improvement. I've tried lowering the VM MTU to 1500,
still no improvement. It's really strange. We do get connectivity, and I
can ssh to the instances, but the network performance is just really,
really slow. The instances can talk to each other very quickly, however;
they only get slow networking to the internet (i.e. when packets go
through the network node).
We are using VLAN tenant network isolation.
Can anyone point me in the right direction? I've been beating my head
against a wall and googling to no avail for a week...
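(For anyone debugging something similar: a quick way to confirm whether
this is an MTU/fragmentation problem on the path through the network node
is to ping with large, non-fragmentable packets. A sketch; the target
address and sizes are assumptions for your setup:

# 8972 = 9000 bytes minus 28 bytes of IP+ICMP headers; -M do forbids fragmentation
ping -M do -s 8972 <instance-floating-ip>
# If that fails but a 1472-byte probe (1500 MTU) works, something on the
# path is not jumbo-frame clean
ping -M do -s 1472 <instance-floating-ip>
# Toggling offloads on a suspect interface, as already tried above:
sudo ethtool -K eth0 gso off gro off tso off
)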
Many thanks,
erich
[Rdo-list] Reminder: RDO test day tomorrow
by Rich Bowen
A reminder that we'll be holding the RDO Mitaka 2 test day tomorrow and
Thursday, January 27-28. Details and test instructions may be found
here: https://www.rdoproject.org/testday/mitaka/milestone2/
I will be traveling tomorrow, so I ask that folks be particularly
attentive on #rdo, so that beginners and others new to RDO have the
support they need when things go wrong.
Thank you all, in advance, for the time that you're willing to invest to
make RDO better for everyone.
--
Rich Bowen - rbowen(a)redhat.com
OpenStack Community Liaison
http://rdoproject.org/
[Rdo-list] Tempest Config Problem on Mitaka
by Brandon James
Hello,
I have been able to successfully install my overcloud and undercloud via
the TripleO quickstart method. I am having issues, however, when running
the tempest configuration portion of the overcloud validation. I would
like to complete this and run the required tests afterwards. I have listed
the error I am seeing below. I made sure I ran the command source
~/overcloudrc prior to running this command, so I am unsure what is
causing this issue. I am also using the latest Mitaka version.
tools/config_tempest.py --out etc/tempest.conf --network-id $public_net_id
--deployer-input ~/tempest-deployer-input.conf --debug --create
identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD
network.tenant_network_cidr 192.168.0.0/24 object-storage.operator_role
swiftoperator orchestration.stack_owner_role heat_stack_owner
2016-01-28 20:14:55.286 1479 INFO tempest [-] Using tempest config file
/etc/tempest/tempest.conf
2016-01-28 20:14:55.365 1479 INFO __main__ [-] Reading defaults from file
'/home/stack/tempest/etc/default-overrides.conf'
2016-01-28 20:14:55.367 1479 INFO __main__ [-] Adding options from
deployer-input file '/home/stack/tempest-deployer-input.conf'
2016-01-28 20:14:55.367 1479 DEBUG __main__ [-] Setting
[compute-feature-enabled] console_output = false set
tools/config_tempest.py:403
2016-01-28 20:14:55.367 1479 DEBUG __main__ [-] Setting [object-storage]
operator_role = swiftoperator set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting [orchestration]
stack_owner_role = heat_stack_user set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting [volume]
backend1_name = tripleo_iscsi set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting
[volume-feature-enabled] bootable = true set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting [identity] uri =
http://192.0.2.6:5000/v2.0 set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting [identity]
admin_password = UVbm3YJsqjWRGUsFzhjcrf498 set tools/config_tempest.py:403
2016-01-28 20:14:55.368 1479 DEBUG __main__ [-] Setting [network]
tenant_network_cidr = 192.168.0.0/24 set tools/config_tempest.py:403
2016-01-28 20:14:55.369 1479 DEBUG __main__ [-] Setting [object-storage]
operator_role = swiftoperator set tools/config_tempest.py:403
2016-01-28 20:14:55.369 1479 DEBUG __main__ [-] Setting [orchestration]
stack_owner_role = heat_stack_owner set tools/config_tempest.py:403
2016-01-28 20:14:55.369 1479 DEBUG __main__ [-] Setting [identity] uri_v3 =
http://192.0.2.6:5000/v3 set tools/config_tempest.py:403
2016-01-28 20:14:55.490 1479 INFO tempest_lib.common.rest_client
[req-70031732-c6fa-4968-b163-a154bfee6881 ] Request (main): 200 POST
http://192.0.2.6:5000/v2.0/tokens
2016-01-28 20:14:55.516 1479 INFO tempest_lib.common.rest_client
[req-99efe4e1-e698-469d-9119-8dd25dc2f076 ] Request (main): 200 GET
http://192.0.2.6:35357/v2.0/tenants 0.025s
2016-01-28 20:14:55.516 1479 DEBUG __main__ [-] Setting [identity]
admin_tenant_id = 9eab7137a4cd4857b8419e608cf75639 set
tools/config_tempest.py:403
2016-01-28 20:14:55.524 1479 CRITICAL tempest [-] ServiceError: Request on
service 'compute' with url '
http://192.0.2.6:8774/v2/9eab7137a4cd4857b8419e608cf75639/extensions'
failed with code 503
2016-01-28 20:14:55.524 1479 ERROR tempest Traceback (most recent call
last):
2016-01-28 20:14:55.524 1479 ERROR tempest File
"tools/config_tempest.py", line 772, in <module>
2016-01-28 20:14:55.524 1479 ERROR tempest main()
2016-01-28 20:14:55.524 1479 ERROR tempest File
"tools/config_tempest.py", line 149, in main
2016-01-28 20:14:55.524 1479 ERROR tempest
object_store_discovery=conf.get_bool_value(swift_discover))
2016-01-28 20:14:55.524 1479 ERROR tempest File
"/home/stack/tempest/tempest/common/api_discovery.py", line 157, in discover
2016-01-28 20:14:55.524 1479 ERROR tempest services[name]['extensions']
= service.get_extensions()
2016-01-28 20:14:55.524 1479 ERROR tempest File
"/home/stack/tempest/tempest/common/api_discovery.py", line 75, in
get_extensions
2016-01-28 20:14:55.524 1479 ERROR tempest body =
self.do_get(self.service_url + '/extensions')
2016-01-28 20:14:55.524 1479 ERROR tempest File
"/home/stack/tempest/tempest/common/api_discovery.py", line 53, in do_get
2016-01-28 20:14:55.524 1479 ERROR tempest " with code %d" % (self.name,
url, r.status))
2016-01-28 20:14:55.524 1479 ERROR tempest ServiceError: Request on service
'compute' with url '
http://192.0.2.6:8774/v2/9eab7137a4cd4857b8419e608cf75639/extensions'
failed with code 503
2016-01-28 20:14:55.524 1479 ERROR tempest
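(The 503 on the compute extensions URL usually means nova-api on the
overcloud is not answering, rather than anything wrong with the tempest
configuration itself. A sketch of the first checks; service names assume a
standard TripleO Mitaka overcloud:

source ~/overcloudrc
# Is the compute API registered and are its services up?
nova service-list
# On a controller node (via the heat-admin user), check the API service itself:
sudo systemctl status openstack-nova-api
# On an HA overcloud the services may be pacemaker-managed; check cluster health:
sudo pcs status
)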
--
Thanks,
Brandon J
[Rdo-list] RDO Manager :: Ceph OSDs on the Compute Nodes
by Dan Radez
I was asked to post this to the list when I started this,
Here's the first draft:
https://review.openstack.org/#/c/273754/
It needs a bit of work still, but it's a start. The OSDs will provision
correctly as long as the compute OSD configuration happens after the
controller Ceph configuration, which is rarely the case.
A rerun of puppet on the compute nodes after overcloud deployment will
register the OSDs on the compute nodes into the Ceph cluster; a sketch of
that rerun is below.
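(A minimal sketch of scripting that workaround, assuming the nodes are
reachable as heat-admin and that os-refresh-config, which re-applies the
node's puppet configuration, is present on the image; both are assumptions
about your deployment:

source ~/stackrc
# Field positions assume the default "nova list" table layout; adjust as needed
for ip in $(nova list | awk '/compute/ {print $12}' | cut -d= -f2); do
    ssh heat-admin@$ip sudo os-refresh-config
done
)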
I'm working with OOO folks to sort out the right way to make the compute
Ceph configuration wait for the controller Ceph configuration to complete
before it fires.
Radez
[Rdo-list] Mitaka: Overcloud installation failed code 6
by Ido Ovadia
Hello,
I deployed Mitaka with rdo-manager on a virtual setup (undercloud, ceph, compute, 3*controller) according to the
instructions in https://www.rdoproject.org/rdo-manager/
Overcloud deployment failed with code 6.
I need some guidance to solve this.
openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 --ntp-server clock.redhat.com -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml --libvirt-type qemu
.......
2016-01-28 14:22:37 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: CREATE_IN_PROGRESS Stack CREATE started
2016-01-28 14:22:37 [1]: CREATE_IN_PROGRESS state changed
2016-01-28 14:22:37 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:37 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:38 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:38 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:38 [0]: CREATE_IN_PROGRESS state changed
2016-01-28 14:22:38 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:39 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:39 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:39 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:39 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:39 [2]: CREATE_IN_PROGRESS state changed
2016-01-28 14:22:40 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:22:40 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:45 [2]: SIGNAL_IN_PROGRESS Signal: deployment failed (6)
2016-01-28 14:23:46 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:47 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:48 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:48 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:49 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:49 [2]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:58 [1]: SIGNAL_IN_PROGRESS Signal: deployment failed (6)
2016-01-28 14:23:58 [1]: CREATE_FAILED Error: resources[1]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:23:59 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:00 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:01 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:02 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:02 [1]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:15 [0]: SIGNAL_IN_PROGRESS Signal: deployment failed (6)
2016-01-28 14:24:15 [0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
2016-01-28 14:24:16 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y-ControllerServicesBaseDeployment_Step2-s72xsdjtuh2a]: CREATE_FAILED Resource CREATE failed: Error: resources[2]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
2016-01-28 14:24:16 [0]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:17 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED Error: resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-01-28 14:24:17 [0]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:18 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:18 [overcloud-ControllerNodesPostDeployment-5b55h3l77m7y]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-01-28 14:24:18 [0]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:19 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-01-28 14:24:19 [0]: SIGNAL_COMPLETE Unknown
2016-01-28 14:28:19 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE aborted
2016-01-28 14:28:19 [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
Stack overcloud CREATE_FAILED
Deployment failed: Heat Stack create failed.
------------------------------------------------------------------
more info
=========
heat deployment-show 91352911-6df8-4797-be2b-2789b3b5e066 output http://pastebin.test.redhat.com/344518
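(For reference, the generic way to drill down from a failure like this to
the actual error is to list the failed nested resources and then inspect
the failed deployments' output; a sketch, run from the undercloud as the
stack user, with <deployment-id> taken from the failed resource:

source ~/stackrc
# Find which nested resources failed
heat resource-list --nested-depth 5 overcloud | grep -i failed
# For each failed Deployment resource, look at deploy_stdout and deploy_stderr
heat deployment-show <deployment-id>
)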
-----------------------------------------------------------------
[Rdo-list] Troubleshooting services after reboot of the overcloud
by Udi Kalifon
Hello.
I rebooted all my overcloud nodes. This is a Mitaka installation with
rdo-manager on a virtual environment. The keystone service is no longer
answering, and I have no clue what to do about it now that it runs
under Apache. The httpd service itself is running.
How do I troubleshoot this?
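(Since Keystone on Mitaka runs as a WSGI app inside httpd, the usual first
checks are whether httpd is actually listening on the Keystone ports and
what the Apache vhost logs say; on an HA overcloud, pacemaker may also need
a nudge after a full reboot. A sketch; log paths and ports assume a default
TripleO setup:

# Is anything listening on the Keystone ports?
sudo ss -lntp | grep -e 5000 -e 35357
# Keystone's WSGI errors end up in the Apache logs (path is an assumption)
sudo tail -n 50 /var/log/httpd/keystone_wsgi_main_error.log
sudo tail -n 50 /var/log/keystone/keystone.log
# On an HA overcloud, check and clean up the pacemaker-managed services
sudo pcs status
sudo pcs resource cleanup
)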
Thanks,
Udi.