Re: [rdo-list] Packstack refactor and future ideas
by Javier Pena
----- Original Message -----
> On Wed, Jun 8, 2016 at 6:33 PM, Ivan Chavero <ichavero(a)redhat.com> wrote:
> > I can be wrong but right now Packstack can already do this stuff,
> > more command line options are needed or it might need little tweaks to the
> > code but this is not far from current Packstack options.
>
> Right now Packstack has a lot of code and logic to connect to
> additional nodes and do things.
To be honest, the amount of code is not that big (at least to me).
On a quick check of the refactored version, I see https://github.com/javierpena/packstack/blob/feature/manifest_refactor/pa... could be simplified (maybe removed), and https://github.com/javierpena/packstack/blob/feature/manifest_refactor/pa... would need to be rewritten to support a single node. Everything else consists of small simplifications in the plugins to assume all hosts are the same.
> Packstack, itself, connects to compute hosts to install nova, same
> with the other kind of hosts.
>
> What I am saying is that Packstack should only ever be able to install
> (efficiently) services on "localhost".
>
> Hence, me, as a user (with Ansible or manually), could do something
> like I mentioned before:
> - Login to Server 1 and run "packstack --install-rabbitmq=y
> --install-mariadb=y"
> - Login to Server 2 and run "packstack --install-keystone=y
> --rabbitmq-server=server1 --database-server=server1"
> - Login to Server 3 and run "packstack --install-glance=y
> --keystone-server=server2 --database-server=server1
> --rabbitmq-server=server1"
> - Login to Server 4 and run "packstack --install-nova=y
> --keystone-server=server2 --database-server=server1
> --rabbitmq-server=server1"
> (etc)
>
> This would work, allow multi node without having all the multi node
> logic embedded and handled by Packstack itself.
Doing this would require adding a similar layer of complexity, but in the Puppet code instead of the Python. Right now we assume that every API service runs on config['CONTROLLER_HOST']; with your proposal we would need the current host, plus separate variables (and Hiera processing in Python) so that each service gets its own variable. I think it's a good idea anyway, but it wouldn't reduce complexity or address the associated CI coverage concerns.
We could take an easier way and assume we only have 3 roles, as in the current refactored code: controller, network, compute. The logic would then be:
- By default we install everything, so all in one
- If our host is not CONFIG_CONTROLLER_HOST but is part of CONFIG_NETWORK_HOSTS, we apply the network manifest
- Same as above if our host is part of CONFIG_COMPUTE_HOSTS
Of course, the last two options would assume a first server is installed as controller.
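The dispatch logic above can be sketched in a few lines of shell (a hypothetical helper, not actual Packstack code; the variable roles mirror the answer-file names):

```shell
#!/bin/bash
# Hypothetical sketch of the three-role dispatch described above.

# true if $1 appears in the comma-separated list $2
in_list() {
    case ",$2," in *",$1,"*) return 0 ;; *) return 1 ;; esac
}

# pick_role <this_host> <controller_host> <network_hosts> <compute_hosts>
pick_role() {
    if [ "$1" = "$2" ]; then
        echo controller
    elif in_list "$1" "$3"; then
        echo network
    elif in_list "$1" "$4"; then
        echo compute
    else
        echo none
    fi
}

# All-in-one default: the same host fills every role, so controller wins.
pick_role node1 node1 node1 node1   # -> controller
```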
This would allow us to reuse the same answer file on all runs (one per host as you proposed), eliminate the ssh code as we are always running locally, and make some assumptions in the python code, like expecting OPM to be deployed and such. A contributed ansible wrapper to automate the runs would be straightforward to create.
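For illustration, such a wrapper could be little more than this (the inventory group names and answer-file path are assumptions of the sketch, not existing conventions):

```yaml
# Hypothetical ansible wrapper: same answer file everywhere,
# controller first, then the other roles.
- hosts: controller
  tasks:
    - name: Run packstack on the controller
      command: packstack --answer-file=/root/packstack-answers.txt

- hosts: network:compute
  tasks:
    - name: Run packstack on network/compute nodes
      command: packstack --answer-file=/root/packstack-answers.txt
```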
What do you think? Would it be worth the effort?
Regards,
Javier
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
>
8 years, 5 months
[rdo-list] [Minute] RDO meeting (2016-06-22) Minutes
by Ivan Chavero
==============================
#rdo: RDO meeting (2016-06-22)
==============================
Meeting started by imcsk8 at 15:00:45 UTC. The full logs are available
at
https://meetbot.fedoraproject.org/rdo/2016-06-22/rdo_meeting_(2016-06-22)...
.
Meeting summary
---------------
* LINK: https://etherpad.openstack.org/p/RDO-Meeting (chandankumar,
15:01:09)
* rollcall (apevec, 15:04:01)
* graylist review.rdoproject.org (apevec, 15:04:17)
* ACTION: apevec to followup remaining graylisting of
review.rdoproject.org with fbo jschlueter misc (apevec, 15:10:18)
* DLRN instance migration to ci.centos infra (apevec, 15:10:44)
* ACTION: dmsimard to re-sync the current-passed-ci symlinks (jpena,
15:16:16)
* ACTION: jpena to switch DNS entries for trunk.rdoproject.org on Thu
Jun 23 (jpena, 15:16:29)
* MM3 (mailman3) installation (apevec, 15:17:02)
* ACTION: number80 coordinate requirements for m-l migration in trello
(number80, 15:21:14)
* New release for openstack-utils (apevec, 15:27:01)
* LINK:
https://github.com/redhat-openstack/openstack-utils/commit/11c3e85609f168...
(number80, 15:29:46)
* LINK: https://github.com/redhat-openstack/openstack-utils/issues/13
should be fixed? (apevec, 15:30:01)
* ACTION: apevec and number80 do triage of open issue and release
openstack-utils 2016.1 (apevec, 15:31:09)
* Proposal to manage pinned packages (apevec, 15:31:35)
* ACTION: jpena to start thread on rdo-list about pinned packages
(jpena, 15:43:49)
* Add openstack-macros in CBS cloud SIG buildroot (apevec, 15:44:21)
* ACTION: number80 migrate rdo-rpm-macros to openstack-macros
(number80, 15:44:57)
* ACTION: apevec to review dlrn rpm-packaging support
https://review.rdoproject.org/r/1346 (apevec, 15:47:50)
* How to raise an alert when RDO Trunk repos are broken (apevec,
15:48:03)
* ACTION: dmsimard to suggest lightweight sensu probe for basic rdo
repo consistency check (apevec, 15:51:31)
* Test Day (apevec, 15:52:12)
* ACTION: rbowen to promote Newton Milestone 2 test day, July 21/22
(rbowen, 15:54:53)
* Chair for next meeting (apevec, 15:55:04)
* imcsk8 is chair June 29 (apevec, 15:56:17)
* chandankumar is chair July 6 (apevec, 15:56:34)
* Open Floor (apevec, 15:56:37)
Meeting ended at 16:00:32 UTC.
Action Items
------------
* apevec to followup remaining graylisting of review.rdoproject.org with
fbo jschlueter misc
* dmsimard to re-sync the current-passed-ci symlinks
* jpena to switch DNS entries for trunk.rdoproject.org on Thu Jun 23
* number80 coordinate requirements for m-l migration in trello
* apevec and number80 do triage of open issue and release
openstack-utils 2016.1
* jpena to start thread on rdo-list about pinned packages
* number80 migrate rdo-rpm-macros to openstack-macros
* apevec to review dlrn rpm-packaging support
https://review.rdoproject.org/r/1346
* dmsimard to suggest lightweight sensu probe for basic rdo repo
consistency check
* rbowen to promote Newton Milestone 2 test day, July 21/22
Action Items, by person
-----------------------
* apevec
* apevec to followup remaining graylisting of review.rdoproject.org
with fbo jschlueter misc
* apevec and number80 do triage of open issue and release
openstack-utils 2016.1
* apevec to review dlrn rpm-packaging support
https://review.rdoproject.org/r/1346
* dmsimard
* dmsimard to re-sync the current-passed-ci symlinks
* dmsimard to suggest lightweight sensu probe for basic rdo repo
consistency check
* jpena
* jpena to switch DNS entries for trunk.rdoproject.org on Thu Jun 23
* jpena to start thread on rdo-list about pinned packages
* misc
* apevec to followup remaining graylisting of review.rdoproject.org
with fbo jschlueter misc
* number80
* number80 coordinate requirements for m-l migration in trello
* apevec and number80 do triage of open issue and release
openstack-utils 2016.1
* number80 migrate rdo-rpm-macros to openstack-macros
* rbowen
* rbowen to promote Newton Milestone 2 test day, July 21/22
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* apevec (137)
* number80 (31)
* dmsimard (28)
* rbowen (24)
* jpena (24)
* trown (21)
* imcsk8 (20)
* Duck (20)
* misc (17)
* zodbot (9)
* chandankumar (4)
* EmilienM (3)
* amoralej (3)
* mburned (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
[rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found"
by Gerard Braad
Hi,
as mentioned in a previous email, I am deploying baremetal nodes using
the quickstart. At the moment I can introspect nodes correctly, but am
unable to deploy to them.
I performed the checks as mentioned in
/tripleo-docs/doc/source/troubleshooting/troubleshooting-overcloud.rst:
The flavor list I have is unchanged:
[stack@undercloud ~]$ openstack flavor list
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| ID                                   | Name          | RAM  | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| 2e72ffb5-c6d7-46fd-ad75-448c0ad6855f | baremetal     | 4096 |   40 |         0 |     1 | True      |
| 6b8b37e4-618d-4841-b5e3-f556ef27fd4d | oooq_compute  | 8192 |   49 |         0 |     1 | True      |
| 973b58c3-8730-4b1f-96b2-fda253c15dbc | oooq_control  | 8192 |   49 |         0 |     1 | True      |
| e22dc516-f53f-4a71-9793-29c614999801 | oooq_ceph     | 8192 |   49 |         0 |     1 | True      |
| e3dce62a-ac8d-41ba-9f97-84554b247faa | block-storage | 4096 |   40 |         0 |     1 | True      |
| f5fe9ba6-cf5c-4ef3-adc2-34f3b4381915 | control       | 4096 |   40 |         0 |     1 | True      |
| fabf81d8-44cb-4c25-8ed0-2afd124425db | compute       | 4096 |   40 |         0 |     1 | True      |
| fe512696-2294-40cb-9d20-12415f45c1a9 | ceph-storage  | 4096 |   40 |         0 |     1 | True      |
| ffc859af-dbfd-4e27-99fb-9ab02f4afa79 | swift-storage | 4096 |   40 |         0 |     1 | True      |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
In instackenv.json the nodes have been assigned as:
[stack@undercloud ~]$ cat instackenv.json
{
"nodes":[
{
"_comment": "ooo1",
"pm_type":"pxe_ipmitool",
"mac": [
"00:26:9e:9b:c3:36"
],
"cpu": "16",
"memory": "65536",
"disk": "370",
"arch": "x86_64",
"pm_user":"root",
"pm_password":"admin",
"pm_addr":"10.0.108.126",
"capabilities": "profile:control,boot_option:local"
},
{
"_comment": "ooo2",
"pm_type":"pxe_ipmitool",
"mac": [
"00:26:9e:9c:38:a6"
],
"cpu": "16",
"memory": "65536",
"disk": "370",
"arch": "x86_64",
"pm_user":"root",
"pm_password":"admin",
"pm_addr":"10.0.108.127",
"capabilities": "profile:compute,boot_option:local"
}
]
}
[stack@undercloud ~]$ ironic node-list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| 0956df36-b642-44b8-a67f-0df88270372b | None | None          | power off   | manageable         | False       |
| cc311355-f373-4e5c-99be-31ba3185639d | None | None          | power off   | manageable         | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
Then I perform the introspection manually:
[stack@undercloud ~]$ openstack baremetal introspection bulk start
Setting nodes for introspection to manageable...
Starting introspection of node: 0956df36-b642-44b8-a67f-0df88270372b
Starting introspection of node: cc311355-f373-4e5c-99be-31ba3185639d
Waiting for introspection to finish...
Introspection for UUID 0956df36-b642-44b8-a67f-0df88270372b finished
successfully.
Introspection for UUID cc311355-f373-4e5c-99be-31ba3185639d finished
successfully.
Setting manageable nodes to available...
Node 0956df36-b642-44b8-a67f-0df88270372b has been set to available.
Node cc311355-f373-4e5c-99be-31ba3185639d has been set to available.
Introspection completed.
[stack@undercloud ~]$ ironic node-list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| 0956df36-b642-44b8-a67f-0df88270372b | None | None          | power off   | available          | False       |
| cc311355-f373-4e5c-99be-31ba3185639d | None | None          | power off   | available          | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
After this, I start the deployment. I have defined the compute and
control flavor to be of the respective type.
[stack@undercloud ~]$ ./overcloud-deploy.sh
<snip>
+ openstack overcloud deploy --templates --timeout 60 --control-scale
1 --control-flavor control --compute-scale 1 --compute-flavor compute
--ntp-server pool.ntp.org -e /tmp/deploy_env.yaml
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
2016-06-20 08:18:33 [overcloud]: CREATE_IN_PROGRESS Stack CREATE started
2016-06-20 08:18:33 [HorizonSecret]: CREATE_IN_PROGRESS state changed
2016-06-20 08:18:33 [RabbitCookie]: CREATE_IN_PROGRESS state changed
2016-06-20 08:18:33 [PcsdPassword]: CREATE_IN_PROGRESS state changed
2016-06-20 08:18:33 [MysqlClusterUniquePart]: CREATE_IN_PROGRESS state changed
2016-06-20 08:18:33 [MysqlRootPassword]: CREATE_IN_PROGRESS state changed
2016-06-20 08:18:33 [Networks]: CREATE_IN_PROGRESS state changed
2016-06-20 08:18:34 [VipConfig]: CREATE_IN_PROGRESS state changed
2016-06-20 08:18:34 [HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed
2016-06-20 08:18:34 [overcloud-VipConfig-i4dgmk37z6hg]:
CREATE_IN_PROGRESS Stack CREATE started
2016-06-20 08:18:34 [overcloud-Networks-4pb3htxq7rkd]:
CREATE_IN_PROGRESS Stack CREATE started
<snip>
2016-06-20 08:19:06 [Controller]: CREATE_FAILED ResourceInError:
resources.Controller: Went to status ERROR due to "Message: No valid
host was found. There are not enough hosts available., Code: 500"
2016-06-20 08:19:06 [Controller]: DELETE_IN_PROGRESS state changed
2016-06-20 08:19:06 [NovaCompute]: CREATE_FAILED ResourceInError:
resources.NovaCompute: Went to status ERROR due to "Message: No valid
host was found. There are not enough hosts available., Code: 500"
2016-06-20 08:19:06 [NovaCompute]: DELETE_IN_PROGRESS state changed
2016-06-20 08:19:09 [Controller]: DELETE_COMPLETE state changed
2016-06-20 08:19:09 [NovaCompute]: DELETE_COMPLETE state changed
2016-06-20 08:19:12 [Controller]: CREATE_IN_PROGRESS state changed
2016-06-20 08:19:12 [NovaCompute]: CREATE_IN_PROGRESS state changed
2016-06-20 08:19:14 [Controller]: CREATE_FAILED ResourceInError:
resources.Controller: Went to status ERROR due to "Message: No valid
host was found. There are not enough hosts available., Code: 500"
2016-06-20 08:19:14 [Controller]: DELETE_IN_PROGRESS state changed
2016-06-20 08:19:14 [NovaCompute]: CREATE_FAILED ResourceInError:
resources.NovaCompute: Went to status ERROR due to "Message: No valid
host was found. There are not enough hosts available., Code: 500"
2016-06-20 08:19:14 [NovaCompute]: DELETE_IN_PROGRESS state changed
But as you can see, the deployment fails.
I checked the introspection information and verified that the disk, local
memory and CPUs match or exceed the flavor:
[stack@undercloud ~]$ ironic node-show 0956df36-b642-44b8-a67f-0df88270372b
+------------------------+-------------------------------------------------------------------------+
| Property               | Value                                                                   |
+------------------------+-------------------------------------------------------------------------+
| chassis_uuid           |                                                                         |
| clean_step             | {}                                                                      |
| console_enabled        | False                                                                   |
| created_at             | 2016-06-20T05:51:17+00:00                                               |
| driver                 | pxe_ipmitool                                                            |
| driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'10.0.108.126',         |
|                        |  u'ipmi_username': u'root',                                             |
|                        |  u'deploy_kernel': u'07c794a6-b427-4e75-ba58-7c555abbf2f8',             |
|                        |  u'deploy_ramdisk': u'67a66b7b-637f-4b25-bcef-ed39ae32a1f4'}            |
| driver_internal_info   | {}                                                                      |
| extra                  | {u'hardware_swift_object':                                              |
|                        |  u'extra_hardware-0956df36-b642-44b8-a67f-0df88270372b'}                |
| inspection_finished_at | None                                                                    |
| inspection_started_at  | None                                                                    |
| instance_info          | {}                                                                      |
| instance_uuid          | None                                                                    |
| last_error             | None                                                                    |
| maintenance            | False                                                                   |
| maintenance_reason     | None                                                                    |
| name                   | None                                                                    |
| power_state            | power off                                                               |
| properties             | {u'memory_mb': u'65536', u'cpu_arch': u'x86_64', u'local_gb': u'371',   |
|                        |  u'cpus': u'16', u'capabilities': u'profile:control,boot_option:local'} |
| provision_state        | available                                                               |
| provision_updated_at   | 2016-06-20T07:32:46+00:00                                               |
| raid_config            |                                                                         |
| reservation            | None                                                                    |
| target_power_state     | None                                                                    |
| target_provision_state | None                                                                    |
| target_raid_config     |                                                                         |
| updated_at             | 2016-06-20T07:32:46+00:00                                               |
| uuid                   | 0956df36-b642-44b8-a67f-0df88270372b                                    |
+------------------------+-------------------------------------------------------------------------+
The hypervisor stats are also populated, but only the node count is non-zero.
[stack@undercloud ~]$ nova hypervisor-stats
+----------------------+-------+
| Property | Value |
+----------------------+-------+
| count | 2 |
| current_workload | 0 |
| disk_available_least | 0 |
| free_disk_gb | 0 |
| free_ram_mb | 0 |
| local_gb | 0 |
| local_gb_used | 0 |
| memory_mb | 0 |
| memory_mb_used | 0 |
| running_vms | 0 |
| vcpus | 0 |
| vcpus_used | 0 |
+----------------------+-------+
Registering the nodes as profile:baremetal has the same effect.
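Resource-wise the node clearly covers the flavor, so another thing worth checking is the capability matching between node and flavor. A simplified sketch of the idea (not the actual scheduler filter code; the flavor side shown here is hypothetical, so it is worth verifying what the control/compute flavors actually carry as extra specs):

```shell
# caps_covered <node_caps> <flavor_caps>: succeed if every flavor requirement
# appears in the node's comma-separated capabilities string.
caps_covered() {
    old_ifs=$IFS; IFS=','
    for kv in $2; do
        case ",$1," in
            *",$kv,"*) ;;                      # requirement present on the node
            *) IFS=$old_ifs; return 1 ;;       # requirement missing
        esac
    done
    IFS=$old_ifs
    return 0
}

# Node capabilities copied from the ironic node-show output above; the flavor
# requirements are hypothetical values this sketch assumes:
if caps_covered "profile:control,boot_option:local" "profile:control,boot_option:local"; then
    echo "flavor requirements covered"
fi
```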
What other parameters are used in deciding whether a node can be
deployed to? I'm probably missing a small detail... what can I check to
make sure the deployment starts?
regards,
Gerard
--
Gerard Braad | http://gbraad.nl
[ Doing Open Source Matters ]
[rdo-list] Unanswered 'RDO' questions on ask.openstack.org
by Rich Bowen
59 unanswered questions:
Unable to start Ceilometer services
https://ask.openstack.org/en/question/93600/unable-to-start-ceilometer-se...
Tags: ceilometer, ceilometer-api
Dashboard console - Keyboard and mouse issue in Linux graphical environment
https://ask.openstack.org/en/question/93583/dashboard-console-keyboard-an...
Tags: nova, nova-console
Adding hard drive space to RDO installation
https://ask.openstack.org/en/question/93412/adding-hard-drive-space-to-rd...
Tags: cinder, openstack, space, add
AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack
https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-i...
Tags: openstack, networking, aws
ceilometer: I've installed openstack mitaka. but swift stops working
when i configured the pipeline and ceilometer filter
https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-open...
Tags: ceilometer, openstack-swift, mitaka
Fail on installing the controller on Cent OS 7
https://ask.openstack.org/en/question/92025/fail-on-installing-the-contro...
Tags: installation, centos7, controller
the error of service entity and API endpoints
https://ask.openstack.org/en/question/91702/the-error-of-service-entity-a...
Tags: service, entity, and, api, endpoints
Running delorean fails: Git won't fetch sources
https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wo...
Tags: delorean, rdo
RDO Manager install issue - can't resolve trunk-mgt.rdoproject.org
https://ask.openstack.org/en/question/91533/rdo-manager-install-issue-can...
Tags: rdo-manager
Keystone authentication: Failed to contact the endpoint.
https://ask.openstack.org/en/question/91517/keystone-authentication-faile...
Tags: keystone, authenticate, endpoint, murano
adding computer node.
https://ask.openstack.org/en/question/91417/adding-computer-node/
Tags: rdo, openstack
Liberty RDO: stack resource topology icons are pink
https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-to...
Tags: stack, resource, topology, dashboard
Build of instance aborted: Block Device Mapping is Invalid.
https://ask.openstack.org/en/question/91205/build-of-instance-aborted-blo...
Tags: cinder, lvm, centos7
No handlers could be found for logger "oslo_config.cfg" while syncing
the glance database
https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-fo...
Tags: liberty, glance, install-openstack
how to use chef auto manage openstack in RDO?
https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-o...
Tags: chef, rdo
Separate Cinder storage traffic from management
https://ask.openstack.org/en/question/90405/separate-cinder-storage-traff...
Tags: cinder, separate, nic, iscsi
Openstack installation fails using packstack, failure is in installation
of openstack-nova-compute. Error: Dependency Package[nova-compute] has
failures
https://ask.openstack.org/en/question/88993/openstack-installation-fails-...
Tags: novacompute, rdo, packstack, dependency, failure
CentOS OpenStack - compute node can't talk
https://ask.openstack.org/en/question/88989/centos-openstack-compute-node...
Tags: rdo
How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on
RDO Liberty ?
https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node...
Tags: rdo, liberty, swift, ha
VM and container can't download anything from internet
https://ask.openstack.org/en/question/88338/vm-and-container-cant-downloa...
Tags: rdo, neutron, network, connectivity
Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/
https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-...
Tags: keyboard, map, keymap, vncproxy, novnc
OpenStack-Docker driver failed
https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/
Tags: docker, openstack, liberty
Can't create volume with cinder
https://ask.openstack.org/en/question/86670/cant-create-volume-with-cinder/
Tags: cinder, glusterfs, nfs
Sahara SSHException: Error reading SSH protocol banner
https://ask.openstack.org/en/question/84710/sahara-sshexception-error-rea...
Tags: sahara, icehouse, ssh, vanila
Error Sahara create cluster: 'Error attach volume to instance
https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-e...
Tags: sahara, attach-volume, vanila, icehouse
Creating Sahara cluster: Error attach volume to instance
https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error...
Tags: sahara, attach-volume, hadoop, icehouse, vanilla
Routing between two tenants
https://ask.openstack.org/en/question/84645/routing-between-two-tenants/
Tags: kilo, fuel, rdo, routing
RDO kilo installation metadata widget doesn't work
https://ask.openstack.org/en/question/83870/rdo-kilo-installation-metadat...
Tags: kilo, flavor, metadata
Not able to ssh into RDO Kilo instance
https://ask.openstack.org/en/question/83707/not-able-to-ssh-into-rdo-kilo...
Tags: rdo, instance-ssh
redhat RDO enable access to swift via S3
https://ask.openstack.org/en/question/83607/redhat-rdo-enable-access-to-s...
Tags: swift, s3
--
Rich Bowen - rbowen(a)redhat.com
RDO Community Liaison
http://rdocommunity.org
@RDOCommunity
[rdo-list] LinuxCon Berlin, RDO ambassadors
by Rich Bowen
We have an opportunity to have an expo hall presence at LinuxCon Berlin
(October 4-6 - http://events.linuxfoundation.org/events/linuxcon-europe )
If you are either in that area, or are likely to attend LinuxCon
anyways, we're looking for volunteers to spend a shift in the RDO booth
to answer questions about RDO and OpenStack. The space is usually also
shared with other projects (CentOS, oVirt, Atomic, Ceph, Gluster, and
possibly others) so you won't be there alone.
If you are interested/willing, please get in touch with me. Thank you.
--Rich
--
Rich Bowen - rbowen(a)redhat.com
RDO Community Liaison
http://rdocommunity.org
@RDOCommunity
[rdo-list] RDO bloggers, Jun 21 2016
by Rich Bowen
Here's what RDO enthusiasts have been blogging about in the last week.
Skydive plugin for devstack by Babu Shanmugam
Devstack is the most commonly used project for OpenStack development.
Wouldn’t it be cool to have a supporting software which analyzes the
network infrastructure and helps us troubleshoot and monitor the SDN
solution that Devstack is deploying?
… read more at http://tm3.org/7a
ANNOUNCE: libvirt switch to time based rules for updating version
numbers by Daniel P. Berrangé
Until today, libvirt has used a 3 digit version number for monthly
releases off the git master branch, and a 4 digit version number for
maintenance releases off stable branches. Henceforth all releases will
use 3 digits, and the next release will be 2.0.0, followed by 2.1.0,
2.2.0, etc, with stable releases incrementing the last digit (2.0.1,
2.0.2, etc) instead of appending yet another digit.
… read more at http://tm3.org/7b
Community Central at Red Hat Summit by Rich Bowen
OpenStack swims in a larger ecosystem of community projects. At the
upcoming Red Hat Summit in San Francisco, RDO will be sharing the
Community Central section of the show floor with various of these projects.
… read more at http://tm3.org/7c
Custom Overcloud Deploys by Adam Young
I’ve been using Tripleo Quickstart. I need custom deploys. Start with
modifying the heat templates. I’m doing a mitaka deploy
… read more at http://tm3.org/7d
Learning about the Overcloud Deploy Process by Adam Young
The process of deploying the overcloud goes through several
technologies. Here’s what I’ve learned about tracing it.
… read more at http://tm3.org/7e
The difference between auth_uri and auth_url in auth_token by Adam Young
Dramatis Personae:
Adam Young, Jamie Lennox: Keystone core.
Scene: #openstack-keystone chat room.
ayoung: I still don’t understand the difference between url and uri
… read more at http://tm3.org/7f
Scaling Magnum and Kubernetes: 2 million requests per second by Ricardo
Rocha
Two months ago, we described in this blog post how we deployed OpenStack
Magnum in the CERN cloud. It is available as a pre-production service
and we're steadily moving towards full production mode.
… read more at http://tm3.org/7g
Keystone Auth Entry Points by Adam Young
OpenStack libraries now use Authentication plugins from the keystoneauth1
library. One of the plugins has disappeared: Kerberos. This used to be
in the python-keystoneclient-kerberos package, but that is not shipped
with Mitaka. What happened?
… read more at http://tm3.org/7h
OpenStack Days Budapest, OpenStack Days Prague by Eliska Malikova
It was the 4th OSD in Budapest, but at a brand new place, which was
absolutely brilliant. And by brilliant I mean - nice place to stay,
great location, enough options around, very good sound, well working AC
in rooms for talks and professional catering. I am not sure about number
of attendees, but it was pretty big and crowded - so awesome!
… read more at http://tm3.org/7i
--
Rich Bowen - rbowen(a)redhat.com
RDO Community Liaison
http://rdocommunity.org
@RDOCommunity
[rdo-list] Issue with assigning multiple VFs to VM instance
by Chinmaya Dwibedy
Hi All,
I have installed the OpenStack Mitaka release on a CentOS 7 system. It
has two Intel QAT devices, with 32 VFs available per QAT (DH895xCC)
device.
[root@localhost nova(keystone_admin)]# lspci -nn | grep 0435
83:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT
[8086:0435]
88:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT
[8086:0435]
[root@localhost nova(keystone_admin)]# cat
/sys/bus/pci/devices/0000\:88\:00.0/sriov_numvfs
32
[root@localhost nova(keystone_admin)]# cat
/sys/bus/pci/devices/0000\:83\:00.0/sriov_numvfs
32
[root@localhost nova(keystone_admin)]#
I changed the nova configuration (as shown below) to expose VFs to the
instances via PCI passthrough.
pci_alias = {"name": "QuickAssist", "product_id": "0443", "vendor_id":
"8086", "device_type": "type-VF"}
pci_passthrough_whitelist = [{"vendor_id":"8086","product_id":"0443"}]
I restarted the nova compute, nova API and nova scheduler services:
service openstack-nova-compute restart;service openstack-nova-api
restart;systemctl restart openstack-nova-scheduler;
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter
Thereafter, all 64 available VFs show up in the nova database (select *
from pci_devices). I set flavor 4 to pass two VFs to instances.
[root@localhost nova(keystone_admin)]# nova flavor-show 4
+----------------------------+--------------------------------------------+
| Property                   | Value                                      |
+----------------------------+--------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                      |
| OS-FLV-EXT-DATA:ephemeral  | 0                                          |
| disk                       | 80                                         |
| extra_specs                | {"pci_passthrough:alias": "QuickAssist:2"} |
| id                         | 4                                          |
| name                       | m1.large                                   |
| os-flavor-access:is_public | True                                       |
| ram                        | 8192                                       |
| rxtx_factor                | 1.0                                        |
| swap                       |                                            |
| vcpus                      | 4                                          |
+----------------------------+--------------------------------------------+
[root@localhost nova(keystone_admin)]#
Also when I launch an instance using this new flavor, it goes into an
error state
nova boot --flavor 4 --key_name oskey1 --image
bc859dc5-103b-428b-814f-d36e59009454 --nic
net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be TEST
Here goes the output of nova-conductor.log
2016-06-16 07:55:34.640 5094 WARNING nova.scheduler.utils
[req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2
4bc608763cee41d9a8df26d3ef919825 - - -] Failed to
compute_task_build_instances: No valid host was found. There are not enough
hosts available.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
line 150, in inner
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line
104, in select_destinations
dests = self.driver.select_destinations(ctxt, spec_obj)
File
"/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line
74, in select_destinations
raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.
Here goes the output of nova-compute.log
2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker
[req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Total usable vcpus:
36, total allocated vcpus: 16
2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker
[req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Final resource view:
name=localhost phys_ram=128721MB used_ram=33280MB phys_disk=49GB
used_disk=320GB total_vcpus=36 used_vcpus=16
pci_stats=[PciDevicePool(count=0,numa_node=0,product_id='10fb',tags={dev_type='type-PF'},vendor_id='8086'),
PciDevicePool(count=63,numa_node=1,product_id='0443',tags={dev_type='type-VF'},vendor_id='8086')]
2016-06-16 07:57:33.803 170789 INFO nova.compute.resource_tracker
[req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Compute_service record
updated for localhost:localhost
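Going by the pci_stats line in the compute log above, the VF pool itself looks able to satisfy a two-VF request. A rough sketch (not nova's actual code) of the pool check the PciPassthroughFilter performs:

```shell
# satisfies <pool> <request>, each encoded as vendor:product:dev_type:count.
# Succeed if the pool matches the request and has enough free devices.
satisfies() {
    IFS=: read -r pv pp pt pc <<EOF
$1
EOF
    IFS=: read -r rv rp rt rc <<EOF
$2
EOF
    [ "$pv" = "$rv" ] && [ "$pp" = "$rp" ] && [ "$pt" = "$rt" ] && [ "$pc" -ge "$rc" ]
}

# Pool from the log: 63 free 8086:0443 VFs; "QuickAssist:2" requests two.
if satisfies "8086:0443:type-VF:63" "8086:0443:type-VF:2"; then
    echo "pool can satisfy the request"
fi
```

Since this simple count check passes, the mismatch rejected by the filter is presumably elsewhere (e.g. in how the alias or whitelist is interpreted).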
Here goes the output of nova-scheduler.log
2016-06-16 07:55:34.636 171018 WARNING nova.scheduler.host_manager
[req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2
4bc608763cee41d9a8df26d3ef919825 - - -] Host localhost has more disk space
than database expected (-141 GB > -271 GB)
2016-06-16 07:55:34.637 171018 INFO nova.filters
[req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2
4bc608763cee41d9a8df26d3ef919825 - - -] Filter PciPassthroughFilter
returned 0 hosts
2016-06-16 07:55:34.638 171018 INFO nova.filters
[req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2
4bc608763cee41d9a8df26d3ef919825 - - -] Filtering removed all hosts for the
request with instance ID '4f68c680-5a17-4a38-a6df-5cdb6d76d75b'. Filter
results: ['RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end:
1)', 'AvailabilityZoneFilter: (start: 1, end: 1)',
'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter:
(start: 1, end: 1)', 'PciPassthroughFilter: (start: 1, end: 0)']
2016-06-16 07:56:14.743 171018 INFO nova.scheduler.host_manager
[req-64a8dc31-f2ab-4d93-8579-6b9f8210ece7 - - - - -] Successfully synced
instances from host 'localhost'.
2016-06-16 07:58:17.748 171018 INFO nova.scheduler.host_manager
[req-152ac777-1f77-433d-8493-6cd86ab3e0fc - - - - -] Successfully synced
instances from host 'localhost'.
Note that if I set the flavor with (#nova flavor-key 4 set
"pci_passthrough:alias"="QuickAssist:1"), a single VF is assigned to the VM
instance. I believe multiple VFs can be assigned per VM. Can anyone
suggest where I am going wrong and how to solve this? Thank you in advance
for your support and help.
Regards,
Chinmaya
Re: [rdo-list] mitaka installation
by Boris Derzhavets
Option 1
File a bug against Packstack (stable release Mitaka) at https://bugzilla.redhat.com
and wait for better times to come.
Option 2
1. packstack --gen-answer-file answer1.txt
2. Edit answer1.txt and set
CONFIG_KEYSTONE_API_VERSION=v3
3. packstack --answer-file=./answer1.txt
It will crash while running Cinder's puppet; however,
# systemctl | grep cinder
will look just fine right after the crash.
4. Update answer1.txt and set
CONFIG_CINDER_INSTALL=n
5. packstack --answer-file=./answer1.txt
Upon completion, Cinder should work, as far as I remember.
This is a hack.
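The two answer-file edits in steps 2 and 4 can also be done non-interactively with sed; a minimal sketch (the printf line just fabricates a stand-in answer file for illustration — a real one comes from `packstack --gen-answer-file answer1.txt`):

```shell
# Stand-in answer file for the demo; packstack --gen-answer-file produces the real one
printf 'CONFIG_KEYSTONE_API_VERSION=v2.0\nCONFIG_CINDER_INSTALL=y\n' > answer1.txt

# Step 2: switch Keystone to the v3 API
sed -i 's/^CONFIG_KEYSTONE_API_VERSION=.*/CONFIG_KEYSTONE_API_VERSION=v3/' answer1.txt

# Step 4: disable the Cinder install for the second packstack run
sed -i 's/^CONFIG_CINDER_INSTALL=.*/CONFIG_CINDER_INSTALL=n/' answer1.txt

cat answer1.txt
```

Each `packstack --answer-file=./answer1.txt` run then follows the corresponding edit, as in the numbered steps above.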
Option 3. Read and follow
https://www.linux.com/blog/backport-upstream-commits-stable-rdo-mitaka-re...
This is the right way to go, and it demonstrates that you understand what you are doing.
Regards.
Boris.
-------------------------------------------------------------------------------------------------------------------------------------------------
From: Andrey Shevel <shevel.andrey(a)gmail.com>
Sent: Monday, June 20, 2016 1:44 PM
To: Boris Derzhavets
Subject: Re: [rdo-list] mitaka installation
Hello colleagues,
I repeated packstack --allinone (mitaka) exactly like described on
https://www.rdoproject.org/install/quickstart/
on newly created VM and newly installed from scratch (as virtual server) with OS
[root@openstack-test ~]# cat /etc/os-release*
NAME="Scientific Linux"
VERSION="7.2 (Nitrogen)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA"
HOME_URL="http://www.scientificlinux.org//"
BUG_REPORT_URL="mailto:scientific-linux-devel@listserv.fnal.gov"
I got exactly same errors as before
===================================
Applying 192.168.122.47_amqp.pp
Applying 192.168.122.47_mariadb.pp
192.168.122.47_amqp.pp: [ DONE ]
192.168.122.47_mariadb.pp: [ DONE ]
Applying 192.168.122.47_apache.pp
192.168.122.47_apache.pp: [ DONE ]
Applying 192.168.122.47_keystone.pp
Applying 192.168.122.47_glance.pp
Applying 192.168.122.47_cinder.pp
192.168.122.47_keystone.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.122.47_keystone.pp
Error: Could not prefetch keystone_role provider 'openstack': Could
not authenticate
You will find full trace in log
/var/tmp/packstack/20160620-191746-ud6qNn/manifests/192.168.122.47_keystone.pp.log
Please check log file
/var/tmp/packstack/20160620-191746-ud6qNn/openstack-setup.log for more
information
Additional information:
* A new answerfile was created in: /root/packstack-answers-20160620-191747.txt
* Time synchronization installation was skipped. Please note that
unsynchronized time on server instances might be problem for some
OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client
host 192.168.122.47. To use the command line tools you need to source
the file.
* To access the OpenStack Dashboard browse to http://192.168.122.47/dashboard .
Please, find your login credentials stored in the keystonerc_admin in
your home directory.
* To use Nagios, browse to http://192.168.122.47/nagios username:
nagiosadmin, password: 6099599f76754f92
Stop Date & Time = Mon Jun 20 19:26:39 MSK 2016
==================================
It seems there is some drawback in the OpenStack installation.
Automatically generated packstack-answers file is in the attachment
I also noticed that http://trystack.org/ shows version 'liberty',
not 'mitaka'.
Any comments ?
On Fri, Jun 17, 2016 at 1:35 PM, Boris Derzhavets
<bderzhavets(a)hotmail.com> wrote:
> I have a well-tested workaround for CONFIG_KEYSTONE_API_VERSION=v3, based on
>
> back-porting 2 recent upstream commits to stable RDO Mitaka.
>
> When you run `packstack --allinone`, the Keystone API is v2.0 by default, not
> v3.
>
> So you might be focused on v2.0; otherwise let me know. I have detailed
> notes
>
> written up for the back port (one more time, thanks to Javier Pena for the
> upstream work).
>
>
> Boris.
> ________________________________
> From: rdo-list-bounces(a)redhat.com <rdo-list-bounces(a)redhat.com> on behalf of
> Andrey Shevel <shevel.andrey(a)gmail.com>
> Sent: Friday, June 17, 2016 4:08 AM
> To: alan.pevec(a)redhat.com
> Cc: rdo-list
> Subject: Re: [rdo-list] mitaka installation
>
> The file REINSTALL.... is a script to reinstall OpenStack Mitaka.
>
> On Thu, Jun 16, 2016 at 9:39 PM, Alan Pevec <apevec(a)redhat.com> wrote:
>>> ERROR : Error appeared during Puppet run: 193.124.84.22_keystone.pp
>>> Error: Could not prefetch keystone_role provider 'openstack': Could
>>> not authenticate
>>> You will find full trace in log
>>>
>>> /var/tmp/packstack/20160616-133447-C9hfh9/manifests/193.124.84.22_keystone.pp.log
>>
>> ^ please paste this file so we can see more details about the error
>
>
>
> --
> Andrey Y Shevel
--
Andrey Y Shevel
[rdo-list] Red Hat Summit: Demo volunteers wanted
by Rich Bowen
At OpenStack Summit, we had a number of people volunteer to present
demos at the RDO booth and/or answer attendee questions. This was a big
success, with almost every time slot being filled by very helpful people.
We'd like to do the same thing at Red Hat Summit, which will be held in
3 weeks in San Francisco. If you plan to attend, and if you have a free
time slot, I would appreciate it if you'd be willing to do a shift in
the booth, and possibly bring a demo along with you.
Demos *can* be live, but typically, unless the demo is completely
self-contained on your laptop, you're better off showing a video, since
network conditions can't be guaranteed. (We usually have a hard-wired
network in the booth, but even that can be flaky at peak times.)
If you're willing to participate, please claim a slot in the schedule
etherpad, HERE: https://etherpad.openstack.org/p/rhsummit-rdo-booth
Time slots are mostly 60 minutes. If some other time slot works better
for you, please do feel free to modify the start/end times. Please
indicate what you'll be demoing.
Thanks!
--Rich