Re: [Rdo-list] Hi, I need your help - about packstack --allinone
by Yaniv Eylon
adding rdo-list
xiaoguang, it is better to share your findings on the mailing list.
On Sat, Nov 7, 2015 at 7:09 AM, xiaoguang.fan(a)netbric.com
<xiaoguang.fan(a)netbric.com> wrote:
> Hi,
> I want to study RDO, but I have run into this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1254447
>
> httpd cannot start when running packstack --allinone.
>
> How do I fix this bug? Right now I cannot deploy RDO on my CentOS 7 VM.
> Thanks.
>
> ________________________________
> /********************************************
> * Name: fanxiaoguang
> * Add:
> * E-Mail: solar_ambitious(a)126.com;
> * fanxiaoguang008(a)gmail.com
> * Cel: 13716563304
> *
> ********************************************/
>
--
Yaniv.
[Rdo-list] issue with numa and cpu pinning using SRIOV ports
by Pedro Sousa
Hi all,
I have an RDO Kilo deployment, using SR-IOV ports for my instances. I'm
trying to configure NUMA topology and CPU pinning for some telco-style
workloads, based on this doc:
http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topolog...
I have 3 compute nodes, and I'm trying to set up CPU pinning on one of them.
I've configured it like this:
*Compute Node (total 24 cpus)*
*/etc/nova/nova.conf*
vcpu_pin_set=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23
Changed grub to isolate my cpus:
#grubby --update-kernel=ALL
--args="isolcpus=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23"
#grub2-install /dev/sda
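(After rebooting, the isolation can be sanity-checked like this; the isolcpus
list shown in the kernel command line should match the one above:)
#cat /proc/cmdline
#lscpu --extended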
*Controller Nodes:*
*/etc/nova/nova.conf*
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
*Created host aggregate performance*
#nova aggregate-create performance
#nova aggregate-set-metadata 1 pinned=true
#nova aggregate-add-host 1 compute03
*Created host aggregate normal*
#nova aggregate-create normal
#nova aggregate-set-metadata 2 pinned=false
#nova aggregate-add-host 2 compute01
#nova aggregate-add-host 2 compute02
*Created the flavor with CPU pinning*
#nova flavor-create m1.performance 6 2048 20 4
#nova flavor-key 6 set hw:cpu_policy=dedicated
#nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true
*The issue is:* With SR-IOV ports it only lets me create instances with 6
vCPUs in total with the configuration described above. Without SR-IOV, using
OVS, I don't have that limitation. Is this a bug or something? I've seen
this: https://bugs.launchpad.net/nova/+bug/1441169, however I have that
patch, and as I said it works for the first 6 vCPUs with my configuration.
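(For illustration, the instances in question are booted roughly like this;
the network/port IDs and image name below are placeholders, not values from
my environment:)
#neutron port-create <net-id> --binding:vnic_type direct --name sriov-port
#nova boot --flavor m1.performance --image <image> --nic port-id=<port-id> test-vm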
*Some relevant logs:*
*/var/log/nova/nova-scheduler.log*
2015-11-06 11:18:17.955 59494 DEBUG nova.filters
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
-] Starting with 3 host(s) get_filtered_objects
/usr/lib/python2.7/site-packages/nova/filters.py:70
2015-11-06 11:18:17.955 59494 DEBUG nova.filters
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
-] Filter RetryFilter returned 3 host(s) get_filtered_objects
/usr/lib/python2.7/site-packages/nova/filters.py:84
2015-11-06 11:18:17.955 59494 DEBUG nova.filters
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
-] Filter AvailabilityZoneFilter returned 3 host(s)
get_filtered_objects
/usr/lib/python2.7/site-packages/nova/filters.py:84
2015-11-06 11:18:17.955 59494 DEBUG nova.filters
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
-] Filter RamFilter returned 3 host(s) get_filtered_objects
/usr/lib/python2.7/site-packages/nova/filters.py:84
2015-11-06 11:18:17.956 59494 DEBUG nova.filters
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
-] Filter ComputeFilter returned 3 host(s) get_filtered_objects
/usr/lib/python2.7/site-packages/nova/filters.py:84
2015-11-06 11:18:17.956 59494 DEBUG nova.filters
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
-] Filter ComputeCapabilitiesFilter returned 3 host(s)
get_filtered_objects
/usr/lib/python2.7/site-packages/nova/filters.py:84
2015-11-06 11:18:17.956 59494 DEBUG nova.filters
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
-] Filter ImagePropertiesFilter returned 3 host(s)
get_filtered_objects
/usr/lib/python2.7/site-packages/nova/filters.py:84
2015-11-06 11:18:17.956 59494 DEBUG nova.filters
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
-] Filter ServerGroupAntiAffinityFilter returned 3 host(s)
get_filtered_objects
/usr/lib/python2.7/site-packages/nova/filters.py:84
2015-11-06 11:18:17.956 59494 DEBUG nova.filters
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
-] Filter ServerGroupAffinityFilter returned 3 host(s)
get_filtered_objects
/usr/lib/python2.7/site-packages/nova/filters.py:84
2015-11-06 11:18:17.957 59494 DEBUG nova.filters
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
-] Filter PciPassthroughFilter returned 3 host(s) get_filtered_objects
/usr/lib/python2.7/site-packages/nova/filters.py:84
*2015-11-06 11:18:17.959 59494 DEBUG nova.filters
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d
9340dc4e70a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - -
-] Filter NUMATopologyFilter returned 2 host(s) get_filtered_objects
/usr/lib/python2.7/site-packages/nova/filters.py:84*
Any help would be appreciated.
Thanks,
Pedro Sousa
[Rdo-list] [RDO-Manager] Working with the overcloud
by Ignacio Bravo
After jumping through some hoops, and with the help of the usual suspects on IRC, I was able to deploy an overcloud with HA, network isolation and Ceph. Great!
Now I want to focus on what’s next, or how to manage this environment going forward. Let me give you a couple of examples:
After the installation of the overcloud, I was hit by the cinder bug described here: https://bugzilla.redhat.com/show_bug.cgi?id=1272572 The issue is that 'localhost' in the cinder.conf file needs to be replaced with the IP of the public keystone endpoint. What I did, based on the bugzilla, was to log in to each controller node and update the value in the cinder.conf file.
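(Roughly, on each controller I did something like the following; the VIP is a placeholder, and since a blanket sed is blunt I first checked which options actually reference localhost, restarting the cinder services afterwards:)
#grep -n localhost /etc/cinder/cinder.conf
#sed -i 's/localhost/<keystone-public-vip>/g' /etc/cinder/cinder.conf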
Is this kind of one-off fix the proper way to patch and keep the environment updated? I mean, one week from now we will find that a particular RPM needs to be updated; how do you handle this? I thought the proper way was to recreate the TripleO image and redeploy.
Another example is Ceph: currently TripleO installs version 0.8 and I want to install version 9 (Infernalis). What is the correct path to achieve this?
Additionally, let's say that I want to install CloudKitty (substitute your favourite non-mainstream OpenStack project here).
Do we recreate the images and redeploy, or do a puppet run after they have been installed with a tool like Foreman/Katello?
Regards,
IB
__
Ignacio Bravo
LTG Federal, Inc
www.ltgfederal.com
[Rdo-list] How to access the 192.0.2.1:8004 URL to get the deployment failure logs
by Ramkumar GOWRISHANKAR
Hi,
My virtual test bed deployment, with just one controller and no computes, is
failing at ControllerNodesPostDeployment. The debug steps for a failed
deployment say to run the following command: "heat resource-show overcloud
ControllerNodesPostDeployment". When I run the command, I see 3 URLs
starting with http://192.0.2.1:8004.
How do I access these URLs? When I try a wget on them, or when I create an
ssh tunnel from the base machine and try to access them, I get a permission
denied message. When I try to access just the base URL
(http://192.0.2.1:8004, mapped to http://localhost:8005) via a tunnel, I get
the following message:
{"versions": [{"status":"CURRENT", "id": "v1.0", "links": [{"href":"
http://localhost:8005/v1/","rel":"self"}]}]}
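(The tunnel itself is created roughly like this; the user and host are from my setup:)
#ssh -N -L 8005:192.0.2.1:8004 stack@<undercloud-host>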
I have looked through the /var/log/heat/ folder for error messages, but I
cannot find anything more detailed than "deployment failed at step 1
LoadBalancer".
Any pointers on how to debug a deployment?
Thanks,
Ramkumar
[Rdo-list] [delorean] Planned Delorean upgrade on November 5
by Javier Pena
Dear rdo-list,
We are planning to update the current Delorean instance next Thursday, November 5. The upgrade should bring a VM with bigger specs and several improvements to the instance configuration.
During the upgrade, the Delorean repos will still be available through the backup instance, but new packages will not be processed until the upgrade is completed.
If you have any questions or concerns, please let us know.
Regards,
Javier
[Rdo-list] Neutron-openswitch-agent configuration workaround in RDO Juno/Kilo and now in Liberty
by Markelov Andrey
Hi guys,
If we look at the Juno and Kilo installation guides for CentOS at docs.openstack.org, we can see a documented workaround involving /etc/neutron/plugin.ini (a symbolic link to ml2_conf.ini): we need to edit the start script for the openvswitch agent as documented.
With Liberty that workaround no longer works, because /usr/lib/systemd/system/neutron-openvswitch-agent.service was changed: plugin.ini is no longer among the --config-file options passed to the openvswitch agent.
And this is not documented in the Liberty install guide.
As a solution you can rename /etc/neutron/plugins/ml2/ml2_conf.ini to /etc/neutron/plugins/ml2/openvswitch_agent.ini and it will work.
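For example, roughly what I did on a Liberty test node (please check the unit file on your own system first to see which config files it actually passes):
#grep ExecStart /usr/lib/systemd/system/neutron-openvswitch-agent.service
#mv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini
#systemctl restart neutron-openvswitch-agent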
From time to time I lead OpenStack training courses, and I want to explain the config files and procedures in the "right way".
My questions are:
What is the idea behind removing plugin.ini from the --config-file options?
Is ml2_conf.ini obsolete?
Is plugin.ini obsolete?
--
Best regards,
Andrey Markelov
[Rdo-list] [meeting] RDO meeting (2015-11-04)
by Alan Pevec
=============================
#rdo: RDO meeting (2015-11-04)
=============================
Meeting started by apevec at 15:01:06 UTC. The full logs are available
at
http://meetbot.fedoraproject.org/rdo/2015-11-04/rdo.2015-11-04-15.01.log....
.
Meeting summary
---------------
* rollcall (apevec, 15:01:43)
* agenda at https://etherpad.openstack.org/p/RDO-Packaging (apevec,
15:01:53)
* Mitaka Summit reports (apevec, 15:05:03)
* release management session highlights - desynchronized Mitaka
milestones and stable/liberty point releases (apevec, 15:20:18)
* upstream packaging-rpm get momentum - extend core, deliverables for
Mitaka (apevec, 15:20:49)
* use Delorean tool for upstream infra builds, packaging-RPM builds
will replace RDO delorean at some point (number80, 15:23:06)
* RDO meetup (apevec, 15:26:12)
* 70 people at RDO meetup, minutes coming, please blog your views if
you were there! (apevec, 15:27:19)
* ACTION: number80 write blog post about summit (number80, 15:32:03)
* ACTION: trown blog about summit from rdo-manager perspective
(trown, 15:32:19)
* ACTION: rbowen start earlier our quest for RDO meetup room (apevec,
15:32:42)
* RDO Mitaka themes (apevec, 15:33:39)
* python3 (apevec, 15:34:55)
* switch to PyMySQL
https://trello.com/c/q0VoAYJq/89-migrate-mysql-python-to-pymysql
(apevec, 15:36:27)
* ACTION: jpena to include fedora rawhide worker in delorean rebuild
(jpena, 15:39:17)
* -tests subpackages and testdeps / enable %check (apevec, 15:42:19)
* ACTION: apevec add card for tracking -tests subpackages / testdeps
/ enable %check progress (apevec, 15:43:13)
* LINK:
https://trello.com/c/pFBmc3rk/80-bump-rdo-liberty-ci-tests-from-minimal-t...
(apevec, 15:49:13)
* DLM support (apevec, 15:51:54)
* more automation (apevec, 15:55:44)
* rdo-manager quickstart (apevec, 15:57:28)
* FOSDEM event (apevec, 16:00:07)
* Delorean instance rebuild on Nov 5 (apevec, 16:00:46)
* open floor (apevec, 16:01:54)
Meeting ended at 16:02:33 UTC.
Action Items
------------
* number80 write blog post about summit
* trown blog about summit from rdo-manager perspective
* rbowen start earlier our quest for RDO meetup room
* jpena to include fedora rawhide worker in delorean rebuild
* apevec add card for tracking -tests subpackages / testdeps / enable
%check progress
Action Items, by person
-----------------------
* apevec
* apevec add card for tracking -tests subpackages / testdeps /
enable %check progress
* jpena
* jpena to include fedora rawhide worker in delorean rebuild
* number80
* number80 write blog post about summit
* rbowen
* rbowen start earlier our quest for RDO meetup room
* trown
* trown blog about summit from rdo-manager perspective
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* apevec (105)
* number80 (47)
* dmsimard (23)
* rbowen (21)
* EmilienM (18)
* jpena (14)
* trown (12)
* zodbot (9)
* jruzicka (5)
* jschlueter (5)
* olap (4)
* Humbedooh (2)
* chandankumar (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot