[Infra] images server cleanup
by Gabriele Cerami
Hi,
we'd like to take an active part in cleaning up after ourselves the
images we upload at each TripleO promotion. We are planning to do the
same for the container images on Dockerhub, so part of this process has
to be built anyway. (Maybe we should also do it on rdoregistry.)
Since our access to the server is limited to sftp, we are thinking about
using the paramiko library in our promoter script to get the list of
uploaded hashes and their mtimes, so we can delete the oldest ones.
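As a rough sketch of what we have in mind (the host, path, login and
retention count here are placeholders, not the real promoter settings):

import paramiko

KEEP = 10  # placeholder: how many of the newest hashes to retain

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('images.example.com', username='uploader')  # placeholders
sftp = client.open_sftp()

# listdir_attr() returns SFTPAttributes entries, which carry st_mtime
entries = sftp.listdir_attr('/srv/images')  # placeholder path
entries.sort(key=lambda e: e.st_mtime, reverse=True)

for entry in entries[KEEP:]:
    # everything past the KEEP newest entries is a candidate for removal
    print('would delete', entry.filename)
    # sftp.rmdir() only removes empty directories, so deleting a hash
    # directory for real would mean recursing and sftp.remove()-ing files

sftp.close()
client.close()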
Is there any better solution?
Thanks
Changes to RDO Office Hour schedule
by Chandan kumar
Hello,
In the last RDO community meeting, we decided to make some
changes to the RDO Office Hour.
From now on, the RDO Office Hour will happen bi-weekly and last one hour.
The next RDO Office Hour is on 14th Nov, 2017.
New timing: 13:30 UTC to 14:30 UTC
Thanks,
Chandan Kumar
Maintenance on the RDO container registry tonight at 12AM UTC
by David Moreau Simard
Hi,
We'll be doing a short maintenance on the RDO container registry tonight at
12AM UTC (the night from Wednesday to Thursday) in order to grow the volume
where the container images are hosted.
We'll try to coordinate around the status of the periodic jobs on
review.rdoproject.org's Zuul, but there's no guarantee it will be entirely
without impact.
Thanks,
David Moreau Simard
Senior Software Engineer | OpenStack RDO
dmsimard = [irc, github, twitter]
[Meeting] RDO meeting (2017-10-25) minutes
by Alfredo Moralejo Alonso
==============================
#rdo: RDO meeting - 2017-10-25
==============================
Meeting started by amoralej at 15:01:10 UTC. The full logs are
available at
http://eavesdrop.openstack.org/meetings/rdo_meeting___2017_10_25/2017/rdo...
.
Meeting summary
---------------
* roll call (amoralej, 15:01:28)
* Make RDO Office Hour biweekly and duration to one hour (amoralej,
15:04:33)
* AGREED: to do RDO Office Hour biweekly and duration to one hour
(amoralej, 15:08:37)
* ACTION: chandankumar will send a mail to communicate new schedule
(amoralej, 15:08:58)
* infra: any problem to report after the ML migration? (amoralej,
15:09:36)
* no issues have been reported after mailing lists migration
(amoralej, 15:18:34)
* given master CI status, delay Queens milestone 1 to the next week?
https://www.rdoproject.org/testday/queens/milestone1/ (amoralej,
15:18:42)
* LINK: https://dashboards.rdoproject.org/rdo-dev (apevec, 15:19:18)
* ACTION: apevec to update testdays page and move queens1 testday to
Nov2/3 (apevec, 15:23:23)
* longer EOL goodbye for Newton - keep trunk running for some projects
longer (apevec) (amoralej, 15:27:32)
* http://lists.openstack.org/pipermail/openstack-dev/2017-October/123624.html
  (amoralej, 15:27:44)
* LINK:
https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-p...
(amoralej, 15:44:15)
* ACTION: apevec to create Newton EOL card in rdo trello (apevec,
15:51:27)
* who will chair next week? (amoralej, 15:52:24)
* ACTION: ykarel will chair the meeting next week (amoralej,
15:57:42)
* open floor (amoralej, 15:57:50)
* LINK: https://trello.com/b/WXJTwsuU/tripleo-and-rdo-ci-status is not
clear since it's just a scratchpad (apevec, 15:59:36)
Meeting ended at 16:02:30 UTC.
Action items, by person
-----------------------
* apevec
* apevec to update testdays page and move queens1 testday to Nov2/3
* apevec to create Newton EOL card in rdo trello
* chandankumar
* chandankumar will send a mail to communicate new schedule
* ykarel
* ykarel will chair the meeting next week
People present (lines said)
---------------------------
* amoralej (96)
* apevec (93)
* dmsimard (41)
* Duck (22)
* EmilienM (21)
* number80 (14)
* jpena (13)
* chandankumar (10)
* openstack (8)
* jjoyce (6)
* ykarel (5)
* apevec_ (3)
* jrist (3)
* PagliaccisCloud (2)
* eggmaster (2)
* adarazs (1)
* jruzicka (1)
* rdogerrit (1)
Generated by `MeetBot`_ 0.1.4
[dev] Mailing list changes
by Rich Bowen
This information also appears at
http://rdoproject.org/blog/2017/10/mailing-list-changes/ and should be
reflected on the mailing list details page at
http://rdoproject.org/contribute/mailing-lists/
You need to be aware of recent changes to our mailing lists.
What Happened, and Why?
Since the start of the project we have had one mailing list for both users
and developers of the RDO project. Over time, we felt that user questions
have been drowned out by the more technical developer-oriented discussion,
leaving users/operators out of the conversation.
To this end, we've decided to split the one mailing list -
rdo-list@redhat.com - into two new mailing lists: dev@lists.rdoproject.org
and users@lists.rdoproject.org.
We've also moved the rdo-newsletter@redhat.com list to the new
newsletter@lists.rdoproject.org email address.
What you need to do
You need to update your contacts list to reflect this change, and start
sending email to the new addresses.
As in any typical open source project, user conversations (questions,
discussion, community announcements, and so on) should go to the users
list, while developer related discussion should go to the dev list.
If you send email to the old address, you should receive an immediate
autoresponse reminding you of the new addresses.
List descriptions and archives are now all at
https://lists.rdoproject.org/mailman/listinfo. Please let me know if you
see references to the old list information, so we can get it updated.
Thanks!
--Rich
--
Rich Bowen - rbowen@redhat.com
@rbowen // @rdocommunity // @CentOSProject
859 351 9166
[rdo-list] Upcoming mailing list changes
by Rich Bowen
A few months ago we discussed splitting the mailing list into two - a users@
and dev@ list - and at the same time moving the list from @redhat.com to
@rdoproject.org.
After some delays and technical hurdles, this should be happening in the
coming few weeks.
Initially, you will be on the subscriber list for both of the new lists,
and it will be up to you to determine whether you stay on both, or just
one or the other.
You can read more details in this (not yet merged) pull request -
https://github.com/redhat-openstack/website/pull/1088/commits/ade9f345bec...
- about what things will look like once the task is completed. And you can
track the status of the issue in this ticket:
https://bugzilla.redhat.com/show_bug.cgi?id=1487324
Thanks for your patience.
--Rich
--
Rich Bowen - rbowen@redhat.com
@rbowen // @rdocommunity // @CentOSProject
859 351 9166
[rdo-list] tunneling to Horizon
by James LaBarre
I have experimented with various configurations, and I have yet to find
a combination that works.
I have a TripleO quickstart install (basic setup) and am trying to
connect to the Horizon dashboard. My scenario is like this:
Laptop at home connects to HWhost in server lab (can ssh directly
through a VPN to the HWHost, can ping)
Undercloud can be seen from HWhost, not from Laptop (have to ssh to
undercloud from HWHost, after having SSHed to HWHost from Laptop. Can
ping from HWhost, not from Laptop)
Controller and Compute can ping from undercloud, can ssh from undercloud
directly, can ssh from HWhost by redirecting through undercloud (?)
So with all this, how does one use a web browser to connect to Horizon?
Does the browser have to be running on HWHost or the Undercloud, or can it
be tunnelled back to the Laptop? It would seem I'd have to do multiple
tunnels (if that's even allowed).
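(For what it's worth, a single OpenSSH command can chain both hops; every
name below (the lab user, the undercloud address, Horizon's host and port)
is a guess at the setup above rather than something taken from it:

ssh -N -J labuser@HWhost stack@undercloud -L 8080:<horizon-host>:80

then point the Laptop's browser at http://localhost:8080/dashboard. The -J
option needs OpenSSH 7.3 or newer; older clients can get the same effect
with ProxyCommand or with two nested ssh -L tunnels.)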
[rdo-list] Problem with ha-router
by Cedric Lecomte
Hello all,
I tried to deploy RDO Pike without containers on our internal platform.
The setup is pretty simple:
- 3 Controller in HA
- 5 Ceph
- 4 Compute
- 3 Object-Store
I didn't use any exotic parameters.
This is my deployment command:
openstack overcloud deploy --templates \
  -e environement.yaml \
  --ntp-server 0.pool.ntp.org \
  -e storage-env.yaml \
  -e network-env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph.yaml \
  --control-scale 3 --control-flavor control \
  --compute-scale 4 --compute-flavor compute \
  --ceph-storage-scale 5 --ceph-storage-flavor ceph-storage \
  --swift-storage-flavor swift-storage --swift-storage-scale 3 \
  -e scheduler_hints_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
*environnement.yaml:*
parameter_defaults:
  ControllerCount: 3
  ComputeCount: 4
  CephStorageCount: 5
  OvercloudCephStorageFlavor: ceph-storage
  CephDefaultPoolSize: 3
  ObjectStorageCount: 3
*network-env.yaml:*
resource_registry:
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/swift-storage.yaml
parameter_defaults:
  InternalApiNetCidr: 172.16.0.0/24
  TenantNetCidr: 172.17.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  ManagementNetCidr: 172.20.0.0/24
  ExternalNetCidr: 10.41.11.0/24
  InternalApiAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  TenantAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
  ManagementAllocationPools: [{'start': '172.20.0.10', 'end': '172.20.0.200'}]
  # Leave room for floating IPs in the External allocation pool
  ExternalAllocationPools: [{'start': '10.41.11.10', 'end': '10.41.11.30'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 10.41.11.254
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.168.131.253
  # The IP address of the EC2 metadata server. Generally the IP of the Undercloud
  EC2MetadataIp: 192.0.2.1
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["10.38.5.26"]
  InternalApiNetworkVlanID: 202
  StorageNetworkVlanID: 203
  StorageMgmtNetworkVlanID: 204
  TenantNetworkVlanID: 205
  ManagementNetworkVlanID: 206
  ExternalNetworkVlanID: 198
  NeutronExternalNetworkBridge: "''"
  ControlPlaneSubnetCidr: '24'
  BondInterfaceOvsOptions: "mode=balance-xor"
*storage-env.yaml:*
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/sdb': {}
      '/dev/sdc': {}
      '/dev/sdd': {}
      '/dev/sde': {}
      '/dev/sdf': {}
      '/dev/sdg': {}
  SwiftRingBuild: false
  RingBuild: false
*scheduler_hints_env.yaml:*
parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'control-%index%'
  NovaComputeSchedulerHints:
    'capabilities:node': 'compute-%index%'
  CephStorageSchedulerHints:
    'capabilities:node': 'ceph-storage-%index%'
  ObjectStorageSchedulerHints:
    'capabilities:node': 'swift-storage-%index%'
After a little use, I found that one controller is unable to get an
active ha-router, and I got this output:
neutron l3-agent-list-hosting-router XXX
+--------------------------------------+------------------------------------+----------------+-------+----------+
| id                                   | host                               | admin_state_up | alive | ha_state |
+--------------------------------------+------------------------------------+----------------+-------+----------+
| 420a7e31-bae1-4f8c-9438-97839cf190c4 | overcloud-controller-0.localdomain | True           | :-)   | standby  |
| 6a943aa5-6fd1-4b44-8557-f0043b266a2f | overcloud-controller-1.localdomain | True           | :-)   | standby  |
| dd66ef16-7533-434f-bf5b-25e38c51375f | overcloud-controller-2.localdomain | True           | :-)   | standby  |
+--------------------------------------+------------------------------------+----------------+-------+----------+
So each time a router is scheduled on this controller, I can't get an
active router. I tried to compare the configurations but everything seems
to be fine. I redeployed to see if it helps, and the only thing that
changed is the controller where the ha-routers are stuck.
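(For reference while debugging: with neutron's default L3 HA layout,
keepalived writes a per-router state file on each controller, so the
agents' own view can be checked directly on the nodes; the router UUID
below is a placeholder:

sudo cat /var/lib/neutron/ha_confs/<router-uuid>/state

A healthy router reports "master" on exactly one controller and "backup"
on the others; a router stuck all-standby like the ones above would show
no "master" anywhere.)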
The only message that I got is from OVS:
2017-10-20 08:38:44.930 136145 WARNING neutron.agent.rpc
[req-0ad9aec4-f718-498f-9ca7-15b265340174 - - - - -] Device
Port(admin_state_up=True,allowed_address_pairs=[],binding=PortBinding,
binding_levels=[],created_at=2017-10-20T08:38:38Z,data_plane_status=<?>,
description='',device_id='a7e23552-9329-4572-a69d-d7f316fcc5c9',
device_owner='network:router_ha_interface',dhcp_options=[],
distributed_binding=None,dns=None,fixed_ips=[IPAllocation],
id=7b6d81ef-0451-4216-9fe5-52d921052cb7,mac_address=fa:16:3e:13:e9:3c,
name='HA port tenant 0ee0af8e94044a42923873939978ed42',
network_id=ffe5ffa5-2693-4d35-988e-7290899601e0,project_id='',
qos_policy_id=None,revision_number=5,
security=PortSecurity(7b6d81ef-0451-4216-9fe5-52d921052cb7),
security_group_ids=set([]),status='DOWN',
updated_at=2017-10-20T08:38:44Z) is not bound.
2017-10-20 08:38:44.944 136145 WARNING
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-0ad9aec4-f718-498f-9ca7-15b265340174 - - - - -] Device
7b6d81ef-0451-4216-9fe5-52d921052cb7 not defined on plugin or binding
failed
Any idea?
--
LECOMTE Cedric
Senior Software Engineer
Red Hat
<https://www.redhat.com>
clecomte@redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>