[Rdo-list] (no subject)
by Vedsar Kushwaha
Can anyone explain to me the meaning of the following line, taken from the page
"https://openstack.redhat.com/Neutron_with_existing_external_network":
"You need to recreate the public subnet with an allocation range outside of
your external DHCP range and set the gateway to the default gateway of the
external network. "
My IP address is 10.16.37.222, and I'm on an institute network behind a proxy.
What happens if I give an allocation range from within the external DHCP range?
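For reference, the subnet-creation command on that wiki page looks roughly like
the following (I've substituted placeholder values based on my 10.16.37.x
network; the pool and gateway here are guesses, not real addresses):

    # the allocation pool must lie outside the external DHCP server's range
    neutron subnet-create --name public_subnet \
        --enable_dhcp=False \
        --allocation-pool start=10.16.37.100,end=10.16.37.120 \
        --gateway=10.16.37.1 \
        external_network 10.16.37.0/24

My reading is that if the pool overlapped the external DHCP range, neutron and
the external DHCP server could both hand out the same address and cause IP
conflicts, but please correct me if that's wrong.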
Also, could you point me to some good links on setting up OpenStack
networking? The default network I got after the RDO installation is 172.24.4.0/24.
--
Vedsar Kushwaha
M.Tech-Computational Science
Indian Institute of Science
[Rdo-list] [meeting] RDO packaging meeting minutes (2015-02-18)
by Alan Pevec
========================================
#rdo: RDO packaging meeting (2015-02-18)
========================================
Meeting started by apevec at 15:01:47 UTC. The full logs are available
at
http://meetbot.fedoraproject.org/rdo/2015-02-18/rdo.2015-02-18-15.01.log....
Meeting summary
---------------
* roll-call (apevec, 15:02:03)
* agenda at https://etherpad.openstack.org/p/RDO-Packaging (apevec,
15:02:15)
* RDO update CI status (apevec, 15:04:49)
* rdo-update internal and external events are now triggering phase1,
should have legit results shortly after meeting (eggmaster,
15:06:37)
* eggmaster will retrigger queued recent updates to send them through
phase1 (eggmaster, 15:06:59)
* LINK: https://prod-rdojenkins.rhcloud.com/ (apevec, 15:08:28)
* ACTION: jruzicka to push rdopkg 0.25 (apevec, 15:08:39)
* ACTION: apevec to move pending updates from internal gerrit to
gerrithub rdo-update.git (apevec, 15:08:58)
* Kilo Packstack/OPM status (apevec, 15:11:39)
* current Delorean openstack-puppet-modules is building from all
puppet modules master branches (apevec, 15:12:27)
* which does not work with Packstack (apevec, 15:12:38)
* ACTION: apevec is modifying build_rpm_opm.sh to build from
redhat-openstack/OPM master branch (apevec, 15:13:06)
* ACTION: gchamoul to create packstack kilo branch (apevec, 15:26:15)
* ACTION: gchamoul to create packstack/opm kilo branches (gchamoul,
15:26:47)
* ACTION: apevec will modify build_rpm_opm.sh to build from
redhat-openstack/OPM master-patches (apevec, 15:33:04)
* EL6 Juno status (apevec, 15:37:21)
* number80 started working on clients (apevec, 15:38:31)
* open floor (apevec, 15:41:06)
* LINK: http://trunk.rdoproject.org/ is a Fedora test page. So we
still need that DNS record updated, right? (rbowen, 15:46:27)
* ACTION: rbowen will provide index.html landing page for
trunk.rdoproject.org (rbowen, 15:49:28)
* LINK: https://etherpad.openstack.org/p/RDO-Trunk (apevec,
15:51:35)
Meeting ended at 15:55:45 UTC.
Action Items
------------
* jruzicka to push rdopkg 0.25
* apevec to move pending updates from internal gerrit to gerrithub
rdo-update.git
* apevec is modifying build_rpm_opm.sh to build from
redhat-openstack/OPM master branch
* gchamoul to create packstack kilo branch
* gchamoul to create packstack/opm kilo branches
* apevec will modify build_rpm_opm.sh to build from redhat-openstack/OPM
master-patches
* rbowen will provide index.html landing page for trunk.rdoproject.org
Action Items, by person
-----------------------
* apevec
* apevec to move pending updates from internal gerrit to gerrithub
rdo-update.git
* apevec will modify build_rpm_opm.sh to build from
redhat-openstack/OPM master-patches
* gchamoul
* gchamoul to create packstack/opm kilo branches [op.ed. not yet,
master branches will be used for Kilo]
* jruzicka
* jruzicka to push rdopkg 0.25
* rbowen
* rbowen will provide index.html landing page for trunk.rdoproject.org
People Present (lines said)
---------------------------
* apevec (112)
* gchamoul (27)
* rbowen (13)
* eggmaster (13)
* number80 (13)
* ihrachyshka (4)
* derekh (4)
* zodbot (3)
* jruzicka (3)
* Rodrigo_US (2)
* ryansb (1)
* panda (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
[Rdo-list] Install rdo on rack system Hardware and GPU
by jupiter
Hi,
I'd like to install RDO manually, and I find that the Red Hat Enterprise Linux
OpenStack Platform 5 document has a very good description of installing
OpenStack manually. Can I install RDO on CentOS 7 by following the
instructions in that document?
Also, I'd like to install RDO on a rack hardware system with Ceph storage and
GPUs; has anyone succeeded on a rack system with GPUs?
I'd appreciate any advice or recommendations for good, inexpensive rack hardware.
Thank you.
Kind regards,
- j
[Rdo-list] rdopkg overview
by Steve Linabery
I have been struggling with the amount of information to convey and what level of detail to include. Since I can't seem to get it perfect to my own satisfaction, here is the imperfect (and long, sorry) version to begin discussion.
This is an overview of where things stand (rdopkg CI 'v0.1').
Terminology:
'Release' refers to an OpenStack release (e.g. havana, icehouse, juno)
'Dist' refers to a distro supported by RDO (e.g. fedora-20, epel-6, epel-7)
'phase1' is the initial smoketest for an update submitted via `rdopkg update`
'phase2' is a full-provision test for accumulated updates that have passed phase1
'snapshot' means an OpenStack snapshot of a running instance, i.e. a disk image created from a running OS instance.
The very broad strokes:
-----------------------
rdopkg CI is triggered when a packager uses `rdopkg update`.
When a review lands in the rdo-update gerrit project, a 'phase1' smoketest is initiated via jenkins for each Release/Dist combination present in the update (e.g. if the update contains builds for icehouse/fedora-20 and icehouse/epel-6, each set of RPMs from each build will be smoketested on an instance running the associated Release/Dist). If *all* supported builds from the update pass phase1, then the update is merged into rdo-update. Updates that pass phase1 accumulate in the updates/ directory in the rdo-update project.
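As a purely hypothetical illustration of what such an update 'unit' contains
(the file name and field names here are invented and may not match the real
schema), think of something like:

    $ cat updates/nova-icehouse-fix.yml
    builds:
    - id: openstack-nova-2014.1.3-2.el6    # build NVR
      repo: icehouse                       # Release
      dist: epel-6                         # Dist
    - id: openstack-nova-2014.1.3-2.fc20
      repo: icehouse
      dist: fedora-20

Both builds above would be smoketested, one on an icehouse/epel-6 instance and
one on icehouse/fedora-20, and the update merges only if both pass.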
Periodically, a packager may run 'phase2'. This takes everything in updates/ and uses those RPMs + RDO production repo to provision a set of base images with packstack aio. Again, a simple tempest test is run against the packstack aio instances. If all pass, then phase2 passes, and the `rdopkg update` yaml files are moved from updates/ to ready/.
At that point, someone with the keys to the stage repos will push the builds in ready/ to the stage repo. If CI against stage repo passes, stage is rsynced to production.
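In pseudo-shell, with made-up paths and repo locations, the bookkeeping and
promotion steps amount to:

    # phase2 passed: mark the accumulated updates as ready
    git mv updates/*.yml ready/
    git commit -m "phase2 passed $(date -I)"

    # later, someone with stage access pushes the ready/ builds to stage;
    # once CI against the stage repo passes:
    rsync -av /srv/repos/stage/ /srv/repos/production/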
Complexity, Part 1:
-------------------
Rdopkg CI v0.1 was designed around the use of OpenStack VM disk snapshots. On a periodic basis, we provision two nodes for each supported combination in [Releases] X [Dists] (e.g. "icehouse, fedora-20" "juno, epel-7" etc). One node is a packstack aio instance built against RDO production repos, and the other is a node running tempest. After a simple tempest test passes for all the packstack aio nodes, we would snapshot the set of instances. Then when we want to do a 'phase1' test for e.g. "icehouse, fedora-20", we can spin up the instances previously snapshotted and save the time of re-running packstack aio.
Using snapshots saves approximately 30 min of wait time per test run by skipping provisioning. Using snapshots imposes a few substantial costs/complexities though. First and most significant, snapshots need to be reinstantiated using the same IP addresses that were present when packstack and tempest were run during the provisioning. This means we have to have concurrency control around running only one phase1 run at a time; otherwise an instance might fail to provision because its 'static' IP address is already in use by another run. The second cost is that in practice, a) our OpenStack infrastructure has been unreliable, b) not all Release/Dist combinations reliably provision. So it becomes hard to create a full set of snapshots reliably.
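To make the IP constraint concrete, here is a sketch (image name, net-id, and
address are placeholders) of reinstantiating a snapshot; the v4-fixed-ip has to
be exactly the address the snapshot was provisioned with, which is why two
phase1 runs can't safely overlap:

    # respin the snapshotted packstack-aio node on its original address
    nova boot --image icehouse-f20-aio-snap \
        --flavor m1.large \
        --nic net-id=PRIVATE_NET_UUID,v4-fixed-ip=172.16.1.10 \
        phase1-icehouse-f20-aio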
Additionally, some updates (e.g. when an update comes in for openstack-puppet-modules) prevent the use of a previously-provisioned packstack instance. Continuing with the o-p-m example: that package is used for provisioning. So simply updating the RPM for that package after running packstack aio doesn't tell us anything about the package sanity (other than perhaps if a new, unsatisfied RPM dependency was introduced).
Another source of complexity comes from the nature of the rdopkg update 'unit'. Each yaml file created by `rdopkg update` can contain multiple builds for different Release,Dist combinations. So there must be a way to 'collate' the results of each smoketest for each Release,Dist and pass phase1 only if all updates pass. Furthermore, some combinations of Release,Dist are known (at times, for various ad hoc reasons) to fail testing, and those combinations sometimes need to be 'disabled'. For example, if we know that icehouse/f20 is 'red' on a given day, we might want an update containing icehouse/fedora-20,icehouse/epel-6 to test only the icehouse/epel-6 combination and pass if that passes.
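A minimal bash sketch of that collation logic, with invented variable and
function names:

    DISABLED="icehouse/fedora-20"        # combos known to be 'red' today

    overall=0
    for combo in $COMBOS_IN_UPDATE; do   # e.g. "icehouse/fedora-20 icehouse/epel-6"
        case " $DISABLED " in
            *" $combo "*) echo "skipping disabled combo $combo"; continue ;;
        esac
        run_phase1_smoketest "$combo" || overall=1
    done
    exit $overall                        # pass only if every enabled combo passed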
Finally, pursuant to the previous point, there need to be 'control' structure jobs for provision/snapshot, phase1, and phase2 runs that pass (and perform some action upon passing) only when all their 'child' jobs have passed.
The way we have managed this complexity to date is through the use of the jenkins BuildFlow plugin. Here's some ASCII art (courtesy of 'tree') to show how the jobs are structured now (these are descriptive job names, not the actual jenkins job names). BuildFlow jobs are indicated by (bf).
.
`-- rdopkg_master_flow (bf)
|-- provision_and_snapshot (bf)
| |-- provision_and_snapshot_icehouse_epel6
| |-- provision_and_snapshot_icehouse_f20
| |-- provision_and_snapshot_juno_epel7
| `-- provision_and_snapshot_juno_f21
|-- phase1_flow (bf)
| |-- phase1_test_icehouse_f20
| `-- phase1_test_juno_f21
`-- phase2_flow (bf)
|-- phase2_test_icehouse_epel6
|-- phase2_test_icehouse_f20
|-- phase2_test_juno_epel7
`-- phase2_test_juno_f21
When a change comes in from `rdopkg update`, the rdopkg_master_flow job is triggered. It's the only job that gets triggered from gerrit, so it kicks off phase1_flow. phase1_flow runs 'child' jobs (normal jenkins jobs, not buildflow) for each Release,Dist combination present in the update.
provision_and_snapshot is run by manually setting a build parameter (BUILD_SNAPS) in the rdopkg_master_flow job, and triggering the build of rdopkg_master_flow.
phase2 is invoked similarly to the provision_and_snapshot build, by checking 'RUN_PHASE2' in the rdopkg_master_flow build parameters before executing a build thereof.
Concurrency control is a side effect of requiring the user or gerrit to execute rdopkg_master_flow for every action. There can be only one rdopkg_master_flow build executing at any given time.
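Ignoring that BuildFlow definitions are really written in a Groovy DSL, the
dispatch inside rdopkg_master_flow boils down to something like this
pseudo-shell (where 'trigger' stands in for however the flow starts a child
build):

    if [ "$BUILD_SNAPS" = "true" ]; then
        trigger provision_and_snapshot   # manual: rebuild the snapshot set
    elif [ "$RUN_PHASE2" = "true" ]; then
        trigger phase2_flow              # manual: test accumulated updates/
    else
        trigger phase1_flow              # default: gerrit-triggered smoketest
    fi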
Complexity, Part 2:
-------------------
In addition to the nasty complexity of using nested BuildFlow type jobs, each 'worker' job (i.e. the non-BuildFlow type jobs) has some built-in complexity that is reflected in the amount of logic in each job's bash script definition.
Some of this has been alluded to in previous points. For instance, each job in the phase1 flow needs to determine, for each update, if the update contains a package that requires full packstack aio provisioning from a base image (e.g. openstack-puppet-modules). This 'must provision' list needs to be stored somewhere that all jobs can read it, and it needs to be dynamic enough to add to it as requirements dictate.
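A sketch of that check, with a hypothetical location for the shared list:

    # shared, editable list of packages whose updates invalidate snapshots
    MUST_PROVISION=$(curl -s http://ci-config.example.com/must_provision.txt)

    FULL_PROVISION=no
    for pkg in $PACKAGES_IN_UPDATE; do
        if grep -qx "$pkg" <<< "$MUST_PROVISION"; then
            FULL_PROVISION=yes           # e.g. openstack-puppet-modules
        fi
    done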
But additionally, for package sets not requiring provisioning from a base image, the phase1 job needs to query the backing OpenStack instance to see whether a 'known good' snapshot exists, get the images' UUIDs from OpenStack, and spin up the instances using the snapshot images.
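That lookup is roughly (sketch only; the snapshot naming convention is
invented):

    # find the most recent 'known good' snapshot for this Release/Dist
    SNAP_UUID=$(glance image-list | awk '/snap-icehouse-f20-good/ {print $2}' | tail -1)
    if [ -n "$SNAP_UUID" ]; then
        IMAGE="$SNAP_UUID"               # reuse the snapshot
    else
        IMAGE="$BASE_IMAGE_UUID"         # fall back to full packstack-aio provisioning
    fi
    nova boot --image "$IMAGE" --flavor m1.large phase1-node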
This baked-in complexity in the 'worker' jenkins jobs has made it difficult to maintain the job definitions, and more importantly difficult to run using jjb or in other more 'orthodox' CI-type ways. The rdopkg CI stuff is a bastard child of a fork. It lives in its own mutant gene pool.
A Way Forward...?
-----------------
Wes Hayutin had a good idea that might help reduce some of the complexity here as we contemplate a) making rdopkg CI public, b) moving toward rdopkg CI 0.2.
His idea was a) stop using snapshots since the per-test-run savings doesn't seem to justify the burden they create, b) do away with BuildFlow by including the 'this update contains builds for (Release1,Dist2),...,(ReleaseN,DistM)' information in the gerrit change topic.
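For example (topic format invented for illustration), a change with topic
'icehouse_fedora-20+icehouse_epel-6' could be fanned out by a plain
gerrit-triggered job with no BuildFlow at all:

    for combo in ${GERRIT_TOPIC//+/ }; do
        trigger_smoketest_job "${combo%_*}" "${combo#*_}"   # release, dist
    done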
I think that's a great idea, but I have a superstitious gut feeling that we may lose some 'transaction'y-ness from the current setup. For example, what happens if phase1 and phase2 overlap their execution? It's not that I have evidence that this will be a problem; it's more that we had these issues worked out fairly well with rdopkg CI 0.1, and I think the change warrants some scrutiny/thought (which clearly I have not done!).
We'd still need to work out a way to execute phase2, though. There would be no `rdopkg update` event to trigger phase2 runs. I'm not sure how we'd do that without a BuildFlow. BuildFlow jobs also allow parallelization of the child jobs, and I'm not sure how we could replicate that without using that type of job.
Whew. I hope this was helpful. I'm saving a copy of this text to http://slinabery.fedorapeople.org/rdopkg-overview.txt
Cheers,
Steve Linabery (freenode: eggmaster)
Senior Software Engineer, Red Hat, Inc.
[Rdo-list] RDO/OpenStack meetups coming up (Feb 16, 2015)
by Rich Bowen
The following are the meetups I'm aware of in the coming week where RDO
enthusiasts are likely to be present. If you know of others, please let
me know, and/or add them to http://openstack.redhat.com/Events
If there's a meetup in your area, please consider attending. It's the
best way to find out what interesting things are going on in the larger
community, and a great way to make contacts that will help you solve
your own problems in the future.
--Rich
* Monday, February 16 in Guadalajara, MX: OpenStack & Docker -
http://www.meetup.com/OpenStack-GDL/events/220237882/
* Tuesday, February 17 in Calgary, AB, CA: OpenStack Networking and Data
Storage solutions -
http://www.meetup.com/Calgary-OpenStack-Meetup/events/219945084/
* Tuesday, February 17 in Chesterfield, MO, US: OpenStack Object Storage
- http://www.meetup.com/OpenStack-STL/events/220318049/
* Wednesday, February 18 in Helsinki, FI: OpenShift Users Meetup -
http://www.meetup.com/RedHatFinland/events/219689228/
* Wednesday, February 18 in Cambridge, MA, US: Platform as a Service
(PaaS) and OpenStack Architecture / Orchestration -
http://www.meetup.com/Cloud-Centric-Boston/events/220265219/
* Wednesday, February 18 in Pasadena, CA, US: What is Trove? OpenStack
L.A. February '15 Meetup -
http://www.meetup.com/OpenStack-LA/events/219262037/
* Wednesday, February 18 in Stuttgart, DE: 1. Treffen (first meetup) -
http://www.meetup.com/OpenStack-Baden-Wuerttemberg/events/219990894/
* Wednesday, February 18 in Santa Monica, CA, US: February 2015
OpenStack LA - OpenStack Trove Project -
http://www.meetup.com/LAWebSpeed/events/220282039/
* Thursday, February 19 in Los Angeles, CA, US: SCaLE 13x -
http://www.meetup.com/LinuxLA/events/219676387/
* Thursday, February 19 in Vancouver, BC, CA: OpenStack Networking and
Data storage solutions -
http://www.meetup.com/Vancouver-OpenStack-Meetup/events/220329956/
* Thursday, February 19 in Boston, MA, US: Double Header Meetup!
OpenStack and VMWare Integration Best Practices -
http://www.meetup.com/Openstack-Boston/events/218863008/
* Thursday, February 19 in Baltimore, MD, US: OpenStack Baltimore Meetup
#2 - http://www.meetup.com/OpenStack-Baltimore/events/219933731/
* Thursday, February 19 in Austin, TX, US: Speed OpenStack NFV
deployments with PLUMgrid -
http://www.meetup.com/OpenStack-Austin/events/218909556/
* Thursday, February 19 in Whittier, CA, US: Introduction to Red Hat and
OpenShift (cohosted with Cal Poly Pomona) -
http://www.meetup.com/Southern-California-Red-Hat-User-Group-RHUG/events/...
* Thursday, February 19 in Sunnyvale, CA, US: Openstack, Containers, and
the Private Cloud - http://www.meetup.com/BayLISA/events/219854114/
* Monday, February 23 in Saint Paul, MN, US: Kicking off 2015 with a
huge Minnesota OpenStack Meetup! -
http://www.meetup.com/Minnesota-OpenStack-Meetup/events/219791086/
* Monday, February 23 in Melbourne, AU: Australian OpenStack User Group
- Quarterly Brisbane Meetup -
http://www.meetup.com/Australian-OpenStack-User-Group/events/201085722/
--
Rich Bowen - rbowen(a)redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/
[Rdo-list] Query on RDO networking
by Deepak Shetty
Hi,
Do we have any documentation on the right way, in RDO, for Nova
VMs to connect to an external network (e.g. to reach 8.8.8.8)? I asked this
on #rdo but didn't get a response, hence this mail.
I am aware of the br-ex - eth0/1/2 magic (I know it from my devstack
experience), but I'm wondering whether we need to do the same manually on the
RDO network node, or whether there is some automated way of achieving it in RDO.
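For context, the manual wiring I mean is roughly this (interface names and
addresses are just examples):

    # /etc/sysconfig/network-scripts/ifcfg-br-ex -- external OVS bridge
    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=192.0.2.10      # the node's former eth0 address moves here
    NETMASK=255.255.255.0
    GATEWAY=192.0.2.1
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- enslaved to br-ex
    DEVICE=eth0
    DEVICETYPE=ovs
    TYPE=OVSPort
    OVS_BRIDGE=br-ex
    ONBOOT=yes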
I am on a 3-node RDO setup (Controller, Network, Compute), using
rdo-release-juno-1.noarch
thanx,
deepak