[rdo-list] OVS 2.7 on rdo

Numan Siddique nusiddiq at redhat.com
Fri Jun 16 01:03:40 UTC 2017


On Wed, Jun 14, 2017 at 4:37 PM, Numan Siddique <nusiddiq at redhat.com> wrote:

>
>
> On Tue, Jun 13, 2017 at 4:39 PM, Numan Siddique <nusiddiq at redhat.com>
> wrote:
>
>>
>>
>> On Tue, Jun 13, 2017 at 4:18 PM, Alan Pevec <apevec at redhat.com> wrote:
>>
>>> Hi Numan,
>>>
>>> thanks for testing! Few questions inline:
>>>
>>> >>>> > I tested by deploying upstream tripleo on my local setup by
>>> making use of these rpms and it works fine for me.
>>> >>>> Did you also try upgrade tests? That was the pain-point with the
>>> previous OVS updates.
>>> >>> No. I didn't try that.
>>> > I did some testing locally and these are my findings
>>> >  - After deploying an overcloud (with 3 controllers and 1 compute)
>>> > using tripleo-quickstart, I added the OVS 2.7 repo from here [1]
>>> > manually on all
>>>
>>> Was [1] http://cbs.centos.org/repos/cloud7-openstack-pike-candidate/x86_64/os/ ?
>>>
>>
>> No. It was http://cbs.centos.org/repos/cloud7-openstack-common-candidate/x86_64/os/
>>
>>
>>
>>> Can you confirm which openvswitch NVR you got?
>>>
>>
>> I am not sure which one. I will get back to you on this.
>>
>>
>>> > the nodes and triggered "openstack overcloud update stack".
>>> > - For some reason the update was always failing. Not sure what is
>>> > going wrong.
>>> > - So I manually ran "yum update" on the nodes.
>>> >  - OVS 2.7 was updated successfully, but the service was not
>>> > restarted.
>>>
>>> Numan, please report update stack failure as tripleo upstream LP bug.
>>> Adding Sofer from the upgrades team: is there upstream tripleo
>>> update/upgrade CI job which is testing with OVS 2.7 ?
>>>
>>
>> I want to test once more before I submit a bug, to be really sure I
>> haven't made any mistake.
>>
>
> Hi Alan, I ran the stack update again, but now with 1 controller and 1
> compute node and the stack update was successful.
>
> I have captured the terminal logs which you can find here -
> https://paste.fedoraproject.org/paste/FELBFKHNWiIwgNsd3o22~A
>
> Before starting the update, I created a VM with IP 10.0.0.9. I started a
> ping to it from the neutron dhcp namespace on the controller node. The
> ping was running successfully without any issues.
>
> As expected, the OVS packages were updated to 2.7 and the openvswitch
> service was not restarted. So essentially OVS 2.6 was still running even
> after the update.
>
> Then I manually restarted the openvswitch service, first on the compute
> node and then on the controller node. The ping was only interrupted
> during the service restarts and recovered afterwards.
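[The downtime observed during the service restarts can be put in numbers by reading ping's summary line. A minimal sketch; the helper function and the sample summary line are illustrative, not from the actual test:]

```shell
#!/bin/sh
# Extract the packet-loss percentage from a ping summary line, to
# quantify how long the dataplane was interrupted across a restart.
loss_pct() {
    # e.g. "10 packets transmitted, 8 received, 20% packet loss, time 9010ms"
    printf '%s\n' "$1" | sed -n 's/.* \([0-9.]*\)% packet loss.*/\1/p'
}

# On a real controller the summary would come from something like:
#   ip netns exec qdhcp-<network-id> ping -c 50 -i 0.2 10.0.0.9 | tail -n 2
loss_pct "10 packets transmitted, 8 received, 20% packet loss, time 9010ms"
# -> 20
```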
>
> Below is the repo file I added to get the OVS 2.7
>
> ******************************************************
> cat /etc/yum.repos.d/cbs.centos.org_repos_cloud7-openstack-pike-candidate_x86_64_os.repo
>
> [cbs.centos.org_repos_cloud7-openstack-pike-candidate_x86_64_os]
> name=added from: http://cbs.centos.org/repos/cloud7-openstack-pike-candidate/x86_64/os
> baseurl=http://cbs.centos.org/repos/cloud7-openstack-pike-candidate/x86_64/os
> enabled=1
> gpgcheck=0
> includepkgs=openvswitch,openvswitch-ovn*
> *********************************************************
>
>
>
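[A simple way to detect the "updated but not restarted" state described above is to compare the installed package version with the version the running daemon reports. A sketch; the helper function and the hard-coded version strings are illustrative:]

```shell
#!/bin/sh
# Return success ("restart pending") when the running ovs-vswitchd
# version differs from the installed openvswitch package version.
restart_pending() {
    installed="$1"   # e.g. from: rpm -q --qf '%{VERSION}' openvswitch
    running="$2"     # e.g. from: ovs-vswitchd --version | awk '{print $NF}'
    [ "$installed" != "$running" ]
}

# On a real node the two values would be gathered as in the comments
# above; here we use the versions from the test as an example.
if restart_pending "2.7.0" "2.6.1"; then
    echo "restart pending"
else
    echo "up to date"
fi
```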

Hi Alan,

Just an update. The test review [1] is passing for the CI job
gate-tripleo-ci-centos-7-multinode-upgrades-nv.
This patch updates OVS to 2.7 from the repo
http://cbs.centos.org/repos/cloud7-openstack-pike-candidate/x86_64/os and
restarts it at step0 of the upgrade.
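[For the record, the manual sequence used in the earlier test (compute nodes first, then controllers) can be sketched as below. The host names are hypothetical and the ssh line is only illustrative of what the real run would do:]

```shell
#!/bin/sh
# Rolling update/restart sketch: handle compute nodes before
# controllers to keep the dataplane disruption short.
order_hosts() {
    printf '%s\n' "$@" | grep compute
    printf '%s\n' "$@" | grep controller
}

for host in $(order_hosts overcloud-novacompute-0 overcloud-controller-0); do
    echo "would run on $host: yum -y update openvswitch && systemctl restart openvswitch"
    # Real run would be something like:
    #   ssh heat-admin@"$host" 'sudo yum -y update openvswitch && sudo systemctl restart openvswitch'
done
```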


[1] - https://review.openstack.org/#/c/473191/

http://logs.openstack.org/91/473191/9/check/gate-tripleo-ci-centos-7-multinode-upgrades-nv/3307cf8/logs/subnode-2/var/log/ovs_debug.txt.gz

http://logs.openstack.org/91/473191/9/check/gate-tripleo-ci-centos-7-multinode-upgrades-nv/3307cf8/logs/subnode-2/var/log/openvswitch/ovsdb-server.txt.gz

Thanks
Numan



> Thanks
> Numan
>
>
>> I don't think there is any CI job yet. I tried to be bold and submitted a
>> patch to test it here (please note it's only for testing) -
>> https://review.openstack.org/#/c/473191/
>>
>> In puppet/services/openvswitch_upgrade.yaml, I tried to install OVS 2.7
>> and restart it.
>> From the logs I could see that OVS was never updated (not sure why), and
>> the ansible task to restart openvswitch (still with version 2.6) makes
>> the node lose the network completely. I will explore this job more and
>> see how it goes.
>>
>> But from what I understand, upgrading OVS (presently) has a downtime
>> overhead, i.e. we can update the OVS package, but restarting OVS with the
>> updated version causes some downtime or requires a reboot. I may be wrong
>> here, but that's my understanding. If this is the case, I am not sure we
>> can test this scenario in CI other than just updating the package without
>> a restart.
>>
>>
>>
>>> >  - After the update everything was working fine.
>>>
>>> Working fine without ovs service restart?
>>>
>>
>> That's right.
>>
>>
>>>
>>> >  - Then I restarted all the nodes so that OVS 2.7 would start up.
>>> >  - After the restart, my tenant networks were working fine.
>>>
>>> So that sounds promising, but I'd like to get details on the update
>>> stack failure before we proceed with promoting the OVS 2.7 packages to
>>> the RDO Pike repo.
>>> Also note that the 2.7.0-1 RPM from the v2.7.0 tag is missing > 100
>>> backports on branch-2.7; upstream should really consider releasing 2.7.1.
>>>
>>>
>> I have requested a new tag on the OVS mailing list. Hopefully it will be
>> done soon.
>>
>> Thanks
>> Numan
>>
>>
>>> Cheers,
>>> Alan
>>>
>>> > Thanks
>>> > Numan
>>>
>>> >>>> > Is it possible to cross-tag for RDO pike ?
>>> >>>> I've tagged openvswitch-2.7.0-1.el7 for pike-candidate for now so we
>>> >>>> can run full RDO CI on it.
>>> >>>> Alfredo, David, Emilien, can you please run pipeline jobs with the
>>> >>>> http://cbs.centos.org/repos/cloud7-openstack-pike-candidate/x86_64/os/
>>> >>>> repo added ?
>>>
>>
>>
>