[Rdo-list] HA with network isolation on virt howto

Pedro Sousa pgsousa at gmail.com
Wed Oct 21 19:17:41 UTC 2015


Hi Marius,

[stack@undercloud environments]$ cat network-environment.yaml
parameter_defaults:
  InternalApiNetCidr: 192.168.100.0/24
  StorageNetCidr: 192.168.101.0/24
  StorageMgmtNetCidr: 192.168.102.0/24
  TenantNetCidr: 10.0.20.0/24
  ExternalNetCidr: 192.168.174.0/24
  InternalApiAllocationPools: [{'start': '192.168.100.10', 'end': '192.168.100.100'}]
  StorageAllocationPools: [{'start': '192.168.101.10', 'end': '192.168.101.100'}]
  StorageMgmtAllocationPools: [{'start': '192.168.102.10', 'end': '192.168.102.100'}]
  TenantAllocationPools: [{'start': '10.0.20.10', 'end': '10.0.20.100'}]
  ExternalAllocationPools: [{'start': '192.168.174.35', 'end': '192.168.174.50'}]
  ExternalInterfaceDefaultRoute: 192.168.174.1
  ControlPlaneSubnetCidr: "24"
  ControlPlaneDefaultRoute: 192.168.21.30
  EC2MetadataIp: 192.168.21.30
  DnsServers: ["8.8.8.8", "8.8.4.4"]
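
(For context, this file gets passed with -e at deploy time alongside the
network isolation environments; roughly:

  openstack overcloud deploy --templates ~/the-cloud/ \
    -e ~/the-cloud/environments/network-isolation.yaml \
    -e ~/the-cloud/environments/net-single-nic-with-vlans.yaml \
    -e ~/the-cloud/environments/network-environment.yaml

the full command I use is quoted further down the thread.)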

I'll test it out following the etherpad, thanks.

On Wed, Oct 21, 2015 at 6:41 PM, Marius Cornea <marius at remote-lab.net>
wrote:

> It's definitely a bug; the deployment shouldn't pass without
> completing keystone init. What's the content of your
> network-environment.yaml?
>
> I'm not sure if this is related, but it's worth trying an installation
> with the GA bits; the docs are being updated to describe the steps.
> Some useful notes can be found here:
> https://etherpad.openstack.org/p/RDO-Manager_liberty
>
> trown ╡ mcornea: the important bit is to use `yum install -y
> http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm`
> for undercloud repos, and `export RDO_RELEASE='liberty'` for image
> build
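>
> Putting that together, on the undercloud it would look roughly like
> this (a sketch; verify the image build step against the etherpad
> notes):
>
>   sudo yum install -y http://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm
>   export RDO_RELEASE='liberty'
>   openstack overcloud image build --all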
>
> On Wed, Oct 21, 2015 at 6:54 PM, Pedro Sousa <pgsousa at gmail.com> wrote:
> > Yes, I've done that already; however, it never runs keystone init. Is
> > there something wrong in my deployment command "openstack overcloud
> > deploy", or do you think it's a bug/conf issue?
> >
> > Thanks
> >
> > On Wed, Oct 21, 2015 at 5:50 PM, Marius Cornea <marius at remote-lab.net>
> > wrote:
> >>
> >> To delete the overcloud you need to run `heat stack-delete overcloud`
> >> and wait until it finishes (check `heat stack-list`).
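> >>
> >> For example:
> >>
> >>   heat stack-delete overcloud
> >>   heat stack-list    # repeat until the overcloud stack is gone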
> >>
> >> On Wed, Oct 21, 2015 at 6:29 PM, Pedro Sousa <pgsousa at gmail.com> wrote:
> >> > You're right, I didn't get that output; keystone init didn't run:
> >> >
> >> > $ openstack overcloud deploy --control-scale 3 --compute-scale 1
> >> > --libvirt-type kvm --ntp-server pool.ntp.org --templates ~/the-cloud/
> >> > -e ~/the-cloud/environments/puppet-pacemaker.yaml
> >> > -e ~/the-cloud/environments/network-isolation.yaml
> >> > -e ~/the-cloud/environments/net-single-nic-with-vlans.yaml
> >> > -e ~/the-cloud/environments/network-environment.yaml
> >> > --control-flavor controller --compute-flavor compute
> >> >
> >> > Deploying templates in the directory /home/stack/the-cloud
> >> > Overcloud Endpoint: http://192.168.174.35:5000/v2.0/
> >> > Overcloud Deployed
> >> >
> >> >
> >> > In fact I have some MySQL errors on my controllers; see below. Is
> >> > there a way to redeploy? I've re-run "openstack overcloud deploy"
> >> > and nothing happens.
> >> >
> >> > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]:
> >> > [2015-10-21 14:21:50,903] (heat-config) [INFO] Error: Could not
> >> > prefetch mysql_user provider 'mysql': Execution of '/usr/bin/mysql
> >> > -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user' returned
> >> > 1: ERROR 2002 (HY000): Can't connect to local MySQL server through
> >> > socket '/var/lib/mysql/mysql.sock' (2)
> >> > Oct 21 14:21:51 overcloud-controller-0 os-collect-config[11715]:
> >> > Error: Could not prefetch mysql_database provider 'mysql': Execution
> >> > of '/usr/bin/mysql -NBe show databases' returned 1: ERROR 2002
> >> > (HY000): Can't connect to local MySQL server through socket
> >> > '/var/lib/mysql/mysql.sock' (2)
> >> >
> >> > Thanks
> >> >
> >> > On Wed, Oct 21, 2015 at 4:56 PM, Marius Cornea
> >> > <marius at remote-lab.net> wrote:
> >> >>
> >> >> I believe the keystone init failed. It is done in a postconfig step
> >> >> via ssh on the public VIP (see lines 3-13 in
> >> >> https://gist.github.com/remoteur/920109a31083942ba5e1 ). Did you get
> >> >> that kind of output for the deploy command?
> >> >>
> >> >> Also try `journalctl -l -u os-collect-config | grep -i error` on
> >> >> the controller nodes; it should indicate whether something went
> >> >> wrong during deployment.
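> >> >>
> >> >> For example, from the undercloud (assuming the default heat-admin
> >> >> user on the overcloud nodes):
> >> >>
> >> >>   ssh heat-admin@192.168.21.60 'sudo journalctl -l -u os-collect-config | grep -i error'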
> >> >>
> >> >> On Wed, Oct 21, 2015 at 5:05 PM, Pedro Sousa <pgsousa at gmail.com>
> >> >> wrote:
> >> >> > Hi Marius,
> >> >> >
> >> >> > your tip worked fine, thanks; the bridges seem to be correctly
> >> >> > created. However, I still cannot log in. It seems to be a
> >> >> > keystone problem:
> >> >> >
> >> >> > #keystone --debug tenant-list
> >> >> >
> >> >> > DEBUG:keystoneclient.auth.identity.v2:Making authentication
> >> >> > request to http://192.168.174.35:5000/v2.0/tokens
> >> >> > INFO:requests.packages.urllib3.connectionpool:Starting new HTTP
> >> >> > connection (1): 192.168.174.35
> >> >> > DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens
> >> >> > HTTP/1.1" 401 114
> >> >> > DEBUG:keystoneclient.session:Request returned failure status: 401
> >> >> > DEBUG:keystoneclient.v2_0.client:Authorization Failed.
> >> >> > The request you have made requires authentication. (HTTP 401)
> >> >> > (Request-ID: req-accee3b3-b552-4c6b-ac39-d0791b5c1390)
> >> >> >
> >> >> > Did you have this issue when you deployed on virtual?
> >> >> >
> >> >> > Regards
> >> >> >
> >> >> > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea
> >> >> > <marius at remote-lab.net>
> >> >> > wrote:
> >> >> >>
> >> >> >> Here's an adjusted controller.yaml which disables DHCP on the
> >> >> >> first nic (enp1s0f0) so it doesn't get an IP address:
> >> >> >> http://paste.openstack.org/show/476981/
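> >> >> >>
> >> >> >> The relevant change is just the first interface entry in the
> >> >> >> template's network_config section; a minimal sketch in the
> >> >> >> os-net-config schema:
> >> >> >>
> >> >> >>   - type: interface
> >> >> >>     name: enp1s0f0
> >> >> >>     use_dhcp: false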
> >> >> >>
> >> >> >> Please note that this assumes your overcloud nodes are PXE
> >> >> >> booting on the 2nd NIC (basically disabling the 1st NIC).
> >> >> >>
> >> >> >> Given your setup (I'm making some assumptions here, so I might be
> >> >> >> wrong), I would use the 1st NIC for PXE booting and the
> >> >> >> provisioning network, and the 2nd NIC for running the isolated
> >> >> >> networks, with this kind of template:
> >> >> >> http://paste.openstack.org/show/476986/
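> >> >> >>
> >> >> >> i.e. nic1 stays on the provisioning network and nic2 carries the
> >> >> >> isolated vlans on br-ex; roughly, for the bridge part only (a
> >> >> >> sketch using the standard tripleo-heat-templates parameters):
> >> >> >>
> >> >> >>   - type: ovs_bridge
> >> >> >>     name: {get_input: bridge_name}
> >> >> >>     members:
> >> >> >>       - type: interface
> >> >> >>         name: nic2
> >> >> >>         primary: true
> >> >> >>       - type: vlan
> >> >> >>         vlan_id: {get_param: InternalApiNetworkVlanID}
> >> >> >>         addresses:
> >> >> >>           - ip_netmask: {get_param: InternalApiIpSubnet}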
> >> >> >>
> >> >> >> Let me know if it works for you.
> >> >> >>
> >> >> >> Thanks,
> >> >> >> Marius
> >> >> >>
> >> >> >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa <pgsousa at gmail.com>
> >> >> >> wrote:
> >> >> >> > Hi,
> >> >> >> >
> >> >> >> > here you go.
> >> >> >> >
> >> >> >> > Regards,
> >> >> >> > Pedro Sousa
> >> >> >> >
> >> >> >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea
> >> >> >> > <marius at remote-lab.net>
> >> >> >> > wrote:
> >> >> >> >>
> >> >> >> >> Hi Pedro,
> >> >> >> >>
> >> >> >> >> One issue I can quickly see is that br-ex has been assigned
> >> >> >> >> the same IP address as enp1s0f0. Can you post the nic
> >> >> >> >> templates you used for deployment?
> >> >> >> >> 2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
> >> >> >> >>     link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff
> >> >> >> >>     inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic enp1s0f0
> >> >> >> >> 9: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >> >>     link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
> >> >> >> >>     inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex
> >> >> >> >>
> >> >> >> >> Thanks,
> >> >> >> >> Marius
> >> >> >> >>
> >> >> >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa
> >> >> >> >> <pgsousa at gmail.com> wrote:
> >> >> >> >> > Hi Marius,
> >> >> >> >> >
> >> >> >> >> > I've followed your howto and managed to get the overcloud
> >> >> >> >> > deployed in HA, thanks. However, I cannot log in to it (via
> >> >> >> >> > CLI or Horizon):
> >> >> >> >> >
> >> >> >> >> > ERROR (Unauthorized): The request you have made requires
> >> >> >> >> > authentication. (HTTP 401) (Request-ID:
> >> >> >> >> > req-96310dfa-3d64-4f05-966f-f4d92702e2b1)
> >> >> >> >> >
> >> >> >> >> > So I rebooted the controllers, and now I cannot log in
> >> >> >> >> > through the provisioning network. It seems to be some Open
> >> >> >> >> > vSwitch bridge conf problem; here's my conf:
> >> >> >> >> >
> >> >> >> >> > # ip a
> >> >> >> >> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> >> >> >> >> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >> >> >> >> >     inet 127.0.0.1/8 scope host lo
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> >     inet6 ::1/128 scope host
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> > 2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
> >> >> >> >> >     link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff
> >> >> >> >> >     inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic enp1s0f0
> >> >> >> >> >        valid_lft 84562sec preferred_lft 84562sec
> >> >> >> >> >     inet6 fe80::7ea2:3eff:fefb:2555/64 scope link
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> > 3: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> >> >> >> >> >     link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
> >> >> >> >> >     inet6 fe80::7ea2:3eff:fefb:2556/64 scope link
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> > 4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >> >> >> >> >     link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff
> >> >> >> >> > 5: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >> >> >> >> >     link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff
> >> >> >> >> > 6: vlan20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >> >> >     link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff
> >> >> >> >> >     inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> >     inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> >     inet6 fe80::e479:56ff:fe5d:7f2/64 scope link
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> > 7: vlan40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >> >> >     link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff
> >> >> >> >> >     inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> >     inet6 fe80::e843:69ff:fec3:bfa2/64 scope link
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> > 8: vlan174: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >> >> >     link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff
> >> >> >> >> >     inet 192.168.174.36/24 brd 192.168.174.255 scope global vlan174
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> >     inet 192.168.174.35/32 brd 192.168.174.255 scope global vlan174
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> >     inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> > 9: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >> >> >     link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
> >> >> >> >> >     inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> >     inet6 fe80::7ea2:3eff:fefb:2556/64 scope link
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> > 10: vlan50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >> >> >     link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff
> >> >> >> >> >     inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> >     inet6 fe80::d815:7fff:feb9:724b/64 scope link
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> > 11: vlan30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >> >> >     link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff
> >> >> >> >> >     inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> >     inet 192.168.101.10/32 brd 192.168.101.255 scope global vlan30
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> >     inet6 fe80::78b3:4dff:fead:f172/64 scope link
> >> >> >> >> >        valid_lft forever preferred_lft forever
> >> >> >> >> > 12: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >> >> >> >> >     link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff
> >> >> >> >> >
> >> >> >> >> > # ovs-vsctl show
> >> >> >> >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101
> >> >> >> >> >     Bridge br-ex
> >> >> >> >> >         Port br-ex
> >> >> >> >> >             Interface br-ex
> >> >> >> >> >                 type: internal
> >> >> >> >> >         Port "enp1s0f1"
> >> >> >> >> >             Interface "enp1s0f1"
> >> >> >> >> >         Port "vlan40"
> >> >> >> >> >             tag: 40
> >> >> >> >> >             Interface "vlan40"
> >> >> >> >> >                 type: internal
> >> >> >> >> >         Port "vlan20"
> >> >> >> >> >             tag: 20
> >> >> >> >> >             Interface "vlan20"
> >> >> >> >> >                 type: internal
> >> >> >> >> >         Port phy-br-ex
> >> >> >> >> >             Interface phy-br-ex
> >> >> >> >> >                 type: patch
> >> >> >> >> >                 options: {peer=int-br-ex}
> >> >> >> >> >         Port "vlan50"
> >> >> >> >> >             tag: 50
> >> >> >> >> >             Interface "vlan50"
> >> >> >> >> >                 type: internal
> >> >> >> >> >         Port "vlan30"
> >> >> >> >> >             tag: 30
> >> >> >> >> >             Interface "vlan30"
> >> >> >> >> >                 type: internal
> >> >> >> >> >         Port "vlan174"
> >> >> >> >> >             tag: 174
> >> >> >> >> >             Interface "vlan174"
> >> >> >> >> >                 type: internal
> >> >> >> >> >     Bridge br-int
> >> >> >> >> >         fail_mode: secure
> >> >> >> >> >         Port br-int
> >> >> >> >> >             Interface br-int
> >> >> >> >> >                 type: internal
> >> >> >> >> >         Port patch-tun
> >> >> >> >> >             Interface patch-tun
> >> >> >> >> >                 type: patch
> >> >> >> >> >                 options: {peer=patch-int}
> >> >> >> >> >         Port int-br-ex
> >> >> >> >> >             Interface int-br-ex
> >> >> >> >> >                 type: patch
> >> >> >> >> >                 options: {peer=phy-br-ex}
> >> >> >> >> >     Bridge br-tun
> >> >> >> >> >         fail_mode: secure
> >> >> >> >> >         Port "gre-0a00140b"
> >> >> >> >> >             Interface "gre-0a00140b"
> >> >> >> >> >                 type: gre
> >> >> >> >> >                 options: {df_default="true", in_key=flow, local_ip="10.0.20.10", out_key=flow, remote_ip="10.0.20.11"}
> >> >> >> >> >         Port patch-int
> >> >> >> >> >             Interface patch-int
> >> >> >> >> >                 type: patch
> >> >> >> >> >                 options: {peer=patch-tun}
> >> >> >> >> >         Port "gre-0a00140d"
> >> >> >> >> >             Interface "gre-0a00140d"
> >> >> >> >> >                 type: gre
> >> >> >> >> >                 options: {df_default="true", in_key=flow, local_ip="10.0.20.10", out_key=flow, remote_ip="10.0.20.13"}
> >> >> >> >> >         Port "gre-0a00140c"
> >> >> >> >> >             Interface "gre-0a00140c"
> >> >> >> >> >                 type: gre
> >> >> >> >> >                 options: {df_default="true", in_key=flow, local_ip="10.0.20.10", out_key=flow, remote_ip="10.0.20.12"}
> >> >> >> >> >         Port br-tun
> >> >> >> >> >             Interface br-tun
> >> >> >> >> >                 type: internal
> >> >> >> >> >     ovs_version: "2.4.0"
> >> >> >> >> >
> >> >> >> >> > Regards,
> >> >> >> >> > Pedro Sousa
> >> >> >> >> >
> >> >> >> >> >
> >> >> >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea
> >> >> >> >> > <marius at remote-lab.net>
> >> >> >> >> > wrote:
> >> >> >> >> >>
> >> >> >> >> >> Hi everyone,
> >> >> >> >> >>
> >> >> >> >> >> I wrote a blog post about how to deploy an HA overcloud
> >> >> >> >> >> with network isolation on top of the virtual environment.
> >> >> >> >> >> I tried to provide some insights into what
> >> >> >> >> >> instack-virt-setup creates and how to use the network
> >> >> >> >> >> isolation templates in the virtual environment. I hope you
> >> >> >> >> >> find it useful.
> >> >> >> >> >>
> >> >> >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/
> >> >> >> >> >>
> >> >> >> >> >> Thanks,
> >> >> >> >> >> Marius
> >> >> >> >> >>
> >> >> >> >> >> _______________________________________________
> >> >> >> >> >> Rdo-list mailing list
> >> >> >> >> >> Rdo-list at redhat.com
> >> >> >> >> >> https://www.redhat.com/mailman/listinfo/rdo-list
> >> >> >> >> >>
> >> >> >> >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com
> >> >> >> >> >
> >> >> >> >> >
> >> >> >> >
> >> >> >> >
> >> >> >
> >> >> >
> >> >
> >> >
> >
> >
>