[Rdo-list] baremetal rdo manager overcloud deployment issues

Pedro Sousa pgsousa at gmail.com
Mon Jun 22 09:28:58 UTC 2015


Hi Marius,

Before:

[root@ov-iagiwqs7y3w-0-dq2lxejobfzq-controller-57rzpswhpzf6 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:a0:d1:e3:dd:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.149/24 brd 192.168.1.255 scope global dynamic enp0s25
       valid_lft 42508sec preferred_lft 42508sec
    inet6 fe80::2a0:d1ff:fee3:dded/64 scope link
       valid_lft forever preferred_lft forever
3: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:a0:d1:e3:dd:ec brd ff:ff:ff:ff:ff:ff
    inet 192.168.21.20/24 brd 192.168.21.255 scope global dynamic enp1s0
       valid_lft 85708sec preferred_lft 85708sec
    inet6 fe80::2a0:d1ff:fee3:ddec/64 scope link
       valid_lft forever preferred_lft forever
4: p55p1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:15:17:68:7b:42 brd ff:ff:ff:ff:ff:ff
5: p55p2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:15:17:68:7b:43 brd ff:ff:ff:ff:ff:ff

[root@ov-iagiwqs7y3w-0-dq2lxejobfzq-controller-57rzpswhpzf6 ~]# ip r
default via 192.168.1.246 dev enp0s25
169.254.169.254 via 192.168.21.180 dev enp1s0  proto static
192.168.1.0/24 dev enp0s25  proto kernel  scope link  src 192.168.1.149
192.168.21.0/24 dev enp1s0  proto kernel  scope link  src 192.168.21.20


After:

[root@ov-iagiwqs7y3w-0-dq2lxejobfzq-controller-57rzpswhpzf6 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:a0:d1:e3:dd:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.149/24 brd 192.168.1.255 scope global dynamic enp0s25
       valid_lft 42468sec preferred_lft 42468sec
    inet6 fe80::2a0:d1ff:fee3:dded/64 scope link
       valid_lft forever preferred_lft forever
3: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:a0:d1:e3:dd:ec brd ff:ff:ff:ff:ff:ff
    inet 192.168.21.20/24 brd 192.168.21.255 scope global dynamic enp1s0
       valid_lft 85668sec preferred_lft 85668sec
    inet6 fe80::2a0:d1ff:fee3:ddec/64 scope link
       valid_lft forever preferred_lft forever
4: p55p1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:15:17:68:7b:42 brd ff:ff:ff:ff:ff:ff
5: p55p2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:15:17:68:7b:43 brd ff:ff:ff:ff:ff:ff

[root@ov-iagiwqs7y3w-0-dq2lxejobfzq-controller-57rzpswhpzf6 ~]# ip r
default via 192.168.1.246 dev enp0s25
169.254.169.254 via 192.168.21.180 dev enp1s0  proto static
192.168.1.0/24 dev enp0s25  proto kernel  scope link  src 192.168.1.149
192.168.21.0/24 dev enp1s0  proto kernel  scope link  src 192.168.21.20
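A quick way to make a before/after comparison like the one above mechanical is to snapshot the output to files and diff them. This is only a sketch; it assumes a host with iproute2, and the cloud-init systemd unit name can differ between images:

```shell
# net_snapshot FILE: save the combined 'ip a; ip r' output for later diffing
net_snapshot() { { ip a; ip r; } > "$1" 2>/dev/null; }

# Typical use on an overcloud node (unit name is an assumption):
#   net_snapshot /tmp/net-before
#   sudo systemctl restart cloud-init
#   net_snapshot /tmp/net-after
#   diff -u /tmp/net-before /tmp/net-after
```

Here the diff would show only the DHCP lease timers changing, i.e. the restart altered neither addresses nor routes.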



Thanks,
Pedro Sousa


On Fri, Jun 19, 2015 at 9:33 PM, Marius Cornea <marius at remote-lab.net> wrote:

> Hm... interesting. Could you post the output of 'ip a; ip r' before and
> after restarting cloud-init, please?
>
> Thanks
>
> On Fri, Jun 19, 2015 at 10:16 PM, Pedro Sousa <pgsousa at gmail.com> wrote:
> > Hi Marius,
> >
> > Yes.
> >
> > Regards,
> > Pedro Sousa
> >
> > On 19/06/2015 21:14, "Marius Cornea" <marius at remote-lab.net> wrote:
> >>
> >> Hi Pedro,
> >>
> >> Just to make sure I understand it correctly - you are able to SSH to
> >> the overcloud nodes and restart cloud-init?
> >>
> >> Thanks,
> >> Marius
> >>
> >> On Fri, Jun 19, 2015 at 6:42 PM, Pedro Sousa <pgsousa at gmail.com> wrote:
> >> > Hi Marius,
> >> >
> >> > Thank you for your reply. Yes, the nodes can reach DHCP, but my
> >> > understanding is that cloud-init starts before the nodes get their
> >> > IP addresses.
> >> >
> >> > If I restart cloud-init, I see the route tables being created
> >> > properly. However, after restarting cloud-init nothing else happens;
> >> > the deployment doesn't resume. I only see this in the logs:
> >> >
> >> > Jun 19 16:38:19 localhost os-collect-config: 2015-06-19 16:38:19.357 1518 WARNING os_collect_config.heat [-] No auth_url configured.
> >> > Jun 19 16:38:19 localhost os-collect-config: 2015-06-19 16:38:19.359 1518 WARNING os_collect_config.request [-] No metadata_url configured.
> >> > Jun 19 16:38:19 localhost os-collect-config: 2015-06-19 16:38:19.359 1518 WARNING os-collect-config [-] Source [request] Unavailable.
> >> > Jun 19 16:38:19 localhost os-collect-config: 2015-06-19 16:38:19.359 1518 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
> >> > Jun 19 16:38:19 localhost os-collect-config: 2015-06-19 16:38:19.359 1518 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
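[Editor's note: warnings like those above mean os-collect-config found no usable metadata source (heat, cfn, request, or local). When scanning a busy journal for them, a small filter helps; a sketch, assuming log lines shaped like the ones quoted:]

```shell
# occ_warnings: keep only os-collect-config WARNING lines from a log stream
occ_warnings() { grep -E 'WARNING os[_-]collect[_-]config'; }

# usage on a node:
#   sudo journalctl -u os-collect-config | occ_warnings
```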
> >> >
> >> > I also see that it gets stuck here:
> >> >
> >> > [stack@instack ~]$ heat resource-show overcloud Controller
> >> >
> >> > +------------------------+--------------------------------------+
> >> > | Property               | Value                                |
> >> > +------------------------+--------------------------------------+
> >> > | attributes             | { "attributes": null, "refs": null } |
> >> > | description            |                                      |
> >> > | links                  | http://192.168.21.180:8004/v1/9fcf8994049b48d6af5ea6fe5323a21d/stacks/overcloud/32a0c1d4-4915-4ff9-8a8a-f8f40590fcae/resources/Controller (self)
> >> > |                        | http://192.168.21.180:8004/v1/9fcf8994049b48d6af5ea6fe5323a21d/stacks/overcloud/32a0c1d4-4915-4ff9-8a8a-f8f40590fcae (stack)
> >> > |                        | http://192.168.21.180:8004/v1/9fcf8994049b48d6af5ea6fe5323a21d/stacks/overcloud-Controller-vkqyq5m4na2v/89b52bb2-de3b-45c5-8ec4-43c2545b8d04 (nested)
> >> > | logical_resource_id    | Controller                           |
> >> > | physical_resource_id   | 89b52bb2-de3b-45c5-8ec4-43c2545b8d04 |
> >> > | required_by            | allNodesConfig                       |
> >> > |                        | VipDeployment                        |
> >> > |                        | ControllerAllNodesDeployment         |
> >> > |                        | ControllerIpListMap                  |
> >> > |                        | CephClusterConfig                    |
> >> > |                        | ControllerBootstrapNodeConfig        |
> >> > |                        | ControllerCephDeployment             |
> >> > |                        | ControllerBootstrapNodeDeployment    |
> >> > |                        | ControllerClusterConfig              |
> >> > |                        | ControllerSwiftDeployment            |
> >> > |                        | SwiftDevicesAndProxyConfig           |
> >> > |                        | ControllerNodesPostDeployment        |
> >> > |                        | ControllerClusterDeployment          |
> >> > | resource_name          | Controller                           |
> >> > | resource_status        | CREATE_IN_PROGRESS                   |
> >> > | resource_status_reason | state changed                        |
> >> > | resource_type          | OS::Heat::ResourceGroup              |
> >> > | updated_time           | 2015-06-19T17:25:45Z                 |
> >> > +------------------------+--------------------------------------+
> >> >
> >> >
> >> > Regards,
> >> > Pedro Sousa
> >> >
> >> >
> >> > On Thu, Jun 18, 2015 at 9:16 PM, Marius Cornea <marius at remote-lab.net> wrote:
> >> >>
> >> >> Hi Pedro,
> >> >>
> >> >> Can you check if the nodes can reach the DHCP server on the
> >> >> undercloud node? It looks to me like the nodes can't get an IP
> >> >> address:
> >> >>
> >> >> systemctl status neutron-dhcp-agent.service # check service status
> >> >> ip netns list # check if the dhcp namespace is there
> >> >> cat /var/lib/neutron/dhcp/<namespace_uuid>/leases # check if the file
> >> >> shows leases for your nodes' NIC MAC addresses
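[Editor's note: the last check above can be wrapped in a tiny helper so each node MAC is tested the same way. A sketch; the example MAC is em1's from later in this thread, and the namespace UUID must still be filled in from `ip netns list`:]

```shell
# check_lease LEASES_FILE MAC: report whether MAC ever received a lease
check_lease() { grep -qiF "$2" "$1" && echo found || echo missing; }

# usage:
#   check_lease /var/lib/neutron/dhcp/<namespace_uuid>/leases d4:ae:52:a1:cd:7f
```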
> >> >>
> >> >> Thanks,
> >> >> Marius
> >> >>
> >> >>
> >> >> > On Wed, Jun 17, 2015 at 7:17 PM, Pedro Sousa <pgsousa at gmail.com> wrote:
> >> >> > Hi all,
> >> >> >
> >> >> > I'm trying to deploy two nodes, one compute and one controller,
> >> >> > using RDO. However, my Heat stack times out and I don't understand
> >> >> > why; OpenStack never gets configured. Checking the logs, I see
> >> >> > this:
> >> >> >
> >> >> > Jun 17 16:44:12 localhost cloud-init: ci-info: ++++++++++++++++Net device info++++++++++++++++
> >> >> > Jun 17 16:44:12 localhost cloud-init: ci-info: +--------+------+-----------+-----------+-------------------+
> >> >> > Jun 17 16:44:12 localhost cloud-init: ci-info: | Device |  Up  |  Address  |    Mask   |     Hw-Address    |
> >> >> > Jun 17 16:44:12 localhost cloud-init: ci-info: +--------+------+-----------+-----------+-------------------+
> >> >> > Jun 17 16:44:12 localhost cloud-init: ci-info: |  lo:   | True | 127.0.0.1 | 255.0.0.0 |         .         |
> >> >> > Jun 17 16:44:12 localhost cloud-init: ci-info: |  em2:  | True |     .     |     .     | d4:ae:52:a1:cd:80 |
> >> >> > Jun 17 16:44:12 localhost cloud-init: ci-info: |  em1:  | True |     .     |     .     | d4:ae:52:a1:cd:7f |
> >> >> > Jun 17 16:44:12 localhost cloud-init: ci-info: | p2p1:  | True |     .     |     .     | 68:05:ca:16:db:94 |
> >> >> > Jun 17 16:44:12 localhost cloud-init: ci-info: +--------+------+-----------+-----------+-------------------+
> >> >> > Jun 17 16:44:12 localhost cloud-init: ci-info: !!!!!!!!!!!!!!! Route info failed !!!!!!!!!!!!!!!
> >> >> >
> >> >> >
> >> >> > os-collect-config: 2015-06-17 15:39:22.319 1663 WARNING os_collect_config.cfn [-] 403 Client Error: AccessDenied
> >> >> >
> >> >> > Any hint?
> >> >> >
> >> >> > Thanks,
> >> >> > Pedro Sousa
> >> >> >
> >> >> >
> >> >> >
> >> >> > _______________________________________________
> >> >> > Rdo-list mailing list
> >> >> > Rdo-list at redhat.com
> >> >> > https://www.redhat.com/mailman/listinfo/rdo-list
> >> >> >
> >> >> > To unsubscribe: rdo-list-unsubscribe at redhat.com
> >> >
> >> >
>