[Rdo-list] [rdo-manager] Authentication required during overcloud deployment
Sasha Chuzhoy
sasha at redhat.com
Fri Oct 16 03:50:22 UTC 2015
Hi Erming,
So I tried to reproduce your issue by setting the passwords in the auth section of the undercloud file. My deployment completed successfully, although I ran into https://bugzilla.redhat.com/show_bug.cgi?id=1271289.
The same warnings (not errors) appear on the successfully deployed node too:
Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.845 1634 WARNING os-collect-config [-] Source [request] Unavailable.
Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.845 1634 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.846 1634 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
Oct 16 03:46:45 localhost os-collect-config: 2015-10-16 03:46:45.847 1634 WARNING os_collect_config.zaqar [-] No auth_url configured.
Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.167 1634 WARNING os-collect-config [-] Source [request] Unavailable.
Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.168 1634 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.168 1634 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
Oct 16 03:47:18 localhost os-collect-config: 2015-10-16 03:47:18.168 1634 WARNING os_collect_config.zaqar [-] No auth_url configured.
Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.150 1634 WARNING os-collect-config [-] Source [request] Unavailable.
Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.153 1634 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping
Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.153 1634 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data'])
Oct 16 03:47:51 localhost os-collect-config: 2015-10-16 03:47:51.154 1634 WARNING os_collect_config.zaqar [-] No auth_url configured.
Did you go through the history of the commands you executed and compare it with the guide?
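For example, a rough way to pull that history back out (assuming everything was run as the stack user on the undercloud) would be something like:

$ history | grep -E "openstack (undercloud|overcloud)"
$ grep -E "openstack (undercloud|overcloud)" ~/.bash_history

and then compare those lines step by step against the install guide.
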
Thanks.
Best regards,
Sasha Chuzhoy.
----- Original Message -----
> From: "Erming Pei" <erming at ualberta.ca>
> To: "Sasha Chuzhoy" <sasha at redhat.com>
> Cc: "Dan Sneddon" <dsneddon at redhat.com>, rdo-list at redhat.com
> Sent: Thursday, October 15, 2015 6:56:17 PM
> Subject: Re: [Rdo-list] [rdo-manager] Authentication required during overcloud deployment
>
> Hi Sasha,
>
> I checked the sys logs and see many such errors:
>
> Oct 15 22:50:19 localhost os-collect-config: 2015-10-15 22:50:19.133
> 8516 WARNING os_collect_config.ec2 [-] ('Connection aborted.',
> error(113, 'No route to host'))
> Oct 15 22:50:19 localhost os-collect-config: 2015-10-15 22:50:19.133
> 8516 WARNING os-collect-config [-] Source [ec2] Unavailable.
> Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007
> 8516 WARNING os_collect_config.heat [-] No auth_url configured.
> Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007
> 8516 WARNING os_collect_config.request [-] No metadata_url configured.
> Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007
> 8516 WARNING os-collect-config [-] Source [request] Unavailable.
> Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007
> 8516 WARNING os_collect_config.local [-]
> /var/lib/os-collect-config/local-data not found. Skipping
> Oct 15 22:50:20 localhost os-collect-config: 2015-10-15 22:50:20.007
> 8516 WARNING os_collect_config.local [-] No local metadata found
> (['/var/lib/os-collect-config/local-data'])
>
> Below is my undercloud.conf (passwords masked).
>
>
> [stack at gcloudcon-3 ~]$ cat undercloud.conf
> [DEFAULT]
>
> #
> # From instack-undercloud
> #
>
> # Local file path to the necessary images. The path should be a
> # directory readable by the current user that contains the full set of
> # images. (string value)
> #image_path = .
> image_path = /gcloud/images
>
> # IP information for the interface on the Undercloud that will be
> # handling the PXE boots and DHCP for Overcloud instances. The IP
> # portion of the value will be assigned to the network interface
> # defined by local_interface, with the netmask defined by the prefix
> # portion of the value. (string value)
> #local_ip = 192.0.2.1/24
> local_ip = 10.0.6.40/16
>
> # Network interface on the Undercloud that will be handling the PXE
> # boots and DHCP for Overcloud instances. (string value)
> #local_interface = eth1
> local_interface = eth0
>
> # Network that will be masqueraded for external access, if required.
> # This should be the subnet used for PXE booting. (string value)
> #masquerade_network = 192.0.2.0/24
> masquerade_network = 10.0.6.0/16
>
> # Start of DHCP allocation range for PXE and DHCP of Overcloud
> # instances. (string value)
> #dhcp_start = 192.0.2.5
> dhcp_start = 10.0.6.50
>
> # End of DHCP allocation range for PXE and DHCP of Overcloud
> # instances. (string value)
> #dhcp_end = 192.0.2.24
> dhcp_end = 10.0.6.250
>
> # Network CIDR for the Neutron-managed network for Overcloud
> # instances. This should be the subnet used for PXE booting. (string
> # value)
> #network_cidr = 192.0.2.0/24
> network_cidr = 10.0.6.0/16
>
> # Network gateway for the Neutron-managed network for Overcloud
> # instances. This should match the local_ip above when using
> # masquerading. (string value)
> #network_gateway = 192.0.2.1
> network_gateway = 10.0.6.40
>
> # Network interface on which discovery dnsmasq will listen. If in
> # doubt, use the default value. (string value)
> #discovery_interface = br-ctlplane
>
> # Temporary IP range that will be given to nodes during the discovery
> # process. Should not overlap with the range defined by dhcp_start
> # and dhcp_end, but should be in the same network. (string value)
> #discovery_iprange = 192.0.2.100,192.0.2.120
> discovery_iprange = 10.0.6.251,10.0.6.252
>
> # Whether to run benchmarks when discovering nodes. (boolean value)
> #discovery_runbench = false
>
> # Whether to enable the debug log level for Undercloud OpenStack
> # services. (boolean value)
> undercloud_debug = true
>
>
> [auth]
>
> #
> # From instack-undercloud
> #
>
> # Password used for MySQL databases. If left unset, one will be
> # automatically generated. (string value)
> undercloud_db_password = xxxxxxxxxxxxxx
>
> # Keystone admin token. If left unset, one will be automatically
> # generated. (string value)
> #undercloud_admin_token = <None>
>
> # Keystone admin password. If left unset, one will be automatically
> # generated. (string value)
> undercloud_admin_password = xxxxxxxxxxxxxx
>
> # Glance service password. If left unset, one will be automatically
> # generated. (string value)
> undercloud_glance_password = xxxxxxxxxxxxxx
>
> # Heat db encryption key(must be 8,16 or 32 characters. If left unset,
> # one will be automatically generated. (string value)
> #undercloud_heat_encryption_key = <None>
>
> # Heat service password. If left unset, one will be automatically
> # generated. (string value)
> undercloud_heat_password = xxxxxxxxxxxxxx
>
> # Neutron service password. If left unset, one will be automatically
> # generated. (string value)
> undercloud_neutron_password = xxxxxxxxxxxxxx
>
> # Nova service password. If left unset, one will be automatically
> # generated. (string value)
> undercloud_nova_password = xxxxxxxxxxxxxx
>
> # Ironic service password. If left unset, one will be automatically
> # generated. (string value)
> undercloud_ironic_password = xxxxxxxxxxxxxx
>
> # Tuskar service password. If left unset, one will be automatically
> # generated. (string value)
> undercloud_tuskar_password = xxxxxxxxxxxxxx
>
> # Ceilometer service password. If left unset, one will be
> # automatically generated. (string value)
> undercloud_ceilometer_password = xxxxxxxxxxxxxx
>
> # Ceilometer metering secret. If left unset, one will be automatically
> # generated. (string value)
> #undercloud_ceilometer_metering_secret = <None>
>
> # Ceilometer snmpd user. If left unset, one will be automatically
> # generated. (string value)
> undercloud_ceilometer_snmpd_user = ceilometer
>
> # Ceilometer snmpd password. If left unset, one will be automatically
> # generated. (string value)
> undercloud_ceilometer_snmpd_password = xxxxxxxxxxxxxx
>
> # Swift service password. If left unset, one will be automatically
> # generated. (string value)
> undercloud_swift_password = xxxxxxxxxxxxxx
>
> # Rabbitmq cookie. If left unset, one will be automatically generated.
> # (string value)
> #undercloud_rabbit_cookie = <None>
>
> # Rabbitmq password. If left unset, one will be automatically
> # generated. (string value)
> undercloud_rabbit_password = xxxxxxxxxxxxxx
>
> # Rabbitmq username. If left unset, one will be automatically
> # generated. (string value)
> undercloud_rabbit_username = rabbit
>
> # Heat stack domain admin password. If left unset, one will be
> # automatically generated. (string value)
> undercloud_heat_stack_domain_admin_password = xxxxxxxxxxxxxx
>
> # Swift hash suffix. If left unset, one will be automatically
> # generated. (string value)
> #undercloud_swift_hash_suffix = <None>
>
>
>
> Yes, I am just testing with the basic 1 controller and 1 compute case.
> I can try setting a timeout as you did.
>
> Thanks,
>
> Erming
>
>
> On 10/15/15, 3:19 PM, Sasha Chuzhoy wrote:
> > Hi Erming,
> > You can also check the log files on nodes for errors (start with
> > /var/log/messages).
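> > For example (just a sketch; I'm assuming os-collect-config runs as a systemd
> > unit on the overcloud image), something like this usually surfaces the
> > interesting bits:
> >
> > $ sudo grep -iE "error|traceback" /var/log/messages | tail -n 50
> > $ sudo journalctl -u os-collect-config --no-pager | tail -n 50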
> >
> > If things are working, "openstack overcloud deploy --templates" will create
> > a nonHA deployment without network isolation consisting of 1 controller
> > and 1 compute.
> > I usually add "--timeout 90", as this period of time is sufficient on my
> > setup for deploying the overcloud.
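> > In other words, something like this (illustrative only, adjust to your setup):
> >
> > $ openstack overcloud deploy --templates --timeout 90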
> >
> > Seeing that the IPs are different from 192.0.2.x, I wonder what other changes
> > were made to the undercloud.conf?
> >
> > Best regards,
> > Sasha Chuzhoy.
> >
> > ----- Original Message -----
> >> From: "Erming Pei" <erming at ualberta.ca>
> >> To: "Dan Sneddon" <dsneddon at redhat.com>, rdo-list at redhat.com
> >> Sent: Thursday, October 15, 2015 4:03:26 PM
> >> Subject: Re: [Rdo-list] [rdo-manager] Authentication required during
> >> overcloud deployment
> >>
> >> Hi Dan, Sasha,
> >>
> >> Thanks for your answers and hints.
> >> I looked up the heat/etc log files and stack/node status.
> >> The only thing I found so far is "timed out". I don't know what the reason is.
> >> IPMI looks good.
> >>
> >> I tried with HEAT_INCLUDE_PASSWORD=1 but got the same error message ("Please
> >> try again with option --include-password or export HEAT_INCLUDE_PASSWORD=1
> >> Authentication required").
> >>
> >> BTW, I only followed the exact instructions shown in the guide (openstack
> >> overcloud deploy --templates), with no extra options. I thought this would be
> >> good for a demo deployment. If that's not sufficient, which one should I
> >> follow? I saw some of your discussions, but it's not very clear. Should I
> >> follow the example from jliberma at redhat.com ?
> >>
> >> Below is my investigation:
> >> By running: $ heat resource-list overcloud
> >> I found that only the Controller and Compute resources failed: CREATE_FAILED
> >>
> >> Checking the reason, it says: resource_status_reason | CREATE aborted
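> >>
> >> I could probably dig further with something like this (assuming the heat
> >> client supports nested depth):
> >>
> >> $ heat resource-list --nested-depth 5 overcloud | grep -i FAILED
> >> $ heat event-list overcloud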
> >>
> >> I then logged into the running overcloud nodes (e.g. the controller):
> >>
> >>
> >> [heat-admin at overcloud-controller-0 ~]$ ifconfig
> >> br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
> >> inet6 fe80::21:5eff:fecd:9df3 prefixlen 64 scopeid 0x20<link>
> >> ether 02:21:5e:cd:9d:f3 txqueuelen 0 (Ethernet)
> >> RX packets 29926 bytes 2364154 (2.2 MiB)
> >> RX errors 0 dropped 0 overruns 0 frame 0
> >> TX packets 81 bytes 25614 (25.0 KiB)
> >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
> >>
> >> enp0s29f0u2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
> >> inet6 fe80::21:5eff:fecd:9df3 prefixlen 64 scopeid 0x20<link>
> >> ether 02:21:5e:cd:9d:f3 txqueuelen 1000 (Ethernet)
> >> RX packets 29956 bytes 1947140 (1.8 MiB)
> >> RX errors 0 dropped 0 overruns 0 frame 0
> >> TX packets 102 bytes 28620 (27.9 KiB)
> >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
> >>
> >> enp11s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
> >> inet 10.0.6.64 netmask 255.255.0.0 broadcast 10.0.255.255
> >> inet6 fe80::221:5eff:fec9:abd8 prefixlen 64 scopeid 0x20<link>
> >> ether 00:21:5e:c9:ab:d8 txqueuelen 1000 (Ethernet)
> >> RX packets 66256 bytes 21109918 (20.1 MiB)
> >> RX errors 0 dropped 0 overruns 0 frame 0
> >> TX packets 35938 bytes 4641202 (4.4 MiB)
> >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
> >>
> >> enp11s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
> >> inet6 fe80::221:5eff:fec9:abda prefixlen 64 scopeid 0x20<link>
> >> ether 00:21:5e:c9:ab:da txqueuelen 1000 (Ethernet)
> >> RX packets 25429 bytes 2004574 (1.9 MiB)
> >> RX errors 0 dropped 0 overruns 0 frame 0
> >> TX packets 6 bytes 532 (532.0 B)
> >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
> >>
> >> ib0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 2044
> >> inet6 fe80::202:c902:23:baf9 prefixlen 64 scopeid 0x20<link>
> >> Infiniband hardware address can be incorrect! Please read BUGS section in
> >> ifconfig(8).
> >> infiniband 80:00:04:04:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
> >> txqueuelen 256 (InfiniBand)
> >> RX packets 183678 bytes 10292768 (9.8 MiB)
> >> RX errors 0 dropped 0 overruns 0 frame 0
> >> TX packets 17 bytes 5380 (5.2 KiB)
> >> TX errors 0 dropped 7 overruns 0 carrier 0 collisions 0
> >>
> >> lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
> >> inet 127.0.0.1 netmask 255.0.0.0
> >> inet6 ::1 prefixlen 128 scopeid 0x10<host>
> >> loop txqueuelen 0 (Local Loopback)
> >> RX packets 138 bytes 11792 (11.5 KiB)
> >> RX errors 0 dropped 0 overruns 0 frame 0
> >> TX packets 138 bytes 11792 (11.5 KiB)
> >> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
> >>
> >> [heat-admin at overcloud-controller-0 ~]$ ovs-vsctl show
> >> ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed
> >> (Permission denied)
> >> [heat-admin at overcloud-controller-0 ~]$ sudo ovs-vsctl show
> >> 76e6f8a7-88cf-4920-b133-b4d15a4b9092
> >> Bridge br-ex
> >> Port br-ex
> >> Interface br-ex
> >> type: internal
> >> Port "enp0s29f0u2"
> >> Interface "enp0s29f0u2"
> >> ovs_version: "2.3.1"
> >> [heat-admin at overcloud-controller-0 ~]$
> >> [heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.65
> >> PING 10.0.6.65 (10.0.6.65) 56(84) bytes of data.
> >> 64 bytes from 10.0.6.65: icmp_seq=1 ttl=64 time=0.176 ms
> >> 64 bytes from 10.0.6.65: icmp_seq=2 ttl=64 time=0.195 ms
> >> ^C
> >> --- 10.0.6.65 ping statistics ---
> >> 2 packets transmitted, 2 received, 0% packet loss, time 999ms
> >> rtt min/avg/max/mdev = 0.176/0.185/0.195/0.016 ms
> >> [heat-admin at overcloud-controller-0 ~]$
> >> [heat-admin at overcloud-controller-0 ~]$ ping 10.0.6.64
> >> PING 10.0.6.64 (10.0.6.64) 56(84) bytes of data.
> >> 64 bytes from 10.0.6.64: icmp_seq=1 ttl=64 time=0.015 ms
> >> ^C
> >> --- 10.0.6.64 ping statistics ---
> >> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
> >> rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms
> >>
> >> [heat-admin at overcloud-controller-0 ~]$ cat /etc/os-net-config/config.json
> >> {"network_config": [{"use_dhcp": true, "type": "ovs_bridge", "name":
> >> "br-ex",
> >> "members": [{"type": "interface", "name": "nic1", "primary": true}]}]}
> >> [heat-admin at overcloud-controller-0 ~]$
> >> [heat-admin at overcloud-controller-0 ~]$
> >> [heat-admin at overcloud-controller-0 ~]$ sudo os-net-config --debug -c
> >> /etc/os-net-config/config.json
> >> [2015/10/15 07:52:08 PM] [INFO] Using config file at:
> >> /etc/os-net-config/config.json
> >> [2015/10/15 07:52:08 PM] [INFO] Using mapping file at:
> >> /etc/os-net-config/mapping.yaml
> >> [2015/10/15 07:52:08 PM] [INFO] Ifcfg net config provider created.
> >> [2015/10/15 07:52:08 PM] [DEBUG] network_config JSON: [{'use_dhcp': True,
> >> 'type': 'ovs_bridge', 'name': 'br-ex', 'members': [{'type': 'interface',
> >> 'name': 'nic1', 'primary': True}]}]
> >> [2015/10/15 07:52:08 PM] [INFO] nic1 mapped to: enp0s29f0u2
> >> [2015/10/15 07:52:08 PM] [INFO] nic2 mapped to: enp11s0f0
> >> [2015/10/15 07:52:08 PM] [INFO] nic3 mapped to: enp11s0f1
> >> [2015/10/15 07:52:08 PM] [INFO] nic4 mapped to: ib0
> >> [2015/10/15 07:52:08 PM] [INFO] adding bridge: br-ex
> >> [2015/10/15 07:52:08 PM] [DEBUG] bridge data: DEVICE=br-ex
> >> ONBOOT=yes
> >> HOTPLUG=no
> >> DEVICETYPE=ovs
> >> TYPE=OVSBridge
> >> OVSBOOTPROTO=dhcp
> >> OVSDHCPINTERFACES="enp0s29f0u2"
> >> OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3"
> >>
> >> [2015/10/15 07:52:08 PM] [INFO] adding interface: enp0s29f0u2
> >> [2015/10/15 07:52:08 PM] [DEBUG] interface data: DEVICE=enp0s29f0u2
> >> ONBOOT=yes
> >> HOTPLUG=no
> >> DEVICETYPE=ovs
> >> TYPE=OVSPort
> >> OVS_BRIDGE=br-ex
> >> BOOTPROTO=none
> >>
> >> [2015/10/15 07:52:08 PM] [INFO] applying network configs...
> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data:
> >> DEVICE=enp0s29f0u2
> >> ONBOOT=yes
> >> HOTPLUG=no
> >> DEVICETYPE=ovs
> >> TYPE=OVSPort
> >> OVS_BRIDGE=br-ex
> >> BOOTPROTO=none
> >>
> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data:
> >> DEVICE=enp0s29f0u2
> >> ONBOOT=yes
> >> HOTPLUG=no
> >> DEVICETYPE=ovs
> >> TYPE=OVSPort
> >> OVS_BRIDGE=br-ex
> >> BOOTPROTO=none
> >>
> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data:
> >>
> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data:
> >>
> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data:
> >> DEVICE=br-ex
> >> ONBOOT=yes
> >> HOTPLUG=no
> >> DEVICETYPE=ovs
> >> TYPE=OVSBridge
> >> OVSBOOTPROTO=dhcp
> >> OVSDHCPINTERFACES="enp0s29f0u2"
> >> OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3"
> >>
> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data:
> >> DEVICE=br-ex
> >> ONBOOT=yes
> >> HOTPLUG=no
> >> DEVICETYPE=ovs
> >> TYPE=OVSBridge
> >> OVSBOOTPROTO=dhcp
> >> OVSDHCPINTERFACES="enp0s29f0u2"
> >> OVS_EXTRA="set bridge br-ex other-config:hwaddr=02:21:5e:cd:9d:f3"
> >>
> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff file data:
> >>
> >> [2015/10/15 07:52:08 PM] [DEBUG] Diff data:
> >>
> >>
> >>
> >> [heat-admin at overcloud-controller-0 ~]$ openstack-status
> >> == Nova services ==
> >> openstack-nova-api: inactive (disabled on boot)
> >> openstack-nova-cert: inactive (disabled on boot)
> >> openstack-nova-compute: inactive (disabled on boot)
> >> openstack-nova-network: inactive (disabled on boot)
> >> openstack-nova-scheduler: inactive (disabled on boot)
> >> openstack-nova-conductor: inactive (disabled on boot)
> >> == Glance services ==
> >> openstack-glance-api: inactive (disabled on boot)
> >> openstack-glance-registry: inactive (disabled on boot)
> >> == Keystone service ==
> >> openstack-keystone: inactive (disabled on boot)
> >> == Horizon service ==
> >> openstack-dashboard: uncontactable
> >> == neutron services ==
> >> neutron-server: inactive (disabled on boot)
> >> neutron-dhcp-agent: inactive (disabled on boot)
> >> neutron-l3-agent: inactive (disabled on boot)
> >> neutron-metadata-agent: inactive (disabled on boot)
> >> neutron-lbaas-agent: inactive (disabled on boot)
> >> neutron-openvswitch-agent: inactive (disabled on boot)
> >> neutron-metering-agent: inactive (disabled on boot)
> >> == Swift services ==
> >> openstack-swift-proxy: inactive (disabled on boot)
> >> openstack-swift-account: inactive (disabled on boot)
> >> openstack-swift-container: inactive (disabled on boot)
> >> openstack-swift-object: inactive (disabled on boot)
> >> == Cinder services ==
> >> openstack-cinder-api: inactive (disabled on boot)
> >> openstack-cinder-scheduler: inactive (disabled on boot)
> >> openstack-cinder-volume: inactive (disabled on boot)
> >> openstack-cinder-backup: inactive (disabled on boot)
> >> == Ceilometer services ==
> >> openstack-ceilometer-api: inactive (disabled on boot)
> >> openstack-ceilometer-central: inactive (disabled on boot)
> >> openstack-ceilometer-compute: inactive (disabled on boot)
> >> openstack-ceilometer-collector: inactive (disabled on boot)
> >> openstack-ceilometer-alarm-notifier: inactive (disabled on boot)
> >> openstack-ceilometer-alarm-evaluator: inactive (disabled on boot)
> >> openstack-ceilometer-notification: inactive (disabled on boot)
> >> == Heat services ==
> >> openstack-heat-api: inactive (disabled on boot)
> >> openstack-heat-api-cfn: inactive (disabled on boot)
> >> openstack-heat-api-cloudwatch: inactive (disabled on boot)
> >> openstack-heat-engine: inactive (disabled on boot)
> >> == Support services ==
> >> libvirtd: active
> >> openvswitch: active
> >> dbus: active
> >> rabbitmq-server: inactive (disabled on boot)
> >> memcached: inactive (disabled on boot)
> >> == Keystone users ==
> >> Warning keystonerc not sourced
> >>
> >>
> >>
> >>
> >> Thanks,
> >>
> >> Erming
> >>
> >>
> >> On 10/14/15, 5:23 PM, Dan Sneddon wrote:
> >>
> >>
> >>
> >> On 10/14/2015 03:03 PM, Erming Pei wrote:
> >>
> >>
> >>
> >> Hi,
> >>
> >> I am deploying the overcloud on bare metal, and after a couple of
> >> hours it showed:
> >>
> >> $ openstack overcloud deploy --templates
> >> Deploying templates in the directory
> >> /usr/share/openstack-tripleo-heat-templates
> >> ERROR: openstack ERROR: Authentication failed. Please try again
> >> with option --include-password or export HEAT_INCLUDE_PASSWORD=1
> >> Authentication required
> >>
> >>
> >> But I checked and the nodes are now running:
> >>
> >> [stack at gcloudcon-3 ~]$ nova list
> >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
> >> | ID                                   | Name                    | Status | Task State | Power State | Networks           |
> >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
> >> | 1ba04ac0-fe2b-4318-aa31-2e5f4d8422a6 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=10.0.6.60 |
> >> | c152ba59-3aed-4fb0-81fa-e3fed7e35cf6 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=10.0.6.61 |
> >> +--------------------------------------+-------------------------+--------+------------+-------------+--------------------+
> >>
> >>
> >> 1. Should I re-deploy the nodes, or is there a way to update/fix things
> >> for the authentication issue?
> >>
> >> 2.
> >> I don't know how to access the nodes.
> >> There was no overcloudrc file produced.
> >>
> >> $ ls overcloud*
> >> overcloud-env.json overcloud-full.initrd overcloud-full.qcow2
> >> overcloud-full.vmlinuz
> >>
> >> overcloud-full.d:
> >> dib-manifests
> >>
> >> Is it via ssh key or password? Should I set the authentication method
> >> somewhere?
> >>
> >>
> >>
> >> Thanks,
> >>
> >> Erming
> >>
> >>
> >> This error generally means that something in the deployment got stuck,
> >> and the deployment hung until the token expired after 4 hours. When
> >> that happens, there is no overcloudrc generated (because there is not a
> >> working overcloud). You won't be able to recover with a stack update,
> >> you'll need to perform a stack-delete and redeploy once you know what
> >> went wrong.
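> >>
> >> Roughly (once you've worked out the root cause):
> >>
> >> $ heat stack-delete overcloud
> >> $ heat stack-list          # wait until the overcloud stack is really gone
> >> $ openstack overcloud deploy --templates <your other options>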
> >>
> >> Generally a deployment shouldn't take anywhere near that long; a bare
> >> metal deployment with 6 hosts takes me less than an hour, and less than
> >> 2 hours including a Ceph deployment. In fact, I usually set a timeout using
> >> the --timeout option, because if it hasn't finished after, say, 90
> >> minutes (depending on how complicated the deployment is), then I want
> >> it to bomb out so I can diagnose what went wrong and redeploy.
> >>
> >> Often when a deployment times out it is because there were connectivity
> >> issues between the nodes. Since you can log in to the hosts, you might
> >> want to do some basic network troubleshooting, such as:
> >>
> >> $ ip address # check to see that all the interfaces are there, and
> >> that the IP addresses have been assigned
> >>
> >> $ sudo ovs-vsctl show # make sure that the bridges have the proper
> >> interfaces, vlans, and that all the expected bridges show up
> >>
> >> $ ping <other overcloud nodes> # you can try this on all VLANs to make
> >> sure that any VLAN trunks are working properly
> >>
> >> $ sudo ovs-appctl bond/show # if running bonding, check to see the
> >> bond status
> >>
> >> $ sudo os-net-config --debug -c /etc/os-net-config/config.json # run
> >> the network configuration script again to make sure that it is able to
> >> configure the interfaces without error. WARNING, MAY BE DISRUPTIVE as
> >> this will reset the network interfaces, run on console if possible.
> >>
> >> However, I want to first double-check that you had a valid command
> >> line. You only show "openstack overcloud deploy --templates" in your
> >> original email. You did have a full command line, right? Refer to the
> >> official installation guide for the right parameters.
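> >>
> >> Just to illustrate (the exact flags and any -e environment files depend on
> >> your setup and the guide), a fuller command line looks more like:
> >>
> >> $ openstack overcloud deploy --templates \
> >>     --control-scale 1 --compute-scale 1 \
> >>     --ntp-server pool.ntp.org --timeout 90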
> >>
> >>
> >> --
> >> ---------------------------------------------
> >> Erming Pei, Ph.D
> >> Senior System Analyst; Grid/Cloud Specialist
> >>
> >> Research Computing Group
> >> Information Services & Technology
> >> University of Alberta, Canada
> >>
> >> Tel: +1 7804929914 Fax: +1 7804921729
> >> ---------------------------------------------
> >>
> >> _______________________________________________
> >> Rdo-list mailing list
> >> Rdo-list at redhat.com
> >> https://www.redhat.com/mailman/listinfo/rdo-list
> >>
> >> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
>
> --
> ---------------------------------------------
> Erming Pei, Ph.D
> Senior System Analyst; Grid/Cloud Specialist
>
> Research Computing Group
> Information Services & Technology
> University of Alberta, Canada
>
> Tel: +1 7804929914 Fax: +1 7804921729
> ---------------------------------------------
>
>