I had to manually update the table below for the root & nova passwords at the FQDN host :-
[root@dfw01 ~(keystone_admin)]$ mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 35
Server version: 5.5.34-MariaDB MariaDB Server
Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current
input statement.
MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;
+----------+-------------------+-------------------------------------------+
| User     | Host              | Password                                  |
+----------+-------------------+-------------------------------------------+
| root     | localhost         | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |
| root     | dfw01.localdomain | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |  <== it's critical
| root     | 127.0.0.1         | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |
| root     | ::1               | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |
| keystone | localhost         | *936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A |
| keystone | %                 | *936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A |
| glance   | localhost         | *CC67CAF178CB9A07D756302E0BBFA3B0165DFD49 |
| glance   | %                 | *CC67CAF178CB9A07D756302E0BBFA3B0165DFD49 |
| cinder   | localhost         | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| cinder   | %                 | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| neutron  | localhost         | *4DF421833991170108648F1103CD74FCB66BBE9E |
| neutron  | %                 | *03A31004769F9E4F94ECEEA61AA28D9649084839 |
| nova     | localhost         | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| nova     | %                 | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| nova     | dfw01.localdomain | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |  <== it's critical
+----------+-------------------+-------------------------------------------+
15 rows in set (0.00 sec)
Otherwise, nothing is going to work beyond "allinone" testing. Once that's
done, your schema carries over to an F20 two-node real cluster. I am going to file a bug
regarding these updates, because I believe they should be done behind the scenes. The updated and
inserted rows are what allow the remote connection to the controller from the nova-compute and
neutron-openvswitch-agent services.
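For reference, the two "critical" FQDN rows can be created with statements like the following sketch; 'NOVA_DBPASS' and 'MYSQL_ROOT_PASS' are placeholders for the actual passwords (the hostname dfw01.localdomain is the one from the table above):

```sql
-- Sketch only: allow nova to authenticate from the controller's FQDN.
-- 'NOVA_DBPASS' is a placeholder; use the password from your nova.conf.
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'dfw01.localdomain'
    IDENTIFIED BY 'NOVA_DBPASS';

-- Same idea for root at the FQDN ('MYSQL_ROOT_PASS' is a placeholder):
SET PASSWORD FOR 'root'@'dfw01.localdomain' = PASSWORD('MYSQL_ROOT_PASS');

FLUSH PRIVILEGES;
```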
Thanks
Boris.
Date: Mon, 10 Feb 2014 10:50:40 +0530
From: kchamart(a)redhat.com
To: bderzhavets(a)hotmail.com
CC: rdo-list(a)redhat.com
Subject: Re: [Rdo-list] Neutron configuration files for a two node Neutron+GRE+OVS
(Please convince your mail client to wrap long lines, it's very
difficult to read your emails.)
On Sun, Feb 09, 2014 at 02:45:00AM -0500, Boris Derzhavets wrote:
[. . .]
> In a new attempt on a fresh F20 instance, neutron-server may be started only with
>
> [DATABASE]
> sql_connection = mysql://root:password@localhost/ovs_neutron
>
> Block like :-
>
> Port "gre-2"
> Interface "gre-2"
> type: gre
> options: {in_key=flow, local_ip="192.168.1.147",
out_key=flow, remote_ip="192.168.1.157"}
>
> doesn't appear in `ovs-vsctl show` output. Nothing works on the Compute node;
> all configs are the same as in the first attempt.
>
> The error I get from mysql is "Access denied for
> 'root'@'new_hostname'". new_hostname, as before, is in /etc/hosts
>
>
> 192.168.1.147 new_hostname.localdomain new_hostname
>
> and in /etc/hostname
> new_hostname.localdomain
>
> For me it looks like a bug: neutron-server is bound to 127.0.0.1
> when it connects to the MariaDB database.
It could possibly be. Please write a clear bug with full details and
proper reproducer steps.
>
>
> I made 2 attempts to reproduce it from scratch, building the Controller, and
> every time the neutron-server start-up limitation came up.
> Kashyap, my question to you :-
>
> Am I correct in my conclusion that Neutron-Server's mysql
> credentials affect the network abilities of Neutron, or is the libvirtd daemon
> the real carrier for metadata, and would the schema work only on a
> non-default libvirt network for virtual machines ?
>
I don't follow your question. Please rephrase, or if you're convinced,
please write a bug with as much clear detail as possible:
https://wiki.openstack.org/wiki/BugFilingRecommendations
>
> Then the working real cluster is a kind of miracle. It's under testing on a
> daily basis.
Thanks for testing.
>
> Thanks.
> Boris.
>
> PS. All snapshots taken on the first cluster (still working successfully in the
> meantime with all updates accepted from yum) may be viewed here :-
>
>
http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-open...
>
--
/kashyap