[Rdo-list] Attempt to add Swift to 3 Node HAProxy\Keepalived Controller per https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md

Boris Derzhavets bderzhavets at hotmail.com
Tue Feb 23 18:39:30 UTC 2016


________________________________________
From: Javier Pena <javier.pena at redhat.com>
Sent: Tuesday, February 23, 2016 1:18 PM
To: rdo-list
Cc: Boris Derzhavets
Subject: Re: [Rdo-list] Attempt to add Swift to 3 Node HAProxy\Keepalived Controller per https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md

----- Original Message -----
>
> The cluster (VM-based) is up and running, and Keepalived is in good shape on
> all nodes. I followed
> https://github.com/beekhof/osp-ha-deploy/blob/master/keepalived/swift-config.md
>
> On each node :-
>
> [root at hacontroller1 ~(keystone_admin)]# openstack-status | grep swift
> openstack-swift-proxy:                  active
> openstack-swift-account:                active
> openstack-swift-container:              active
> openstack-swift-object:                 active
>
> [root at hacontroller1 ~(keystone_admin)]# netstat -antp | grep 6202
> tcp        0      0 192.169.142.221:6202    0.0.0.0:*               LISTEN
> 19929/python2
> tcp        0      0 192.169.142.221:6202    192.169.142.222:44530   TIME_WAIT
> -
> tcp        0      0 192.169.142.221:6202    192.169.142.221:38513   TIME_WAIT
> -
>
> [root at hacontroller1 ~(keystone_admin)]# ps -ef | grep 19928
>
> root     13985  5991  0 14:51 pts/0    00:00:00 grep --color=auto 19928
> swift    19928     1  0 14:15 ?        00:00:12 /usr/bin/python2
> /usr/bin/swift-object-server /etc/swift/object-server.conf
> swift    19981 19928  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-object-server /etc/swift/object-server.conf
> swift    19982 19928  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-object-server /etc/swift/object-server.conf
> swift    19983 19928  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-object-server /etc/swift/object-server.conf
> [root at hacontroller1 ~(keystone_admin)]# ps -ef | grep 19924
>
> root     14514  5991  0 14:51 pts/0    00:00:00 grep --color=auto 19924
> swift    19924     1  0 14:15 ?        00:00:12 /usr/bin/python2
> /usr/bin/swift-container-server /etc/swift/container-server.conf
> swift    19994 19924  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-container-server /etc/swift/container-server.conf
> swift    19995 19924  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-container-server /etc/swift/container-server.conf
>
> [root at hacontroller1 ~(keystone_admin)]# ps -ef | grep 19929
> root     14662  5991  0 14:51 pts/0    00:00:00 grep --color=auto 19929
> swift    19929     1  0 14:15 ?        00:00:12 /usr/bin/python2
> /usr/bin/swift-account-server /etc/swift/account-server.conf
> swift    19985 19929  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-account-server /etc/swift/account-server.conf
> swift    19986 19929  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-account-server /etc/swift/account-server.conf
>
> Ports are open on each node.
>
> I am getting :-
>
> [root at hacontroller1 ~(keystone_admin)]# swift list
> Account GET failed:
> http://controller-vip.example.com:8080/v1/AUTH_acdc927b53bd43ae9a7ed657d1309884?format=json
> 503 Service Unavailable  [first 60 chars of response] <html><h1>Service
> Unavailable</h1><p>The server is currently
>
> [root at hacontroller1 ~(keystone_admin)]# netstat -antp | grep 8080
> tcp        0      0 192.169.142.221:8080    0.0.0.0:*               LISTEN
> 19920/python2
> tcp        0      0 192.169.142.220:8080    0.0.0.0:*               LISTEN
> 1569/haproxy
> tcp        0      0 192.169.142.221:60969   192.169.142.220:8080    TIME_WAIT
> -
>
> So, I guess swift is not supposed to respond there. I am missing something
> here.
>
> [root at hacontroller1 ~(keystone_admin)]# ps -ef | grep 19920
> swift    19920     1  0 14:15 ?        00:00:02 /usr/bin/python2
> /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
> swift    19996 19920  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
> swift    19997 19920  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
> swift    19998 19920  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
> swift    19999 19920  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
> swift    20000 19920  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
> swift    20001 19920  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
> swift    20002 19920  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
> swift    20003 19920  0 14:15 ?        00:00:00 /usr/bin/python2
> /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
> root     29348  5991  0 14:21 pts/0    00:00:00 grep --color=auto 19920
>
> [root at hacontroller1 ~(keystone_admin)]# ps -ef | grep 1569
> root      1569  1547  0 12:33 ?        00:00:22 /usr/sbin/haproxy -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
> root     29973  5991  0 14:21 pts/0    00:00:00 grep --color=auto 1569
>
> controller-vip.example.com(VIP) address is 192.169.142.220
> controller-vip.example.com has haproxy listening on 8080  , not
> swift-proxy-server.
> /var/log/swift/swift.log is empty
>
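(For reference: the HA-keepalived guide fronts swift-proxy with an haproxy listen block of roughly this shape. This is a sketch only; the stanza name and the check timers are illustrative, and the IPs are taken from the netstat output above.)

```
listen swift_proxy_cluster
    bind 192.169.142.220:8080
    balance source
    option tcpka
    option tcplog
    server hacontroller1 192.169.142.221:8080 check inter 1s
    server hacontroller2 192.169.142.222:8080 check inter 1s
    server hacontroller3 192.169.142.223:8080 check inter 1s
```

With this shape, haproxy owns port 8080 on the VIP while each node's swift-proxy-server owns 8080 on its own address, which matches the two LISTEN lines in the netstat output.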
> First error in /var/log/messages
> Feb 23 15:25:37 hacontroller1 proxy-server: ERROR Insufficient Storage
> 192.169.142.222:6202/vdb (txn: tx511a1757780140d08cf15-0056cc4fc0)
> (client_ip: 192.169.142.223)
> Feb 23 15:25:37 hacontroller1 proxy-server: ERROR Insufficient Storage
> 192.169.142.223:6202/vdb (txn: tx511a1757780140d08cf15-0056cc4fc0)
>
> was fixed per
> https://ask.openstack.org/en/question/57608/proxy-server-error-insufficient-storage-10001556002sdb1/
>
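(The linked fix boils down to Swift's mount check: each account/container/object server answers "507 Insufficient Storage", and the proxy logs "ERROR Insufficient Storage", when the device directory under its devices root is not actually a mounted filesystem owned by swift. The relevant options, shown here as a sketch with Swift's default values:)

```
# /etc/swift/object-server.conf -- account-server.conf and
# container-server.conf take the same [DEFAULT] options
[DEFAULT]
devices = /srv/node       # ring device "vdb" must exist as /srv/node/vdb
mount_check = true        # when true, an unmounted device directory is
                          # reported as 507 Insufficient Storage
```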
> But , I am still getting in /var/log/messages
>
> Feb 23 15:51:02 hacontroller1 object-expirer: Unhandled exception:
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/swift/obj/expirer.py", line 169, in run_once
>     self.swift.get_account_info(self.expiring_objects_account)
>   File "/usr/lib/python2.7/site-packages/swift/common/internal_client.py", line 358, in get_account_info
>     resp = self.make_request('HEAD', path, {}, acceptable_statuses)
>   File "/usr/lib/python2.7/site-packages/swift/common/internal_client.py", line 194, in make_request
>     _('Unexpected response: %s') % resp.status, resp)
> UnexpectedResponse: Unexpected response: 503 Service Unavailable
> (txn: txd6ecc9e8f9eb46a284d8a-0056cc55b6)
> Feb 23 15:51:15 hacontroller1 account-server: 192.169.142.223 - -
> [23/Feb/2016:12:51:15 +0000] "HEAD /vdb/3926/.expiring_objects" 507 - "HEAD
> http://localhost/v1/.expiring_objects" "tx088b0d9b2d814c56b3f8b-0056cc55c3"
> "proxy-server 351" 0.0002 "-" 28097 -
>

Hi Boris,

From the errors above, this looks like a Swift configuration issue. Where did you mount the file system for vdb? In the instructions, they are mounted under /srv/node/vdb. If it is mounted outside /srv, I remember having SELinux issues with it.
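(A minimal sketch of the mount the guide expects, assuming /dev/vdb1 holds the Swift data; after mounting, the tree should be chowned to swift:swift and relabeled with restorecon so SELinux allows the Swift services access:)

```
# /etc/fstab -- mount the device where the ring's device name ("vdb") points
/dev/vdb1  /srv/node/vdb  xfs  noatime,nodiratime  0 0
```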

Javier,
I have updated haproxy.cfg and, after restarting haproxy, finally got it back (and understood haproxy monitoring) :-

[root at hacontroller2 ~(keystone_admin)]# swift list
Account GET failed: http://controller-vip.example.com:8090/v1/AUTH_acdc927b53bd43ae9a7ed657d1309884?format=json 503 Service Unavailable  [first 60 chars of response] <html><h1>Service Unavailable</h1><p>The server is currently

[root at hacontroller2 ~(keystone_admin)]# netstat -antp | grep 8090
tcp        0      0 192.169.142.220:8090    0.0.0.0:*               LISTEN      27323/haproxy       
tcp        0      0 192.169.142.222:8090    0.0.0.0:*               LISTEN      19645/python2       
tcp        0      0 192.169.142.222:39922   192.169.142.220:8090    TIME_WAIT   -   

On all nodes:

[root at hacontroller2 ~(keystone_admin)]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   41G   11G   30G  28% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G   96K  1.9G   1% /dev/shm
tmpfs                    1.9G   17M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vdb1                 20G   33M   20G   1% /srv/node/vdb1
/dev/vda1                497M  167M  330M  34% /boot
            


Also, how big is the vdb disk? In my tests I used an 8 GB virtual disk; I'm not sure how small we could make it.

20 GB.

Thank you.
Boris.

Regards,
Javier


>
> Please, advise.
>
> Boris.
>
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>



