Totally fixed it.  Thanks man.  I learn something new about systemd 
every day....
On 11/19/15 7:55 PM, Jeff Weber wrote:
 I struggled with this as well until I found out that limits.conf
 entries don't apply to systemd-managed services.
 If you create a /etc/systemd/system/rabbitmq-server.service.d directory
 and place a limits.conf file in there with contents similar to

     [Service]
     LimitNOFILE=4096

 then reload and restart:

     systemctl daemon-reload
     systemctl restart rabbitmq-server
https://fedoraproject.org/wiki/Systemd#How_do_I_customize_a_unit_file.2F_...
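
 Putting that together, a minimal sketch of the whole sequence (the
 drop-in file name and the 65536 value below are just examples, not
 anything RabbitMQ requires):

     # create the drop-in directory and an override file for the unit
     mkdir -p /etc/systemd/system/rabbitmq-server.service.d
     printf '[Service]\nLimitNOFILE=65536\n' \
         > /etc/systemd/system/rabbitmq-server.service.d/limits.conf

     # make systemd pick up the override, then restart the broker
     systemctl daemon-reload
     systemctl restart rabbitmq-server

     # verify: what systemd grants the unit, and what RabbitMQ reports
     systemctl show rabbitmq-server -p LimitNOFILE
     rabbitmqctl status | grep -A 4 file_descriptors
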
 On Thu, Nov 19, 2015 at 10:43 PM, Erich Weiler <weiler@soe.ucsc.edu> wrote:
     Hi Y'all,
     I'm sure someone has encountered this issue...  Basically, my
     rabbitmq instance on my controller node is running out of file
     descriptors (this is on RHEL 7).  I've upped the max file descriptors
     (nofile) to 1000000 in /etc/security/limits.conf, and my sysctl
     limit for file descriptors is equally huge.  Yet I can't get my
     rabbitmq process to get its limits past 1000 or so:
     [root@os-con-01 ~]# ps -afe | grep rabbit
     rabbitmq  4989     1  4 16:42 ?        00:07:10
     /usr/lib64/erlang/erts-5.10.4/bin/beam.smp -W w -K true -A30 -P
     1048576 -- -root /usr/lib64/erlang -progname erl -- -home
     /var/lib/rabbitmq -- -pa
     /usr/lib/rabbitmq/lib/rabbitmq_server-3.3.5/sbin/../ebin -noshell
     -noinput -s rabbit boot -sname rabbit@os-con-01 -boot start_sasl
     -kernel inet_default_connect_options [{nodelay,true}] -sasl
     errlog_type error -sasl sasl_error_logger false -rabbit error_logger
     {file,"/var/log/rabbitmq/rabbit@os-con-01.log"} -rabbit
     sasl_error_logger
     {file,"/var/log/rabbitmq/rabbit@os-con-01-sasl.log"} -rabbit
     enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit
     plugins_dir
     "/usr/lib/rabbitmq/lib/rabbitmq_server-3.3.5/sbin/../plugins"
     -rabbit plugins_expand_dir
     "/var/lib/rabbitmq/mnesia/rabbit@os-con-01-plugins-expand" -os_mon
     start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup
     false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@os-con-01"
     -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672
     rabbitmq  5004     1  0 16:42 ?        00:00:00
     /usr/lib64/erlang/erts-5.10.4/bin/epmd -daemon
     rabbitmq  5129  4989  0 16:42 ?        00:00:00 inet_gethost 4
     rabbitmq  5130  5129  0 16:42 ?        00:00:00 inet_gethost 4
     root     17470 17403  0 19:34 pts/0    00:00:00 grep --color=auto rabbit
     [root@os-con-01 ~]# cat /proc/4989/limits
     Limit                     Soft Limit           Hard Limit           Units
     Max cpu time              unlimited            unlimited            seconds
     Max file size             unlimited            unlimited            bytes
     Max data size             unlimited            unlimited            bytes
     Max stack size            8388608              unlimited            bytes
     Max core file size        0                    unlimited            bytes
     Max resident set          unlimited            unlimited            bytes
     Max processes             127788               127788               processes
     Max open files            1024                 4096                 files
     Max locked memory         65536                65536                bytes
     Max address space         unlimited            unlimited            bytes
     Max file locks            unlimited            unlimited            locks
     Max pending signals       127788               127788               signals
     Max msgqueue size         819200               819200               bytes
     Max nice priority         0                    0
     Max realtime priority     0                    0
     Max realtime timeout      unlimited            unlimited            us
     [root@os-con-01 ~]#
     This is causing huge problems in my OpenStack cluster (Kilo
     release).  I've read that you can set this limit in
     /etc/rabbitmq/rabbitmq-env.conf or /etc/rabbitmq/rabbitmq.config, but
     no matter what I put there the limit doesn't budge, even after
     restarting rabbitmq many times.  Does this have something to do
     with systemd?
     [root@os-con-01 ~]# rabbitmqctl status
     Status of node 'rabbit@os-con-01' ...
     [{pid,4989},
       {running_applications,[{rabbit,"RabbitMQ","3.3.5"},
                              {os_mon,"CPO  CXC 138 46","2.2.14"},
                              {mnesia,"MNESIA  CXC 138 12","4.11"},
                              {xmerl,"XML parser","1.3.6"},
                              {sasl,"SASL  CXC 138 11","2.3.4"},
                              {stdlib,"ERTS  CXC 138 10","1.19.4"},
                              {kernel,"ERTS  CXC 138 10","2.16.4"}]},
       {os,{unix,linux}},
       {erlang_version,"Erlang R16B03-1 (erts-5.10.4) [source] [64-bit]
     [smp:32:32] [async-threads:30] [hipe] [kernel-poll:true]\n"},
       {memory,[{total,645523200},
                {connection_procs,32257624},
                {queue_procs,48513416},
                {plugins,0},
                {other_proc,15448376},
                {mnesia,1209984},
                {mgmt_db,0},
                {msg_index,292800},
                {other_ets,1991744},
                {binary,517865992},
                {code,16698259},
                {atom,602729},
                {other_system,10642276}]},
       {alarms,[]},
       {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
       {vm_memory_high_watermark,0.4},
       {vm_memory_limit,13406973132},
       {disk_free_limit,50000000},
       {disk_free,37354610688},
       {file_descriptors,[{total_limit,924}, <-----  ??????
                          {total_used,831},
                          {sockets_limit,829},
                          {sockets_used,829}]},
       {processes,[{limit,1048576},{used,8121}]},
       {run_queue,0},
       {uptime,10537}]
     ...done.
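
     For reference, a quick way to cross-check what systemd thinks the
     limit is (assuming the unit is named rabbitmq-server.service; 4989
     is the beam.smp PID from the ps output above):

         # the limit systemd applies to the unit
         systemctl show rabbitmq-server.service -p LimitNOFILE

         # the limit the running Erlang VM actually got
         grep 'open files' /proc/4989/limits
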
     Anyone know how to get the file descriptor limits up for rabbitmq?
     I've only got like 40 nodes in my OpenStack cluster, and it's
     choking, and I need to add several hundred more nodes...
     Any help much appreciated!!!  I looked around the list and couldn't
     find anything on this, and I've RTFM'd as much as I could...
     cheers,
     erich