The Problem
I'm getting the following error in my Gateway log file.
<Thu May 23 08:15:48> INFO: Translator ConManager accept() failed. Details: 'checkListenPortActivity()'; ''; errno=24 (Too many open files)
Potential Causes of File Limit Errors:
While systemd automates service startup and restart (including after a system reboot), it also enforces resource limits on the processes it manages. The error may stem from any of the following:
- systemd service constraints: limits on how many files a daemon can open while the process is under systemd's control.
- User-level ulimits: constraints specific to the user account running the process.
- Global/kernel ulimits: system-wide limits defined in the Linux kernel or global configuration (for example, /etc/security/limits.conf or the kernel's fs.file-max setting).
If you are bypassing the gatewayctl script and using systemd service files directly, you must explicitly define resource limits within the [Service] section of your unit file:
LimitNOFILE=<value>
LimitNPROC=<value>
Note: Because "ulimits" can be set at various levels (user, global, or kernel), consult your RHEL SysAdmin to ensure these values align with your specific environment's requirements.
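If the Gateway runs directly as a systemd unit, you can check which limits systemd currently applies to it. The command below is a minimal check; <gateway unit> is a placeholder for the unit name used in your environment.
systemctl show -p LimitNOFILE -p LimitNPROC <gateway unit>
#Prints the open files and process limits systemd applies to the service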
To Investigate Further:
Use the commands below:
- To get the open files limit for the user running the netprobe process:
ulimit -a
#Sample output below
testuser@TestingEnvironment:~$ ulimit -a
real-time non-blocking time (microseconds, -R) unlimited
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 31445
max locked memory (kbytes, -l) 1016536
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 31445
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
ulimit -n
#Sample output below
testuser@TestingEnvironment:~$ ulimit -n
1024
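Note that ulimit -n reports the soft limit for the current shell. To compare the soft and hard limits explicitly (assuming a bash shell), the commands below are a quick check:
ulimit -Sn
#Soft limit on open files (the limit the process actually hits)
ulimit -Hn
#Hard limit on open files (the ceiling a non-root user can raise the soft limit to)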
- To get the max open files limit for the netprobe process itself:
cat /proc/<PID>/limits
#Sample output below
testuser@TestingEnvironment:~$ cat /proc/4193/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             31445                31445                processes
Max open files            1024                 1048576              files
Max locked memory         1040932864           1040932864           bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       31445                31445                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
cat /proc/<PID>/limits | grep "Max open files"
#Sample output below
testuser@TestingEnvironment:~$ cat /proc/4193/limits | grep "Max open files"
Max open files 1024 1048576 files
- The number of entries in the /proc/<PID>/fd directory corresponds to the number of open file descriptors used by the netprobe (note that ls -l also prints a "total" line, so subtract one from the count below):
ls -l /proc/<PID>/fd | wc -l
#Sample output below
testuser@TestingEnvironment:~$ ls -l /proc/4193/fd | wc -l
6
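If lsof is installed, it gives a similar count from the process's point of view. Note that lsof also lists entries such as the current directory and memory-mapped libraries, plus a header line, so the number will not exactly match the /proc/<PID>/fd count:
lsof -p <PID> | wc -l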
ls -l /proc/<PID>/fd
#Sample output below
testuser@TestingEnvironment:~$ ls -l /proc/4193/fd
total 0
lrwx------ 1 testuser testuser 64 Feb 6 12:56 0 -> /dev/pts/1
lrwx------ 1 testuser testuser 64 Feb 6 12:56 1 -> /dev/pts/1
lrwx------ 1 testuser testuser 64 Feb 6 12:56 2 -> /dev/pts/1
l-wx------ 1 testuser testuser 64 Feb 6 12:56 3 -> /home/testuser/netprobe731/netprobe/netprobe.log
lrwx------ 1 testuser testuser 64 Feb 6 12:56 4 -> 'socket:[35317]'
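When the descriptor count keeps climbing, it can help to see which targets dominate. The pipeline below is a minimal sketch using standard tools; it groups the symlink targets under /proc/<PID>/fd (sockets, log files, and so on) and counts each:
ls -l /proc/<PID>/fd | awk 'NR>1 {print $NF}' | sort | uniq -c | sort -rn
#NR>1 skips the "total" line; the last field of each remaining line is the symlink target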
The Solution:
The error message "Too many open files" is an operating-system error telling the process (Gateway or Netprobe) that it has run out of available file descriptors. Please work with your infra/Unix team to adjust the limits.
You can also edit your gateway unit file under /etc/systemd/system/ and add the following to the [Service] section:
LimitNOFILE=#####
LimitNPROC=#####
where ##### is the maximum number of open files and processes recommended/determined by your Systems Administrator.
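As a minimal sketch (the unit name and the 65536 values are placeholders only; use the numbers agreed with your Systems Administrator), the edited [Service] section and the reload steps might look like this:
[Service]
LimitNOFILE=65536
LimitNPROC=65536
#Reload systemd's configuration and restart the service so the new limits take effect
systemctl daemon-reload
systemctl restart <gateway service>
After restarting, re-check /proc/<PID>/limits for the process to confirm the new "Max open files" value is in effect.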