open file limits on RedHat


  • Creator
    Topic
  • #52300
    Mike Ellert
    Participant

    I’m running CL5.8Rev3 on Redhat.

    Lawson support has confirmed there is an issue that leaves file descriptors open when the outbound save files are opened and the existing files are zero length.  As of Rev3 it only seems to happen on the idx files, but doing save cycles and then closing and reopening a thread causes the number of file descriptors held by the engine process to grow.

    When it hits 1024 open files (confirmed using lsof), the process will start to throw errors about “too many files open” and will panic soon after.  I verified this by repeatedly recreating the conditions that cause file handles to be left open and actively monitoring the process at the same time.
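
    The same count can also be watched without lsof by counting the entries under the process's fd directory in /proc.  A rough python sketch, with a hypothetical PID standing in for the engine process:

    import os
    import time

    ENGINE_PID = 12345  # hypothetical - substitute the engine process PID from ps

    while True:
        # each entry in /proc/<pid>/fd is one open file descriptor
        fd_count = len(os.listdir('/proc/%d/fd' % ENGINE_PID))
        print('open fds: %d' % fd_count)
        time.sleep(5)  # Ctrl-C to stop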

    Our soft and hard open file limits for the hci user and system are set to 20,000.

    My question is, why does the engine process seem to ‘ignore’ the user limits for open files?

    Any thoughts would be appreciated.

  • Author
    Replies
    • #73727
      Ron Ridley
      Participant

      Besides checking your ulimit for those users:

      ulimit -n

      Try also checking your sysctl settings:

      sysctl fs.nr_open

      sysctl fs.file-max

      Do any of the above values seem to be at that 1024 file limit?
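
      It can also be worth reading the limits the running engine process actually carries, since a process keeps whatever nofile limit it inherited when it was started.  On kernels that expose /proc/<pid>/limits, a minimal python sketch (the PID here is hypothetical):

      ENGINE_PID = 12345  # hypothetical - substitute the engine process PID from ps

      with open('/proc/%d/limits' % ENGINE_PID) as fh:
          for line in fh:
              if line.startswith('Max open files'):
                  # the soft and hard limits actually in force for that process
                  print(line.rstrip())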

    • #73728
      Mike Ellert
      Participant

      Thanks for the response, Ron.

      file-max is set to 100,000

      ulimit -n reports 20,000 for the hci user.

      I can’t find any entry for nr_open but some quick research says that it defaults to 1024*1024.

      After our last failure we rebooted the machine; all of the adjustments to file-max and the user hard and soft limits had been made since the previous boot, so they should now be in effect.  I will push the process this morning to reach the 1024 limit, see if it is still a problem, and report back later.

    • #73729
      Mike Ellert
      Participant

      No luck.  The process still cannot exceed 1024 open files.

    • #73730
      Ron Archambault
      Participant

      Did you set the soft and hard nofile limits for the hci user in /etc/security/limits.conf?

    • #73731
      Mike Ellert
      Participant

      We have them set in there at 20,000.

    • #73732
      Ron Ridley
      Participant

      To see whether it is the kernel mis-reporting values or an issue with CL picking up the ulimit changes, can you recreate opening more than 1024 files with another application?

      Here’s an example using python:

      import os

      f = {}

      # create a lot of files and open them all at once
      for x in range(1050):
          f[x] = open('%s' % x, 'w')

      # close the open files
      for x in range(1050):
          f[x].close()

      # remove the created files
      for x in f:
          os.unlink('%s' % x)

    • #73733
      Mike Ellert
      Participant

      Genius, Ron – just freakin' genius.  Nice idea!

      I checked using your python script – I modified it to wait for input so I could double dog check the files were simultaneously open.  Your script was able to open 1065 files at once.

      I’ll forward the findings on to Lawson support.

      Thank you.
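
      For anyone who wants to repeat the check, the wait-for-input change is essentially just a pause between the open loop and the clean-up, roughly along these lines (a sketch, not the exact script):

      import os

      f = {}

      # open 1050 files and keep them all open at once
      for x in range(1050):
          f[x] = open('%s' % x, 'w')

      # pause here so lsof can confirm the files really are open simultaneously
      raw_input('Files open - press Enter to close and clean up: ')  # use input() on Python 3

      # close and remove everything
      for x in f:
          f[x].close()
          os.unlink('%s' % x)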

