Large Scanned Documents


  • Creator
    Topic
  • #53363
    Tim Wanner
    Participant

      I’m curious how other facilities process large documents.  In the course of migrating an AIX server from Cloverleaf 5.5 to 5.7, we’ve hit an obstacle trying to process large (50 to 100 MB) documents.  Writing to file?

      Our existing process worked well on Cloverleaf 5.5 / AIX 5.1.  We’ve verified all server settings but still get the error “can’t alloc xxxxxxxx bytes”.

    • Author
      Replies
      • #77425
        Richard Hart
        Participant

          Hi Tim.

          We use HTTP rather than HL7 to receive PDF documents, but the logic should be very similar.

          The messages are parsed in the first Cloverleaf thread (from the IB) and the document is removed and stored using message unique identifiers.

          When the message is to be sent outbound from Cloverleaf, the document is added back to the message.
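
          Roughly, the inbound side can look like the following minimal Tcl sketch. It assumes the document travels in OBX-5 and uses a hypothetical staging directory; the proc name, field positions, and path are illustrative assumptions, not our exact code.

          Code:


          # Hypothetical inbound tps proc: strip the large payload out of OBX-5,
          # park it on disk keyed by the MSH-10 control ID, and forward the
          # now-small message.  Field positions and the staging path are
          # assumptions for illustration only.
          proc strip_document { args } {
              keylget args MODE mode

              switch -exact -- $mode {
                  start    { return "" }
                  shutdown { return "" }
                  run {
                      keylget args MSGID mh
                      set segs  [split [msgget $mh] \r]
                      set out   {}
                      set ctlid ""
                      foreach seg $segs {
                          set flds [split $seg |]
                          if {[string equal [lindex $flds 0] "MSH"]} {
                              set ctlid [lindex $flds 9]     ;# MSH-10 control ID
                          }
                          if {[string equal [lindex $flds 0] "OBX"] && [string length $ctlid]} {
                              # park the large payload on disk, keyed by control ID
                              set fh [open "/cloverleaf/staging/$ctlid.doc" w]
                              puts -nonewline $fh [lindex $flds 5]   ;# OBX-5 payload
                              close $fh
                              set flds [lreplace $flds 5 5 $ctlid]   ;# leave a pointer behind
                              set seg  [join $flds |]
                          }
                          lappend out $seg
                      }
                      msgset $mh [join $out \r]
                      return "{CONTINUE $mh}"
                  }
              }
              return ""
          }

          The outbound side does the reverse: look up the staging file named by the pointer left in OBX-5 and splice the document back into the message before sending. Note this sketch still copies the full message once via msgget, so the engine process needs headroom for one copy.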

        • #77426
          David Barr
          Participant

            We don’t usually send large files through Cloverleaf.

            When I had to do this in the past, I had to be careful with any TCL code that processed each message. If you call “msgget”, that makes a separate copy of the message in memory, and that would cause the error that you mentioned. If you have to use TCL, you might be able to specify the offset and length parameters to msgget and process the message in smaller chunks.
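
            Something along these lines, as a sketch only. It assumes msgget really does accept optional offset and length arguments on your release; msglength and the process_chunk handler are placeholders to verify against your Cloverleaf Tcl reference.

            Code:


            set chunk 1048576                        ;# 1 MB per slice
            set size  [msglength $mh]                ;# total message data length
            for {set off 0} {$off < $size} {incr off $chunk} {
                set data [msgget $mh $off $chunk]    ;# copy only this slice into memory
                process_chunk $data                  ;# hypothetical per-chunk handler
            }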

          • #77427
            Russ Ross
            Participant

              First I would like to credit Steve Fraser for finding this information out today and letting me know about it.

              Steve built a test to purposely find out whether a claims file 4 times our worst-case HL7 size would break our AIX 5.3 / Cloverleaf 5.6rev2 process.

              In his test case he created a complex 8 megabyte claim message that consumed 412 megabytes of process memory when translated, which he learned exceeded the AIX 5.3 default of 256 megabytes.

              His xlate test crashed the process with the following error message:

              Code:


              [pti :sign:WARN/0:t_stevef_xlate:11/14/2012 10:58:46] Thread 1 received signal 11
              [pti :sign:WARN/0:t_stevef_xlate:11/14/2012 10:58:46] PC = 0xf014
              unable to alloc 28 bytes

              which sounded a lot like what I see in this Clovertech post.

              Steve Fraser’s search for a solution led him to this URL

              http://pic.dhe.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=%2Fcom.ibm.itame3.doc_5.1%2Fam51_perftune113.htm

              and discovered that even though the /etc/security/limits settings might be large enough, an environment variable (LDR_CNTRL) hidden from us was restricting us.

              In our case we were able to get the test to work by running the following commands at an xterm prompt:

              Code:


              export LDR_CNTRL=MAXDATA=0x20000000
              # start_process (which Steve told me was the xlt tester )
              unset LDR_CNTRL

              This increased our AIX data segment from the 256 MB default to 512 MB (0x20000000 = 536,870,912 bytes), allowing our test to consume the 412 MB it needed to run without crashing the process.

              We just learned of this method today and don’t have advice yet on how best to utilize it.

              If we do adopt it, we will need to determine what size to increase to and the method we will use to apply it.

              We are also preparing to upgrade to AIX 7.1 / Cloverleaf 6.0, so we might wait to see how that LPAR performs first.

              I did modify our host server start script to include the following comments, to make sure I don’t completely forget what was discovered today if we get around to using it.

              Code:

              #!/usr/bin/ksh

              rm -f $HCIROOT/server/.exit

              cd $HCIROOT/server/logs
              myDISPLAY=$DISPLAY
              unset DISPLAY

              # here is an environment variable that can be set if a process uses too much paging space
              # that is discussed at this URL:  http://pic.dhe.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=%2Fcom.ibm.itame3.doc_5.1%2Fam51_perftune113.htm
              # In this example the process memory limit is set to 512 megabytes,
              # which was big enough to process an 8 megabyte HL7 claim that took 412 megabytes of process memory
              # which was 4 times bigger than our worst case
              #
              # export LDR_CNTRL=MAXDATA=0x20000000
              #

              hciss -s h
              export DISPLAY=$myDISPLAY

              # might need to unset the LDR_CNTRL when done
              #
              # unset LDR_CNTRL
              #

              This is just a thought on how we might use it, not a recommendation, because we have yet to discuss and investigate it sufficiently.

              Even though that is where we stand, this knowledge might be of immediate value to you before we get to that point, so I’m sharing something that might lead you to a solution.

              Russ Ross
              RussRoss318@gmail.com

            • #77428
              Tim Wanner
              Participant

                We had tried this approach on a recommendation from IBM, with no success, until we applied the latest software updates to AIX 6.1.  We are still in the POC stage, but it appears this is the answer.  We are able to allocate up to a gig to a particular process.  You have to use a switch to designate shared memory, but this is great stuff.  Thanks, Russ, for the post.

              • #77429
                David Barr
                Participant

                  When I previously tried increasing the process memory limit on AIX, it didn’t prevent me from running into a compiled-in memory limit of the hciengine process. This was probably on version 5.2 or so. I think the limit was about 64 megs.

                • #77430
                  Dan Ullom
                  Participant

                    We were unable to make LDR_CNTRL or ldedit work until Infor suggested applying the shared memory flag to hciengine only.  In practice we found it much easier to apply the LDR_CNTRL MAXDATA setting permanently using ldedit.  This setup seems less error-prone because you don’t have to worry about setting an environment variable every time the process is bounced.

                    To permanently bless the hciengine and hcitcl binaries using ldedit, stop all running Cloverleaf processes and make a backup of hciengine and hcitcl, found in $HCIROOT/bin.

                    Then run:

                    shell> ldedit -b maxdata=0x40000000/dsa hciengine

                    shell> ldedit -b maxdata=0x20000000 hcitcl

                    If you get any warnings about “file in use”, check for running processes; anything running Tcl will likely have hcitcl locked.  Stale shared memory segments created before the settings change can also cause problems; remove them or just reboot.

                    Having done this, we are now able to control the maximum memory usage of our engines on AIX.  Note that this applies the fix to every Cloverleaf engine on the box, so handle with care.
