Memory Maintenance for QDXi 3.8.1 on AIX 5.1
We are running QDXi 3.8.1 on AIX 5.1, and we are experiencing hciengine abends. The last message in the process log file before the abend is "unable to alloc 8388613 bytes"; the number of bytes varies from one occurrence to the next. The failed allocation is internal memory, not disk space; our disks have plenty of room. AIX kills the process with a signal 6 (SIGABRT).
We are processing batch X12 files. Our problems occur either before or after translation, in our Tcl code.
My basic question is: how does QDXi handle memory? Is there something we can add to our Tcl to release memory? We use global variables in all our scripts. Will using metadata save memory?
Any feedback will be appreciated.
I just happened to be going through the older messages when I came across this one. Coincidentally, we are having similar issues intermittently with one of our Tcl procs that we call from cron for some reporting. Occasionally we will get a core dump, and the message (if captured) is "unable to alloc …. bytes."
I am in the process of rewriting the proc to be less memory hungry, but I am curious.
Did you ever find a resolution?
If not anyone got any ideas?
Thanks.
Ryan Spires
We get a core dump with our errors too, but it means nothing to me.
What kind of changes are you making to be more memory efficient?
Brian
The proc currently loops through the list of files and pulls the entire contents of each file into memory.
I am considering rewriting the proc so that it reads the data a line at a time.
I am hoping this eliminates the issues I am having.
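Roughly what I have in mind (a minimal sketch; the file name and the per-line classification are placeholders for what our proc actually does):

proc count_lines_by_type { filename } {
    # Read one line at a time so only a single line is ever
    # held in memory, instead of slurping the whole file.
    array set counts {}
    set fh [open $filename r]
    while { [gets $fh line] >= 0 } {
        # Placeholder: classify by the first X12 element on the line
        set segtype [lindex [split $line *] 0]
        if { [info exists counts($segtype)] } {
            incr counts($segtype)
        } else {
            set counts($segtype) 1
        }
    }
    close $fh
    return [array get counts]
}

The old version does a [read $fh] and splits the result, which holds the entire file plus the list of its lines in memory at once.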
Thanks,
Ryan
I am processing X12 files. I may have an enrollment file with several hundred members or a claim file with a couple thousand claims in it. We have a Tcl proc that counts the number of lines and splits the file into smaller files if the size limit is exceeded.
I believe our problem lies in processing all those smaller files. After we translate them, we have to merge them back into a single file; hence the use of global variables and metadata. We have to maintain data values through the TPS stack in order to reassemble the file properly, name it correctly, and put the output in the proper directory.
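Roughly, the state-carrying looks something like this (simplified, with the key names changed; this sketch assumes the standard msgmetaget/msgmetaset commands and the USERDATA metadata field):

proc stash_file_info { mh fname outdir } {
    # Attach reassembly info to the message itself so it
    # survives the trip through the TPS stack.
    set kl {}
    keylset kl FILENAME $fname
    keylset kl OUTDIR $outdir
    msgmetaset $mh USERDATA $kl
}

proc fetch_file_info { mh } {
    # Pull the keyed list back off the message on the far side.
    set kl [msgmetaget $mh USERDATA]
    keylget kl FILENAME fname
    keylget kl OUTDIR outdir
    return [list $fname $outdir]
}

The globals hold the same kind of thing for the places metadata does not reach.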
When the process dies in the middle of processing all these files, we bounce the process, put the same file through again, and it processes fine. So it seems as if memory consumption builds up over time; I'm trying to find out what's causing it.
Any suggestions?
Brian
At any given time I should not have more than a single message plus a list of msg types and counters in memory.
That is my thought, anyway…
As you process your files, can you append to an existing output file instead of combining the files in memory? I think you might be able to conserve some memory there… just guessing.
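Something like this (a sketch; the names are placeholders):

proc append_piece { final_file piece } {
    # Open in append mode, write one translated chunk, close.
    # Only one chunk is ever held in memory at a time.
    set out [open $final_file a]
    puts -nonewline $out $piece
    close $out
}

as opposed to accumulating every piece in a global variable and writing the whole thing out at the end.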
Other thoughts and suggestions welcome 😆
Ryan
You can run ‘ulimit -a’ to get a list of your user resource limits. Here are my settings:
$ ulimit -a
time(seconds) unlimited
file(blocks) 2097151
data(kbytes) 131072
stack(kbytes) 98304
memory(kbytes) 98304
coredump(blocks) 2097151
nofiles(descriptors) 2000
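If the engine is bumping into the data segment limit, it may be worth testing with a larger value, e.g. 'ulimit -d unlimited' in the shell that starts the engine (on AIX the per-user limits live in /etc/security/limits). Just a guess based on the numbers above.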
Hope this helps.
Steve
From the log of one of the processes having this problem …
[prod:prod:INFO/0: STARTUP_TID:03/22/2007 14:23:59] Copyright 1993-2006, Quovadx Inc.
[prod:prod:INFO/0: STARTUP_TID:03/22/2007 14:23:59] CLOVERLEAF(R) Integration Services 5.4.1P
[prod:prod:INFO/0: STARTUP_TID:03/22/2007 14:23:59] Linked by root on host=(AIX lion 2 5 000FA26D4C00) at Fri Jun 9 11:24:59 2006 in /usr/work/jerickso/cloverrel/cloverleaf/engine/main (build 1)
[prod:prod:INFO/0: STARTUP_TID:03/22/2007 14:23:59] Started at Thu Mar 22 14:23:59 2007
[prod:prod:INFO/0: STARTUP_TID:03/22/2007 14:23:59] Engine process is 684468 on host saturndr
[prod:prod:INFO/1: dx49_cmd:03/22/2007 14:24:00] Msg space limit is 0
[prod:prod:INFO/0: dx49_cmd:03/22/2007 14:24:00] DiskQue Minimum # of Messages/que: 50
[prod:prod:INFO/0: dx49_cmd:03/22/2007 14:24:00] DiskQue Virtual Memory percent:75.000000
[prod:prod:INFO/0: dx49_cmd:03/22/2007 14:24:00] Applying EO config: ''
[prod:prod:INFO/0: dx49_xlate:03/22/2007 14:24:00] Applying EO config: ''
[prod:prod:INFO/0: dx49scmpn_in:03/22/2007 14:24:01] Applying EO config: ''
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:01] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:01] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:01] Received command: 'dx49_xlate xrel_post'
[cmd :cmd :INFO/0: dx49_xlate:03/22/2007 14:24:01] Doing 'xrel_post' command with args ''
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:01] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:01] Command client went away. Closing connection.
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:02] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:02] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:02] Received command: 'dx49_xlate xrel_post'
[cmd :cmd :INFO/0: dx49_xlate:03/22/2007 14:24:02] Doing 'xrel_post' command with args ''
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:02] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:02] Command client went away. Closing connection.
unable to alloc 25165816 bytes
… then core dump.
tcsh|saturndr> ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited
-- Max Drown (Infor)
I have received this error recently when I had an infinite loop situation. If I read your log correctly, your translation has a post-xlate Tcl proc that runs. You might look for any code that could end up in a loop and make sure nothing can prevent it from exiting.
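If it helps while you hunt, a cheap safety net is an iteration cap (a sketch; the limit and the loop body are placeholders):

proc drain_segments { segments } {
    # Guard against a runaway loop: abort after an upper bound far
    # larger than any legitimate iteration count.
    set max_iterations 100000
    set i 0
    while { [llength $segments] > 0 } {
        if { [incr i] > $max_iterations } {
            error "loop guard tripped after $max_iterations iterations"
        }
        # ... process [lindex $segments 0] here ...
        set segments [lrange $segments 1 end]
    }
}

If the guard ever fires, you get an error in the process log instead of the engine allocating memory until malloc fails.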
John
John Mercogliano
Sentara Healthcare
Hampton Roads, VA