Memory Maintenance for QDXi 3.8.1 on AIX 5.1
September 29, 2005 at 4:52 pm #48061 | Brian Davis (Participant)
We are running QDXi 3.8.1 on AIX 5.1, and we are experiencing hciengine abends. The last message in the process log file before abending is: “unable to alloc 8388613 bytes”. The number of bytes changes from one message to another. This allocation is of internal memory, not disk space; our disks have plenty of room. AIX abends with a Signal 6. We are processing batch X12 files. Our problems occur either before or after translation in our tcl.
My basic question is: how does QDXi handle memory? Is there something we can add to our tcl to release memory? We use global variables in all our scripts. Will using metadata save memory?
Any feedback will be appreciated.
October 27, 2005 at 12:07 pm #57497 | Ryan Spires (Participant)
Brian, I just happened to be going through the older messages when I came across this one. Ironically, we are having similar issues intermittently with one of our tcl procs that we call from cron for some reporting. Occasionally we will get a core dump, and the message (if captured) that is returned is “unable to alloc …. bytes.”
I am in the process of rewriting the proc so that it is less memory hungry, but I am curious.
Did you ever find a resolution?
If not, does anyone have any ideas?
Thanks.
Ryan Spires
October 27, 2005 at 12:29 pm #57498 | Brian Davis (Participant)
I have received no information from this post. I would like to know which manages memory better, global variables or metadata. We get a core dump with our errors too, but it means nothing to me.
What kind of changes are you making to be more memory efficient?
Brian
October 27, 2005 at 2:10 pm #57499 | Ryan Spires (Participant)
We have a proc that is run from cron nightly to gather statistics about the number and types of messages being processed. This proc reads files that are saved as part of an inbound TPS for a number of threads. The proc currently loops through the list of files and pulls the entire contents of each file into memory.
I am considering rewriting the proc so that it reads the data a line at a time.
I am hoping this eliminates the issues I am having.
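Just as a rough sketch of what I have in mind (the proc name and the way a msg type gets identified here are made up):
Code:
# Sketch: walk each saved message file one line at a time instead of
# pulling the whole file into memory, keeping only a per-type counter.
proc tallyFiles {fileList} {
    foreach f $fileList {
        set fh [open $f r]
        while {[gets $fh line] >= 0} {
            # placeholder: however the msg type is really identified
            set type [lindex [split $line |] 0]
            if {[info exists counts($type)]} {
                incr counts($type)
            } else {
                set counts($type) 1
            }
        }
        close $fh
    }
    return [array get counts]
}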
Thanks,
Ryan
October 27, 2005 at 2:38 pm #57500 | Brian Davis (Participant)
You may have to split that file into smaller chunks. That is exactly what we do now. I am processing X12 files; I may have an enrollment file with several hundred members or a claim file with a couple thousand claims in it. We have a tcl that counts the number of lines and splits the file into smaller files if the size limit is exceeded.
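For what it's worth, the splitting logic is roughly along these lines (a simplified sketch, not our production tcl, and the names are invented):
Code:
# Sketch: break a large batch file into files of at most maxLines lines,
# written out as file.part0, file.part1, and so on.
proc splitFile {inFile maxLines} {
    set in    [open $inFile r]
    set part  0
    set count 0
    set out   [open ${inFile}.part$part w]
    while {[gets $in line] >= 0} {
        if {$count >= $maxLines} {
            close $out
            incr part
            set count 0
            set out [open ${inFile}.part$part w]
        }
        puts $out $line
        incr count
    }
    close $out
    close $in
}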
I believe our problem lies in processing all those smaller files. After we translate all those files, we have to merge them back into a single file. Hence the use of global variables and metadata. We have to maintain data values throughout the TPS stack in order to properly reassemble the file, name it correctly, and put the output in the proper directory.
When the process dies in the middle of processing all these files, we bounce the process, put the same file through again, and it processes fine. So it seems as if memory consumption is building up somewhere, and I’m trying to find out what’s doing it.
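One thing I'm considering is explicitly unsetting our globals once a batch has been fully reassembled, something along these lines (the variable names here are made up for illustration, and I realize unsetting them may not hand memory back to AIX, it just lets the interpreter reuse it):
Code:
# Sketch: drop the batch-level globals after the merged file is written,
# so the old batch's data is not still referenced on the next run.
proc releaseBatchState {} {
    global batchSegments batchFileName batchOutDir
    foreach v {batchSegments batchFileName batchOutDir} {
        if {[info exists $v]} {
            unset $v
        }
    }
}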
Any suggestions?
Brian
October 27, 2005 at 3:22 pm #57501 | Ryan Spires (Participant)
Fortunately we won’t have to worry about splitting the files, as I am only interested in maintaining a list of msg types and an incrementing count of occurrences. I won’t actually be sending the data of all those files, so I should be OK reading a line at a time into the same variable, identifying the data I need, and storing just that data in a list with its counter. At any given time I should not have more than a single message plus a list of msg types and counters in memory.
That is my thought anyways…
As you process your files, could you append to the existing output file instead of combining the files in memory? I think you might be able to conserve some memory there… just guessing.
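Something like this is what I mean by appending (just guessing at how it might look on your side; the file names are stand-ins):
Code:
# Sketch: tack each translated chunk onto the final output file as it is
# produced, instead of holding the whole merged batch in memory.
proc appendChunk {chunkFile finalFile} {
    set in  [open $chunkFile r]
    set out [open $finalFile a]
    while {[gets $in line] >= 0} {
        puts $out $line
    }
    close $in
    close $out
}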
Other thoughts and suggestions are welcome 😆
Ryan
October 27, 2005 at 3:27 pm #57502 | Steve Carter (Participant)
It sounds like you’re bumping into one of the ‘ulimit’ parameters. I had the same problem when I was building a rather large file for a resend. I wound up breaking it into several files. You can run ‘ulimit -a’ to get a list of your user resource limits. Here are my settings:
$ ulimit -a
time(seconds) unlimited
file(blocks) 2097151
data(kbytes) 131072
stack(kbytes) 98304
memory(kbytes) 98304
coredump(blocks) 2097151
nofiles(descriptors) 2000
Hope this helps.
Steve
March 22, 2007 at 6:29 pm #57503 | Max Drown (Infor) (Keymaster)
We are having this problem as well. Can anyone at Quovadx shed some light or offer some ideas on how to resolve this? From the log of one of the processes having this problem …
Code:
[prod:prod:INFO/0: STARTUP_TID:03/22/2007 14:23:59] Copyright 1993-2006, Quovadx Inc.
[prod:prod:INFO/0: STARTUP_TID:03/22/2007 14:23:59] CLOVERLEAF(R) Integration Services 5.4.1P
[prod:prod:INFO/0: STARTUP_TID:03/22/2007 14:23:59] Linked by root on host=(AIX lion 2 5 000FA26D4C00) at Fri Jun 9 11:24:59 2006 in /usr/work/jerickso/cloverrel/cloverleaf/engine/main (build 1)
[prod:prod:INFO/0: STARTUP_TID:03/22/2007 14:23:59] Started at Thu Mar 22 14:23:59 2007
[prod:prod:INFO/0: STARTUP_TID:03/22/2007 14:23:59] Engine process is 684468 on host saturndr
[prod:prod:INFO/1: dx49_cmd:03/22/2007 14:24:00] Msg space limit is 0
[prod:prod:INFO/0: dx49_cmd:03/22/2007 14:24:00] DiskQue Minimum # of Messages/que: 50
[prod:prod:INFO/0: dx49_cmd:03/22/2007 14:24:00] DiskQue Virtual Memory percent:75.000000
[prod:prod:INFO/0: dx49_cmd:03/22/2007 14:24:00] Applying EO config: ”
[prod:prod:INFO/0: dx49_xlate:03/22/2007 14:24:00] Applying EO config: ”
[prod:prod:INFO/0: dx49scmpn_in:03/22/2007 14:24:01] Applying EO config: ”
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:01] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:01] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:01] Received command: ‘dx49_xlate xrel_post’
[cmd :cmd :INFO/0: dx49_xlate:03/22/2007 14:24:01] Doing ‘xrel_post’ command with args ‘‘
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:01] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:01] Command client went away. Closing connection.
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:02] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:02] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:02] Received command: ‘dx49_xlate xrel_post’
[cmd :cmd :INFO/0: dx49_xlate:03/22/2007 14:24:02] Doing ‘xrel_post’ command with args ‘‘
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:02] Receiving a command
[cmd :cmd :INFO/0: dx49_cmd:03/22/2007 14:24:02] Command client went away. Closing connection.
unable to alloc 25165816 bytes… then core dump.
Code:
tcsh|saturndr> ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited
-- Max Drown (Infor)
March 23, 2007 at 7:46 pm #57504 | John Mercogliano (Participant)
Max, I have received this error recently when I had an infinite loop situation. If I read your log correctly, your translation has a post tcl proc that runs. You might look through that code for anything that could get stuck in a loop and make sure nothing prevents it from exiting.
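One cheap way to confirm it is a runaway loop is a temporary guard, roughly like this (the loop body here is just a stand-in):
Code:
# Sketch: cap the number of passes so a runaway loop logs and bails out
# instead of allocating until the engine dies.
set maxPasses 10000
set pass 0
set done 0
while {!$done} {
    if {[incr pass] > $maxPasses} {
        puts stderr "post proc guard: still looping after $maxPasses passes, bailing out"
        break
    }
    # the proc's normal per-pass work goes here; it should set done to 1
    # when the real exit condition is met
    set done 1
}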
John
John Mercogliano
Sentara Healthcare
Hampton Roads, VA