ftp: Want to log start and end of file processing
This topic has 5 replies, 4 voices, and was last updated 14 years, 8 months ago by Michael Hertel.
January 11, 2010 at 11:03 pm #51477 - Jennifer Hardesty (Participant)
We use fileset-ftp for batch billing, and some of our processes pick up multiple files at a time. Our current Tcl proc echoes the names of the files it finds at the scheduled time, but there is no logging that notes when the process starts or finishes an individual file, and we would like that as a validation step. Does anyone know how this can be done?
January 11, 2010 at 11:54 pm #70455 - Jim Kosloskey (Participant)
I have never done what you want (I am betting others may have) but I think this should be possible.
What you will need is some Tcl.
For the Fileset protocols there are keyed list entries in the USERDATA metadata field of each message. Some of these entries describe the inbound (ib) fileid (and, on later releases of Cloverleaf(R), the ib directory as well).
One could check each message's ib fileid and compare it to a global variable. The global variable would be set to null at startup. Then, if the ib fileid in a message differs from the global, one knows a new file has started; logging could be done and the global updated to the new fileid.
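A rough sketch of that idea as a run-mode tps proc (untested; the key that actually holds the ib fileid in USERDATA varies by release, so dump a live message's metadata first, and IB_FILENAME below is just a placeholder):
Code:
proc log_new_file { args } {
    keylget args MODE mode
    global lastIbFile
    set dispList {}

    switch -exact -- $mode {
        start {
            # Initialize the tracking global to "null" at thread startup
            set lastIbFile ""
        }
        run {
            keylget args MSGID mh
            if {![info exists lastIbFile]} { set lastIbFile "" }
            # USERDATA is a keyed list; IB_FILENAME is a placeholder for whatever
            # key holds the ib fileid on your release -- verify with a metadata dump
            set userdata [msgmetaget $mh USERDATA]
            set ibfile ""
            catch {keylget userdata IB_FILENAME ibfile}
            if {![cequal $ibfile $lastIbFile]} {
                echo "Started new inbound file '$ibfile' at [clock format [clock seconds]]"
                set lastIbFile $ibfile
            }
            lappend dispList "CONTINUE $mh"
        }
        time {}
        shutdown {}
    }
    return $dispList
}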
I would probably use the delete UPoC to indicate end of file: when that UPoC is given control, you know you are finished with that file and it can be deleted. In order for this technique to work, I would place a file in the directory that is never deleted and always sorts last in the retrieval list (either a name that naturally sorts last, or use the dirparse UPoC to make sure it is last). I would prefer this UPoC location and technique because it also takes care of the last real file (since Cloverleaf(R) currently does not indicate that it is done processing the inbound files scanned).
So in the delete UPoC you could log the end of the file named in the deletion list (really only one file at a time), unless it is that last-sorting known file name, which would indicate the end of all files for this directory scan.
I would also make sure the permissions on the 'end of files' file are restrictive enough that it would have to be purposely deleted.
You could probably also get creative and check for the presence of that file in your directory scan UPoC proc and abort if it is not found.
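Something like this in the delete UPoC, for example (sketch only; the sentinel name zzzz_last_file is made up, and disposition handling in the delete UPoC may differ on your release):
Code:
proc log_file_done { args } {
    keylget args MODE mode
    set dispList {}

    switch -exact -- $mode {
        start {}
        run {
            keylget args MSGID mh
            # In the delete UPoC the message content is the file name(s) about to
            # be deleted -- normally just one per call
            foreach f [msgget $mh] {
                if {[cequal [file tail $f] "zzzz_last_file"]} {
                    # Sentinel that always sorts last and whose permissions keep it
                    # from actually being deleted: reaching it means the scan is done
                    echo "End of directory scan at [clock format [clock seconds]]"
                } else {
                    echo "Finished file '$f' at [clock format [clock seconds]]"
                }
            }
            lappend dispList "CONTINUE $mh"
        }
        time {}
        shutdown {}
    }
    return $dispList
}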
email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.
January 12, 2010 at 6:53 pm #70456 - Michael Hertel (Participant)
We use tpsDirParse and set the debug argument at a sufficient level.
Code:
######################################################################
# Name:      vm_tpsDirParse_argv
# Purpose:
# UPoC type: tps
# Args: tps keyedlist containing the following keys:
#       MODE    run mode ("start", "run" or "time")
#       MSGID   message handle
#       ARGS    user-supplied arguments:
#
#
# Returns: tps disposition list:
#
proc vm_tpsDirParse_argv { args } {
    keylget args MODE mode
    global HciSite HciConnName filenames
    set procname [lindex [info level [info level]] 0]
    set module "$HciSite/$HciConnName/$procname: "
    set dispList {}

    switch -exact -- $mode {
        start {
            # Perform special init functions
            # N.B.: there may or may not be a MSGID key in args
        }

        run {
            keylget args MSGID mh
            keylget args CONTEXT ctx
            keylget args ARGS.DEBUG debug
            keylget args ARGS.PATTERN pattern

            if {![cequal $ctx "fileset_ibdirparse"]} {
                vmbatch_log_event "$module ERROR $procname used in wrong context $ctx"
                vmbatch_log_event "$module ERROR Context should be fileset_ibdirparse"
                return "{KILL $mh}"
            }

            if {$debug >= 1} {echo "$module: Started [clock format [clock seconds] -format {%D %T}]"}
            if {$debug >= 3} {echo "$module: running with args '[info level 0]'"}

            set fileList [msgget $mh]
            if {$debug >= 3} {echo "$module: fileList='$fileList'"}

            set newFileList ""
            # compare each file name with the pattern
            foreach fileName $fileList {
                if { [regexp -- $pattern $fileName] } {
                    lappend newFileList $fileName
                    if {$debug >= 1} {echo "$module: Processed '$fileName'"}
                } else {
                    if {$debug >= 2} {echo "$module: Skipped '$fileName'"}
                }
            }

            set newFileList [lsort $newFileList]
            set filenames $newFileList
            if {[cequal pmc [crange $HciConnName 0 2]]} {
                set filenames [lvarpop newFileList 0]
                set newFileList $filenames
            }
            msgset $mh $newFileList
            if {$debug >= 3} {echo "$module: newFileList='$newFileList'"}
            if {$debug >= 1} {echo "$module: Ended [clock format [clock seconds] -format {%D %T}]"}

            lappend dispList "CONTINUE $mh"
        }

        time {
            # Timer-based processing
            # N.B.: there may or may not be a MSGID key in args
        }

        shutdown {
            # Doing some clean-up work
        }
    }

    return $dispList
}
January 13, 2010 at 5:49 pm #70457 - Russ Ross (Participant)
One thing I want to make you aware of, which I found out the hard way:
The time of completion for the inbound thread is not the same as the time of completion for the outbound thread.
In my case, I had pharmacy charges in excess of $4 million per file coming in where each message had many repeating FT1 segments.
It took the outbound thread 10 times longer to finish processing the file than the inbound thread.
For example, the inbound file was done processing and deleted in one minute, but it took 10 minutes for all the messages to flow out the outbound thread because one inbound message exploded into many messages to the outbound system.
This was a problem for me since I had put my intelligence on the inbound thread route to determine when the file was done, and the batch process was automatically shut down a minute later.
The method I had designed was to append an EOF record to the batch file and have a NetConfig route invoke a shutdown action one minute after the EOF record was encountered.
To correct my flaw I should have taken the EOF action later downstream, in the send-OK TPS stack.
Since I was pressed for time (sound familiar?), I simply preprocessed the file to have one FT1 segment per message so inbound and outbound processing time would be almost the same, which ended up working fine in that case.
It has been on my wish list for 10 years now to really fix it, but it seems the more you can do, the more work flows to you, so there is even less chance to spend time on it these days.
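For what it's worth, the downstream version would be something along these lines in the outbound thread's send-OK procs (sketch only; the ZEF marker segment is made up, and how you actually trigger the shutdown is up to you):
Code:
proc log_eof_sent { args } {
    keylget args MODE mode
    set dispList {}

    switch -exact -- $mode {
        start {}
        run {
            keylget args MSGID mh
            # Runs only after a message has actually been delivered outbound.
            # "ZEF" is a made-up marker segment used as the appended EOF record.
            if {[regexp {(^|\r)ZEF} [msgget $mh]]} {
                echo "EOF record delivered outbound at [clock format [clock seconds]]"
                # Safe point to kick off the batch shutdown (alert, hcicmd, etc.)
                # instead of keying off the inbound side.
            }
            lappend dispList "CONTINUE $mh"
        }
        time {}
        shutdown {}
    }
    return $dispList
}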
Russ Ross
RussRoss318@gmail.com
January 13, 2010 at 8:16 pm #70458 - Jennifer Hardesty (Participant)
Michael Hertel wrote:
Code:
if {[cequal pmc [crange $HciConnName 0 2]]} {
Michael — Where does “pmc” come from? What is it meant to represent? Is it another global?
Russ — I have to admit that the reason I am interested in adding this type of logging functionality to our batch billing processes is similar to what you described.
Last week we had one instance where, during a manual intervention, the log noted that four files had been found for processing, and the user/sys admin for the application went ahead and deleted the files, assuming they had already been picked up by our application. As a result, only two files were actually picked up and processed, and we had to redo the whole thing manually.
Then, later that day, the same user/sys admin did not delete the files after the second run, so when the real batch billing run occurred the files were still out there. Although I stopped the threads, I was unable to determine just by eyeballing things which files had actually been reprocessed and thus duplicated.
(Of course, part of the issue is that Cloverleaf’s sign-on doesn’t have permission to delete files on that server, though it is supposed to.)
Part of the fallout from all of this is that we need more detailed logging to show exactly what Cloverleaf is doing and when, and part of it is that we need to somehow have better control over our users. Ha!
January 13, 2010 at 8:23 pm #70459 - Michael Hertel (Participant)
pmc = the name of a thread we used. You can ignore / delete that portion.