fileset-local lost messages


  • Creator
    Topic
  • #112198
    philip rake
    Participant

      I’m having an issue with messages going missing (3 out of 200). I’m using the fileset-local protocol to poll a local directory and grab messages (style=single, read=1, scan=3, max msgs=1, Directory Parse=fileset_dir_parse.tcl). Every so often it loses a message. The missing message has a placeholder in ShowMsgs and SMAT, but it’s empty. As a control, I have the foreign system writing the same files to another directory, and that’s how I know they’re not empty. If I replay them, they work. Each file contains one message with segments separated by CR; there’s no NL at the end. The time between messages varies, but they’re never closer than 5 seconds apart. Any ideas?

    • Author
      Replies
      • #112199
        Jim Kosloskey
        Participant

          What release of Cloverleaf?

          Do you know if the source system is still writing to the file while Cloverleaf is picking it up?

          Are there supposed to be multiple logical messages per file? (You have style set to single, so I think Cloverleaf will read the entire file as one message.) If there are supposed to be multiple logical messages, do you have an IB Tps proc to split them up?

          Is there any IB Tps code at all?

          Just some thoughts off the top of my head.

          email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.

        • #112200
          David Barr
          Participant

            This can happen if Cloverleaf picks up the file before it is done being created. For example, the FTP server (or some other process) opens a file for writing, Cloverleaf sees the empty file and sends it as a message (and probably deletes it), and then the other process writes data to the file and closes it; that data is lost because the directory entry on the filesystem has already been removed by Cloverleaf.

            There are a few ways of dealing with this. You can have the first process create the file in a different folder (or with a suffix that Cloverleaf will ignore) and then rename the file into the pickup location. Because moving a file (or changing its name) happens after the data is already in the file, Cloverleaf will never get an empty file.
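
            Just to illustrate, the sending side could do something like this in Tcl (it could be written in anything); the directory paths, filename scheme, and the $msgData variable holding the complete message are all made up:

            # hypothetical sketch: stage the file, then rename it into the pickup directory
            set pickupDir /hci/inbound       ;# directory that fileset-local polls (made-up path)
            set stageDir  /hci/inbound.tmp   ;# staging directory on the same filesystem
            set fname     msg_[clock seconds].dat

            # write the complete message to the staging location first
            set fh [open [file join $stageDir $fname] w]
            puts -nonewline $fh $msgData
            close $fh

            # a rename within one filesystem is atomic, so Cloverleaf never sees a partial file
            file rename [file join $stageDir $fname] [file join $pickupDir $fname]

            If a suffix filter is easier than a second directory, writing to name.tmp and renaming to the final name works the same way.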

            Another option is to write a Dir Parse proc that checks the modification time of the file and doesn’t return filenames for anything that’s been modified in the last 30 seconds or so.

            You could also use a managed file transfer (MFT) product instead of Cloverleaf for your file-based interfaces. MFT products have a lot of file-handling improvements that Cloverleaf doesn’t have. I’ve talked about this in other posts, so I won’t go into a lot of detail here. The product I like to use to pick up files has a field that lets you specify how long to wait for file modifications to stop before picking up the file. It also has simple GUI options for other things like filename patterns to match, file size limits to filter on, etc. These are all things you have to write your own code for in Cloverleaf.

          • #112205
            philip rake
            Participant

              Thank you both for your responses. It’s 5.8 and one message per file. Concerning the Directory Parse option David mentions, it’s the canned script, which does this:

              set msg_list [msgget $mh]
              set msg_list [split $msg_list " "]
              set msg_list [lsort -increasing $msg_list]
              msgset $mh $msg_list
              return "{CONTINUE $mh}"
              }

              time {
              # Timer-based processing
              # N.B.: there may or may not be a MSGID key in args
              }


              So I would put something in there to pull in and evaluate the file’s timestamp?

              Thanks again,

              Phil

            • #112210
              David Barr
              Participant

                Yeah, probably something like this:

                run {
                    # 'run' mode always has a MSGID; fetch the file list and the directory
                    keylget args MSGID mh
                    keylget args ARGS.DIR dir

                    set files [msgget $mh]
                    set outfiles {}

                    foreach filename $files {
                        # skip anything modified within the last 30 seconds
                        file stat $dir/$filename statvar
                        set age [expr {[clock seconds] - $statvar(mtime)}]
                        if { $age < 30 } {
                            echo not sending because modification time too recent
                        } else {
                            echo sending file
                            lappend outfiles $filename
                        }
                    }

                    # pass the filtered file list on (the surrounding Tps skeleton returns $dispList)
                    msgset $mh [join $outfiles " "]
                    lappend dispList "CONTINUE $mh"
                }

              • #112213
                Charlie Bursell
                Participant

                  I think David’s first suggestion would be much better if you have control over the file’s creation.

                  Put the file in a temporary location and then move the file when complete. The move is an atomic operation, so you would always get a complete file. Just a thought.

                • #112279
                  philip rake
                  Participant

                    I went a slightly different route with this and wrote a .tcl that errors the blank message, then a shell script that processes the hcidbdump -e output to resend the good message. It works great until I try to run it from Linux cron. It either can’t find hcidbdump, or I get an error (hcidbdump: error while loading shared libraries: libxerces-c.so.27: cannot open shared object file: No such file or directory). Is it possible to run hcidbdump from a cron-invoked script?
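
                    Roughly, the .tcl just errors anything with an empty body, something like this (a simplified sketch, assuming the standard Tps skeleton that returns $dispList):

                    run {
                        # an empty body means fileset-local picked the file up before it was written
                        keylget args MSGID mh
                        if { [string length [string trim [msgget $mh]]] == 0 } {
                            # send it to the error database so it can be resent later
                            lappend dispList "ERROR $mh"
                        } else {
                            lappend dispList "CONTINUE $mh"
                        }
                    }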

                    • #112281
                      Dustin Sayes
                      Participant

                        Try adding some information to your cron job.

                        Here is a sample cron job that runs a .sh, sourcing /etc/profile and the hci .profile and setting the root and site before running the script:

                        @midnight source /etc/profile; source /home/hci/.profile; setroot; setsite "yoursite"; "dir to your .sh"


                    • #112304
                      philip rake
                      Participant

                        That seems to have done the trick, thank you! I also added MAILTO="" so I can run it every 10 minutes without filling up the mailbox. Hopefully this all works. Thanks, all.
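
                        For reference, the crontab entry looks something like this (the script path is just a placeholder):

                        MAILTO=""
                        # run the resend script every 10 minutes
                        */10 * * * * source /etc/profile; source /home/hci/.profile; setroot; setsite "yoursite"; /path/to/your/script.sh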

                        • #112306
                          Dustin Sayes
                          Participant

                            Excellent. Nice job on getting your .sh to run and resolving your issue.

                            That said, I think it would be best to address the root cause, which sounds like it is at file-read time, rather than adding another routine to patch around it.
