FTP protocol setup–slowness issue


  • Creator
    Topic
  • #53708
    Jason Gross
    Participant

    Hello,

    We’re using Cloverleaf and the FTP Protocol to pull in a CSV file from a network server, make a very easy concatenation and then save the file to another server via the FTP Protocol. The file is about 5000 lines and 250KB. This all works fine.

    What I don’t like is that it takes 5-10 minutes for the output file to get all the lines written to it. We are reading it in as NL and outputting it as NL with append checked. Each line is written to the SMAT logs on both sides as a separate transaction; is there a way to have this logged as one transaction with 5000 lines? I would assume that would speed up processing as well.

    When I changed this from NL to Single it would process the header line and part of the 2nd line and that was it. Anybody have any ideas on what makes the write part of this so slow and whether these can be logged as a single transaction? Thanks!
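    (For anyone comparing the two record styles: “NL” framing hands each newline-terminated line to the engine as its own message, so a 5000-line file becomes 5000 transactions, while “single”/“EOF” framing delivers the whole file as one message. A rough Python sketch of the two framings, as illustration only; this is not Cloverleaf configuration or API, and the helper names are invented:)

```python
# Illustrates the difference between per-line (NL) framing and
# whole-file (single/EOF) framing of an inbound file.
# Hypothetical sketch only -- in Cloverleaf this is set in the
# protocol properties, not in user code.

def frame_nl(data: bytes) -> list[bytes]:
    """One message per newline-terminated line (5000 msgs for 5000 lines)."""
    return [line for line in data.split(b"\n") if line]

def frame_single(data: bytes) -> list[bytes]:
    """The entire file body becomes a single message."""
    return [data] if data else []

csv_blob = b'emp_num,badge_num\n"1685434",1024470\n"1976741",1152297\n'
print(len(frame_nl(csv_blob)))      # 3 messages, one per line
print(len(frame_single(csv_blob)))  # 1 message for the whole file
```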

Viewing 13 reply threads
  • Author
    Replies
    • #78683
      Jim Kosloskey
      Participant

      Jason,

      What are your read time and number-of-messages-read settings on the inbound connection?

      email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.

    • #78684
      Jason Gross
      Participant

      Read interval: 5

      Scan interval: 30

      Max messages: 2000000

      Same for the outbound, but I assume those can be cleared out there.

    • #78685
      Michael Hertel
      Participant

      I would suggest using cr/lf at both ends instead of nl or single.

    • #78686
      Jim Kosloskey
      Participant

      Try lowering your max messages; it could be that the load of 5000 all at once is impacting your engine.

      Try lowering it to something like 1000.

      email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.

    • #78687
      Jason Gross
      Participant

      Thanks for the suggestions guys.

      I changed it to 1000 but that didn’t help. Watching it now, it’s closer to 20 minutes to process. It shouldn’t be any kind of load on our server, but I’ll move it to our prod server that handles millions of messages/day and see if it processes faster.

      An example of the first few lines is below. Message 1 below is what I get in the xlate testing tool when I change it to “single” or “EOF”. I’m not sure why it keeps thinking the file ends after the first two fields on the first line of data.

      emp_num,badge_num,credit_limit,emp_name,allow_charge

      "1685434",1024470,293.00,"Flinstone, Fred",Y,Y

      "1976741",1152297,297.80,"Jetson, Judy",Y,Y

      "1977402",1118059,300.00,"Elroy, Lawrence",Y,Y

      MESSAGE 1

              emp_num: ch >emp_num<

            badge_num: ch >badge_num<

         credit_limit: ch >credit_limit<

             emp_name: ch >emp_name<

         allow_charge: ch >allow_charge”168584″<

      allow_charge_two: ch >101124470<

    • #78688
      Michael Hertel
      Participant

      You didn’t say earlier that you ran this through an Xlate.

      You said you pick up the file, concatenate and send.

      So you need to read the file as individual messages, not as a single blob, right?

      Per your example, each line ends with cr/lf.

      The first thing you need to do is get the end of line formats correct.

      You can use nl to read each record individually but you will have to preprocess or compensate for the cr’s.

      Since you are reading and writing individual records, there is probably no way to speed the processing up except to put your max records back up to a higher number than 5000 and possibly remove the “use recovery database” feature on the inbound thread.

      Just my opinion…
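      The CR compensation described above can be pictured with a small sketch: if records end in CR/LF but the reader splits on LF only, every record keeps a stray trailing CR, which ends up inside the last field. A minimal Python illustration (sample data invented for the example; not Cloverleaf code):

```python
# When records end in CR/LF but the reader splits on LF only,
# each record keeps a trailing CR that pollutes its last field.
raw = (b'"1685434",1024470,293.00,"Flinstone, Fred",Y,Y\r\n'
       b'"1976741",1152297,297.80,"Jetson, Judy",Y,Y\r\n')

naive = raw.split(b"\n")[:-1]             # stray \r left on each record
fixed = [r.rstrip(b"\r") for r in naive]  # strip the trailing CR

print(naive[0][-1:])   # b'\r'  -- CR glued onto the last field
print(fixed[0][-1:])   # b'Y'   -- clean record
```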

    • #78689
      Jim Kosloskey
      Participant

      Apparently this is an export from something like Excel, and the first record is a header record. That record looks flawed, as it does not have as many elements as the data records which follow.

      That is possibly related to your question about the first message.

      As Michael indicated, you need to accommodate the additional end-of-record character.

      This can easily be done by adding a corresponding field to your layout (VRL I am guessing).

      This may or may not affect the performance you are seeing.

      I have done some similar things and do not see the length of time you are experiencing. First make sure all of your specifications line up for the file being received; then, using a smaller file, turn up the engine output and carefully view the log for anything that looks like a bottleneck.

      I would also do some tests on the filesystem that is your outbound, to see if it is just very slow I/O (not high probability, I would think).

      email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.

    • #78690
      Jason Gross
      Participant

      Thanks a lot guys will continue to look at it.

      I recognize there isn’t a header there, but they said they don’t need it. The VRL setup does have all the fields identified. The inbound is read immediately but the xlate processing and output is the slow part. I’ve pasted the file to the file system and it’s almost immediate so it’s not I/O for the file share.

      I’m not great with all of this; we typically handle HL7 messages with xlates and tcl procs using TCP/IP, not VRLs and FTP. I’m not following how to account for the characters. I understand there’s a CR/LF at the end of each line, but how do you account for or do anything with that since you can’t see it? Since what I have works in 20 minutes (which isn’t a problem), we’ll go ahead with it as is for now.
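      One way to “see” the CR/LF even though it is invisible in an editor: dump a few lines of the file with their terminators intact, e.g. with Python’s repr() (or `od -c` on UNIX). A quick sketch with made-up sample data:

```python
# Reading in binary mode preserves the line terminators, so
# repr() shows exactly which control characters end each record.
sample = b'emp_num,badge_num\r\n"1685434",1024470\r\n'

for line in sample.splitlines(keepends=True):
    print(repr(line))
# b'emp_num,badge_num\r\n'
# b'"1685434",1024470\r\n'
```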

    • #78691
      Mary Bodoh-Stone
      Participant

      I am having this same issue. I am working with lab charges, and about 20,000 charges are dropped at about 1:30am. I have many xlates on the front end with several pre-procs (creating a charge file), and then another xlate with no pre-procs; it takes every charge we get and generates a tab-delimited file on another server via the FTP protocol, which we then feed into a Crystal report. This file (with no pre-procs) seems to take forever to process; some nights the charge file is done 2 hours before the Crystal file, which has taken 6 hours.

      In my FTP protocol fileset options I have the read interval set to 3 and the max messages set to 2000. The outbound style is nl, which I’m pretty sure I needed for the Crystal file. In my charge file FTP options I have the read interval set to 5, max messages 2000, and outbound file style hl7.

      How can we speed this up?  Thanks.

    • #78692
      Jason Gross
      Participant

      Hi Mary, I guess we could just message each other. :)

      I have not found a way to speed it up; I think it just takes that long to process the items individually. In a way it makes sense: if we had to replay 5000 normal HL7 messages, it would also take that long to process them individually through the xlates, update the file, etc.

    • #78693
      James Cobane
      Participant

      You might want to try throttling your read interval more; bump it up to 20 with the max messages at 1000. The engine gives priority to consuming messages over translation, so with a configuration of ‘Read Interval 5’, ‘Max Messages 1000’ you’re still going to take in all 5000 messages within a 25-second period and then bottleneck in translation.

      Jim Cobane

      Henry Ford Health
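      The arithmetic behind this suggestion can be sketched in Python (message counts taken from the thread): with max messages at 1000, consuming 5000 records takes ceil(5000/1000) = 5 read cycles, so the read interval directly sets how fast the translation queue fills.

```python
import math

def intake_seconds(total_msgs: int, read_interval_s: int, max_per_read: int) -> int:
    """Rough time for the engine to consume every inbound message."""
    return math.ceil(total_msgs / max_per_read) * read_interval_s

# Original settings: everything is queued within ~25 s,
# then translation becomes the bottleneck.
print(intake_seconds(5000, read_interval_s=5, max_per_read=1000))   # 25

# Throttled settings: intake is spread over ~100 s, pacing translation.
print(intake_seconds(5000, read_interval_s=20, max_per_read=1000))  # 100
```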

    • #78694
      Mary Bodoh-Stone
      Participant

      Thanks James.

      What do you suggest for the read/response timeout on the FTP Options tab?  I have it set to 3.  

      I also had the Close Connection After Write checked which I think was wrong and slowed us down.

    • #78695
      James Cobane
      Participant

      Mary,

      I have typically left the default values for the Read/Response timeout.

      Thanks,

      Jim Cobane

      Henry Ford Health

    • #78696
      Corey Lewis
      Participant

      Does the Cloverleaf engine have a limit to the number of FTPs it can send out at one time? I ask because we had a process that was FTP’ing records to a remote system. I resent messages to be FTP’d to the same remote system, and my messages didn’t start processing until the previous messages being FTP’d were finished.

      Just wondering where to start looking: on the CL engine or the remote system?

      Corey

  • The forum ‘Cloverleaf’ is closed to new topics and replies.
