Site structuring

  • Creator
    Topic
  • #51749
    Vaughn Skinner
    Participant

    Does anyone have experience with many routes between the same two threads?  Are there any issues with message collisions?

    Currently we do xlate processing based upon MSH-6 and then use a different outbound thread which just saves the file to a directory.  We would like to change this to one inbound and one outbound thread and then use a tcl script to route the messages to the appropriate outbound directories.

    We are a lab which doesn’t have an unlimited license and would like to not need an outbound thread for each client.

Viewing 30 reply threads
  • Author
    Replies
    • #71562
      Levy Lazarre
      Participant

      Vaughn,

      If my understanding is correct, what you are trying to achieve is feasible. Make your outbound thread a Protocol:Upoc and in the “Write TPS” section, just place a Tcl script that writes the message as a file in the different directories. Such a script gives you complete control over the outbound file paths and names.

    • #71563
      Russ Ross
      Participant

      I have dynamically changed file names of an outbound thread on the fly, but I haven’t changed the directory name.

      Russ Ross
      RussRoss318@gmail.com

    • #71564
      James Cobane
      Participant

      Try:

      msgmetaset $mh DRIVERCTL "{FILESET {{FTPOBDIR $new_dirname} {OBFILE $new_name}}}"

    • #71565
      Vaughn Skinner
      Participant

      Thank you all for your feedback.  This will really help us.

    • #71566
      Levy Lazarre
      Participant

      The command suggested by James to set the outbound directory apparently only applies to “fileset-ftp” and therefore would not help someone who is using the “fileset-local” protocol.

      The only reason to implement the DRIVERCTL scheme is if you are using the “fileset-local” or “fileset-ftp” protocol and you must override the default filename that is in NetConfig. However, if you use a “Protocol:upoc” thread as the outbound thread, you bypass this process altogether because you set the file name and outbound directory directly in your “Write TPS”.

      Here is an example of how I dynamically set the file name and outbound directory per message using a UPOC thread. In this particular instance, the messages were coming so fast that even appending the time to the second wasn’t sufficient to produce unique file names, so I also appended a counter to achieve uniqueness:

      file name = prefix_datetime_counter

      Code:



      run {
          # 'run' mode always has a MSGID; fetch and process it
          keylget args MSGID mh
          set msg [msgget $mh]

          # File name prefix for pick tickets
          set prefix pick

          # File path
          set filepath "/home/hci/hsmpick"

          # The date to append to the file name
          set today [clock format [clock seconds] -format "%Y%m%d%H%M%S"]

          # To make sure that the file name is unique, we will append
          # a 4-digit number from a counter.

          # Counter file name
          set ctrfile pickticketcounter

          # Create the counter file in the process directory if it doesn't
          # exist already.  If it does, just retrieve the next value of the
          # counter.
          if {[file exists ${ctrfile}.ctr]} {
              set cnt [format %04u [CtrNextValue $ctrfile]]
          } else {
              CtrInitCounter $ctrfile 1 9999 1
              set cnt [format %04u [CtrNextValue $ctrfile]]
          }

          set filename ${prefix}_$today$cnt

          # Open the file for writing.
          set fh [open "$filepath/$filename" "w"]

          # Write the message to the file and close the file.
          puts -nonewline $fh $msg
          close $fh

          lappend dispList "CONTINUE $mh"
      }

      This scheme gives me complete control over the file name and destination. Here I used a fixed prefix, but since this is just a tps, you can parse the message and pull any data field you wish to use as a prefix (medical record number, account number, doctor’s id …).

      I also used a fixed destination, but you can enclose the “set filepath” command in an if clause and route the message according to the contents of a field in the message, therefore achieving dynamic routing per message.
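      A minimal sketch of that per-message routing idea, assuming the field of interest is MSH-6 (receiving facility); the facility values and directory names here are purely illustrative, not from the original post:

```tcl
# Hypothetical sketch: choose the outbound directory from MSH-6
# (receiving facility).  $msg holds the raw HL7 message.
set msh  [lindex [split $msg \r] 0]
set msh6 [lindex [split $msh |] 5]

switch -exact -- $msh6 {
    CLINIC_A { set filepath "/home/hci/out/clinic_a" }
    CLINIC_B { set filepath "/home/hci/out/clinic_b" }
    default  { set filepath "/home/hci/out/unknown" }
}
```

      The rest of the Write TPS then proceeds unchanged, writing $filename into whichever $filepath was selected.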

      If you wish, you can also put error trapping and notification around the “write” statement.

      Basically, you get your message from the inbound thread, you do your things in your Xlate, and you send the message to the Upoc thread which takes care of the dynamic naming and routing. All you have to do is specify your tcl script in the “Write TPS” section of the thread.

    • #71567
      Jim Kosloskey
      Participant

      Vaughn,

      If Fileset/Local use OBDIR in the metadata.
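      Following the pattern James showed for FTP, the fileset-local equivalent would presumably look something like this (a sketch only; the $new_dirname and $new_name variables are illustrative, and the exact keyed-list layout should be verified against your Cloverleaf release):

```tcl
# Hypothetical: override the outbound directory and file name for a
# fileset-local thread via the driver control metadata.
msgmetaset $mh DRIVERCTL "{FILESET {{OBDIR $new_dirname} {OBFILE $new_name}}}"
```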

      As in most situations with Cloverleaf(R) because of its flexibility there is more than one way to solve this problem.

      Personally I would use the Fileset option and manipulate the metadata if forced to do this.

      In reality, I would be hesitant to take this approach over multiple threads, but then, I am not constrained by thread count.

      It may be time in your business model to reconsider a thread count based license if that option is available or increase the thread count.

      If you have multiple site configurations one possibility is to migrate to Cloverleaf(R) 5.8 where you do not need to define thread pairs for cross site communication thus freeing up 2 precious threads per cross site integration.

      With the proposed solution, while it can be accomplished using multiple techniques, my concern would be two-fold:

      1. There could be a performance issue as you have essentially extended your single inbound queue all the way out to your outbound. If you used routing to multiple threads, then messages would be distributed to a new set of queues as they arrive and then be delivered as fast as each outbound destination can handle them. With this proposed architecture, all messages would be dependent on the slowest final destination.

      2. Ease of maintenance. With this proposed architecture the actual activity will be hidden inside a Tcl proc instead of being exposed in the GUI for analysis. If your shop is and always will be a one man shop with the same man forever that might not be a problem. If any new manpower became responsible for the integration, the learning curve would be steeper.

      All of the above said, you need to determine what will work best in your estimation for your environment given your situation.

      You have at least 3 viable alternatives – choose one.

      email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.

    • #71568
      Vaughn Skinner
      Participant

      Jim,

      Thank you for the feedback.  What we have now is one inbound routing thread that routes to about 30 outbound client threads.  The change would be to merge the 30 client outbound threads into one that uses tcl only for determining the output directory and uses the message trxid to determine the outbound directory.  It will be a little more obscure for viewing in the GUI, but easily seen in the inbound thread routing configuration where we have the xlates configured.

      Can an inbound thread handle multiple messages at the same time?  If not, then we will not have any performance decrease.

      Cost is a large factor in this decision.

      Thank you.

    • #71569
      Jim Kosloskey
      Participant

      Vaughn,

      No, an inbound thread cannot handle multiple messages at one time.

      Let us know how this turns out for you.

      I fully understand the cost issue.

      email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.

    • #71570
      Jeff Dinsmore
      Participant

      I have some questions related to this discussion…

      I’m new to Cloverleaf and I’m trying to untangle complex routing in some existing interfaces – lots of passing messages through socket-based tunnels from one process to another. It’s difficult to see what’s being sent to/from where.

      I’m told that inter-process threads are a bad idea because they can slow things down.

      Also the number of threads per process should be kept below 20 or so to keep them humming.

      True?

      Assuming so, I’m considering writing messages to files to move them between processes. I’m hoping this could simplify some complex routing.

      Is this a viable approach? Can Cloverleaf read/write large numbers of files effectively? Assuming all inbound messages are written as files that are then picked up by a thread that processes and sends them outbound, am I begging for trouble?

      Are there other, simpler and/or more efficient methods to accomplish this?

      Thanks,

      Jeff.

      Jeff Dinsmore
      Chesapeake Regional Healthcare

    • #71571
      Vaughn Skinner
      Participant

      Jeff,

      I had the same issue you did when entering into a pre-existing cloverleaf environment.  We changed everything to use files between sections of cloverleaf and then wrote a queue manager to move the files.  This allowed us to create archives between each cloverleaf section that were searchable with normal windows/unix tools.  This made it much easier to figure out what was going on.

      We found that we needed to use the tclproc checkDir to make sure that the inbound directory files were fully written.  Otherwise, cloverleaf could pick up files before they were completely written.
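      checkDir is a site tclproc, but the underlying idea can be sketched in plain Tcl: treat a file as fully written only when its size stops changing between two polls. The proc name and delay below are illustrative, not the actual checkDir implementation:

```tcl
# Hypothetical sketch: a file is considered "fully written" when its
# size is unchanged across a short delay.
proc fileIsStable {path {delayMs 500}} {
    if {![file exists $path]} { return 0 }
    set size1 [file size $path]
    after $delayMs
    set size2 [file size $path]
    return [expr {$size1 == $size2}]
}
```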

    • #71572
      Jeff Dinsmore
      Participant

      Vaughn,

      How many threads are you running?

      How many messages per day?

      Any performance improvement/degradation over the previous method?

      Jeff Dinsmore
      Chesapeake Regional Healthcare

    • #71573
      James Cobane
      Participant

      Jeff/Vaughn,

      With respect to the inter-process threads and converting these to file-based, you would be introducing more of a bottle-neck with I/O if you convert them.  I think the concern over inter-process threads is unwarranted/overstated.  Much of the reason for breaking out threads into configurations where IB & OB are in the same process is for granularity and modularity of the Xlate threads, rather than performance.  We have many cross-process threads (IB in process A, OB in process B) and run without issue.  I would not recommend changing threads that are communicating inter-process via tcp/ip connections to file-based.

      Also, with respect to the number of threads per process, there is not a hard & fast number that I’m aware of.  You need to look more at what is going on in the particular process (what types of procs, translations, volumes, etc.) to determine if you can add more threads or not.  One process may have only 2 threads but be doing a boat-load of work, while another process is doing very little with 30 threads.  All processes are not created equal.  🙂

      Thanks,

      Jim Cobane

      Henry Ford Health

    • #71574
      Jeff Dinsmore
      Participant

      It’s the inter-process tcp/ip tunneling that I’m not embracing. It may be expedient, but it seems to be convoluted and hard to follow.

      Perhaps it’s my newbie-ness showing, but I’m looking for an architecture that’s more intuitive.

      Jeff Dinsmore
      Chesapeake Regional Healthcare

    • #71575
      Levy Lazarre
      Participant

      I agree with Jim’s comments. I have used inter-process communication (one thread in process A sends to another thread in process B) for several years and have never experienced any problem.

      There were also instances where I didn’t want to use inter-process communication. I avoided it by just using an additional thread in process A, so the first thread in process A sends to the additional thread in process A which then sends via TCP/IP to the thread on process B that is just listening on a port. I feel that this implementation is cleaner and will give much better performance than writing and moving files where you have to worry about disk I/O, file contention, and scheduling issues.

    • #71576
      Michael Hertel
      Participant

      I agree with Jeff.

      If you’re using SAN disks, disk I/O shouldn’t be an issue either.

      There seem to be a few “old wives’ tales” lately.

      I’d like to know from Charlie or Rob Abbott if we should still be concerned about interprocess processing or not and max number of threads per process.

    • #71577
      Russ Ross
      Participant

      Jeffrey wrote:

      Quote:

      It’s the inter-process tcp/ip tunneling that I’m not embracing.
      It may be expedient, but it seems to be convoluted and hard to follow.

      Russ Ross
      RussRoss318@gmail.com

    • #71578
      Jeff Dinsmore
      Participant

      For the record, ours is a new Cloverleaf implementation. Just a handful of interfaces are live, so now is the time to get the architecture right.

      So, let’s clarify – just to be sure I’m understanding.

      Let’s assume that I have

      ProcessA

       inboundQ

       inboundR

       inboundS

      ProcessB

       inboundT

       inboundU

      ProcessC

       outboundX

      Inbounds Q,S,U need to go to outbound X.

      Is it acceptable from a performance standpoint to route directly from Q,S,U to X – or should I be routing them via TCP connection from each inbound process to X?

      Assuming both are viable, are there any practical limits to either of these approaches? Might either be more applicable dependent on message volume?

      Thanks!

      Jeff.

      Jeff Dinsmore
      Chesapeake Regional Healthcare

    • #71579
      Russ Ross
      Participant

      What I see as your first big bottleneck is that you have more than one inbound thread in a single process.

      This will cause what we call message starvation because when one inbound interface is busy the others will have to wait their turn, causing messages to queue on the sending system(s).

      Unfortunately, when you have many inbound interfaces going to one outbound interface this is also not so good and creates a similar message starvation bottleneck.

      If you are able to negotiate using more than one port on the foreign system so you can have one outbound interface being fed by one inbound interface, I believe that would be a worthwhile improvement.

      There are times when you might be forced to have many inbounds to one outbound and we do have one such case in our over one thousand interfaces.

      What I evolved to over time in our case to improve the message starvation bottleneck for a many to one integration was something like this:

      Code:


      inbound_site_1
      process_A (inbound_A -> tcp_jump_send_outbound_site_4)

      inbound_site_2
      process_B (inbound_B -> tcp_jump_send_outbound_site_4)

      inbound_site_3
      process_C (inbound_C -> tcp_jump_send_outbound_site_4)

      outbound_site_4
       process_D (tcp_jump_receive_inbound_A -> outbound_for_all_of_them)
       process_E (tcp_jump_receive_inbound_B ----^
       process_F (tcp_jump_receive_inbound_C ----^

      You will notice that in this integration, in outbound_site_4, I do have cross-process routing; a many to one integration is one of our very few endorsements of doing this.

      Since each inbound is in its own process, it reduces the message starvation and allows inbound interfaces to receive messages more independently of each other.

      This is a last resort for when negotiation with the vendor falls apart and you aren’t able to have one outbound port for each inbound source.

      We have over one thousand interfaces and I’ve only had to do one of these many to one integrations after explaining the downside.

      You will also find on-call support when troubleshooting will become easier when you make each inbound interface its own integration to its own outbound interfaces instead of one that is shared.

      Russ Ross
      RussRoss318@gmail.com

    • #71580
      Jeff Dinsmore
      Participant

      I’m not sure I understand the rationale for multiple sites – and the distinction between multiple sites and multiple processes within one site.

      Do we assume the sites to be on different physical servers?

      It seems to me it would be easier to visualize/manage with everything in one site.

      Jeff Dinsmore
      Chesapeake Regional Healthcare

    • #71581
      Russ Ross
      Participant

      In my example the sites are all on the same server.

      There can be some cases when this isn’t true but few and far between.

      I agree it is easier to visualize in one site but that preference is no longer important enough to be a deciding factor for us.

      We had grown to the point where it was no longer a choice and had to rework things to maximize message throughput.

      We move 10+ million messages a day on our production cloverleaf server altogether.

      A couple of nice benefits of more granular sites are fewer interfaces sharing the same recovery database; cloverleaf upgrades are now done for fewer interfaces at a time and only have 2 minutes of downtime the way we do it now.

      Russ Ross
      RussRoss318@gmail.com

    • #71582
      Jeff Dinsmore
      Participant

      So, if I’m reading this correctly…

      Let’s assume my message volume is not such that it mandates multiple sites.

      I should:

      A) Limit inbound threads to as few as possible per process to keep them accepting messages as quickly as possible.

      B) Limit cross-process routing as much as is practical.

      C) Use localhost tcp connections as the primary transport for inter-process communication.

      I’m assuming that it also makes sense to minimize total threads per process so that they can be serviced adequately.

      Assuming I use processes liberally, is there any point at which I should be concerned with having too many processes and should consider more threads per process to reduce the total process count? At what point would I need to consider multiple sites?

      I understand this is dependent on the message volumes and processing requirements of each individual interface, but I’m looking for a basic equation to start building this engine in a coherent, efficient, manageable, scalable way.

      Jeff Dinsmore
      Chesapeake Regional Healthcare

    • #71583
      Russ Ross
      Participant

      Jeff, you wrote

      Quote:

      I should:

      A) Limit inbound threads to as few as possible per process to keep them accepting messages as quickly as possible.

      B) Limit cross-process routing as much as is practical.

      C) Use localhost tcp connections as the primary transport for inter-process communication.

      I’m assuming that it also makes sense to minimize total threads per process so that they can be serviced adequately.

      I would say yes to what you wrote.

      As far as division of interfaces into various sites, put them where it makes sense, but avoid letting “easy to see” be such an influence unless you expect a small number of interfaces overall and into the future.

      If you break things into their own process in the same site it does make it easier to break them into their own site later on, so keep that in mind when laying out processes in a large site.

      Russ Ross
      RussRoss318@gmail.com

    • #71584
      Michael Hertel
      Participant

      One thing I’d add is that there is one lockmanager per site.

      In the past, that had been a bottleneck.

    • #71585
      Russ Ross
      Participant

      I agree with the one database per site issue, which is why I had said:

      Quote:

      A couple of nice benefits of more granular sites are fewer interfaces sharing the same recovery database;

      Russ Ross
      RussRoss318@gmail.com

    • #71586
      Jeff Dinsmore
      Participant

      I assume the lock manager is the lm process.

      What other site-level processes should concern me?

      I see hcimonitord and hcid as well, along with a Java process with a mighty long cmd line… are these of any interest?

      Jeff Dinsmore
      Chesapeake Regional Healthcare

    • #71587
      Richard Hart
      Participant

      We have about 70 production sites and these are set up with:

         one process per site with the same name as the site; and

         each site communicating with one application; and

         each site containing translations specific to the application.

      As has been discussed, monitoring and maintenance (if done ‘correctly’!) becomes a lot easier and if an application that receives messages has an issue and is unavailable for a long time, no other sites are affected.

      We use the PDL tcp_acknak for inter-site communication saving significant time over HL7 ACKing. Our thread naming standards also make it clear if the threads are inter-site or external communication.

      I should point out that we are on Unix and use scripts for monitoring, housekeeping, code migration and (for ops) a menu for site control.

    • #71588
      Jeff Dinsmore
      Participant

      I see that tcp_acknak PDL for site/site or process/process communication is much faster – and faster is good.

      Are there any drawbacks/risks to using this method over HL7 ACK/NAK-ing for inter-site or inter-process messaging?

      Any experiences good or bad with using the Multi-Server TCP/IP type?

      Jeff Dinsmore
      Chesapeake Regional Healthcare

    • #71589
      Richard Hart
      Participant

      Hi Jeff.

      We performed extensive testing on this PDL when we started using it around 2002 (Cl 3.3.x).  We have never lost messages!

      I’ve used the Multi-Server mode when load testing in test, sending multiple PAS outputs to the same port and it is slower – as you would expect – than sending to a single port, but it appeared to work very well.

    • #71590
      Bob Richardson
      Participant

      Greetings,

      We are an AIX Unix 5.3 TL11 shop running CIS5.6R2.

      I have been following this very interesting discussion on Cloverleaf implementation of sites/threads and balancing and would like to add the following:

      Isolate the original TCP inbound (or other protocol) and raw route the data to your outbounds, that is, distribute only with no additional processing to your multiple sites.   You may consider multi outbound bridges (jumps) here.

      Perform all TCL custom logic and Xlation activity in a dedicated (separate) process for the application(s) which will receive the final manipulated messages.  Avoid inter-process processing unless you think there is no other alternative.   We have found that if one inbound gets stuck processing state 1 messages then the process is busy trying to handle that traffic and all else gets starved.   This really becomes significant when you have an HTTP protocol thread using SSL (TclCurl) within a process.

      This will relieve the original inbound from the extra overhead involved in Cloverleaf’s core intensive Xlation processing.   We have found this to be efficient and a performance improvement for us.   We too have multiple sites with about 350 interfaces isolated on a single host right now and growing.

      I hope this helps you out.

      By the way, is that “acknak.pdl” available from Healthvision?   We would be very interested in using it in place of our existing MLP guy for our bridges (jump) connections.

      Thanks in advance!

    • #71591
      Michael Hertel
      Participant

      Here’s our version:

      Code:

      /* $Id: tcp_acknak.pdl,v 1.1 1995/02/19 22:55:58 streepy Exp $ */
      /*
      *
      */

      define driver tcp_acknak;
          version: "1.0";
      end driver;

      /* This driver manages the transmission of messages using the tcpip
      * with a 4-byte length encoded structure.  The length placed in the
      * encoding is EXCLUSIVE of the encoding bytes.
      *
       *   - Upon receiving an IB data message, a one character OB message is
       *    sent.  The message is the ASCII value ACK (0x06).
       *
       *  - Upon sending an OB data message, a one character message is
       *    expected in return.  The IB reply message is the ASCII value ACK
       *    (0x06).  The current driver does nothing with NAKs other than to
       *    print a message saying that a "Negative ack received."
      *
      * The phrase basic-msg recognizes this message format.  Once recognized,
       * the message data will be available from the 'data' field.
      */

      define phrase basic-msg;
         field msglen = fixed-array( 4, any );
         length-encoded { encoding:network(bytes:4), store-in: msglen } =
             begin
                 field data = variable-array( any );
             end;
      end phrase;

      define phrase ack-msg;
                 ;
      end phrase;

      define phrase nak-msg;
                 ;
      end phrase;

      /**********************************************************************
      * End of declarative section, TCL management functions start here.   *
      **********************************************************************/

      #{#

      # This is a standard ack/nak protocol; use the acknak style.

      hci_pd_msg_style acknak phrase:basic-msg
                             field:data
                             ackphrase:ack-msg
                             nakphrase:nak-msg
                             rtimeout:30000

      #}#

    • #71592
      Jim Kosloskey
      Participant

      Regarding HTTPS and blocking.

      That is true the HTTPS protocol is a blocking protocol.

      In our use we always make sure it is in its own process (usually just a localhost receiving thread and the HTTPS outbound thread if PUTting) even if that means we will have multiple processes in the same site.

      We also do the same thing with our ODBC integrations which are blocking.

      As to the inter-site, inter-process acks, another option is to use the TCP/IP protocol with length encoding and have an Inbound UPoC Tcl proc produce the ack.
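      A minimal sketch of such an inbound TPS proc, assuming a one-byte ACK (0x06) is the expected reply as in the tcp_acknak PDL above; the proc name is illustrative, and the exact msgcreate arguments and the OVER reply disposition should be checked against your Cloverleaf version:

```tcl
# Hypothetical inbound TPS sketch: reply with a one-byte ACK (0x06)
# over the inbound connection, then continue the data message.
proc sendTcpAck { args } {
    keylget args MODE mode
    switch -exact -- $mode {
        run {
            keylget args MSGID mh
            set ack [msgcreate \x06]
            return [list "OVER $ack" "CONTINUE $mh"]
        }
        default { return {} }
    }
}
```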

      email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.

