Cloverleaf document conversion interfaces into Epic Gallery

  • #122010
    David Barr
    Participant

      I’m working on a data conversion into Epic Gallery. I’ll be sending tens of millions of MDM^T02 messages containing base64-encoded PDFs. My initial tests showed that Epic was quite slow processing these messages, so we want to try setting up multiple interfaces and sending multiple messages at once.

      I’m thinking of having one inbound thread that reads the messages without the PDF content to minimize the message size in the recovery database. I could put the filename in the message and defer adding the PDF content until a prewrite TPS proc on the outbound thread. I’m hoping that would keep the recovery database small, but I’m not sure how Cloverleaf handles the message queues. My messages would go to the OB post-TPS queue before the PDF content is added, the prewrite would add the content, and then the message would go to the forward queue. I don’t know if there’s a limit on how many messages can go into the forward queue; hopefully only one message at a time would move there from the OB post-TPS queue until that message is acknowledged.
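
      Something like this is what I have in mind for the prewrite proc. It’s a rough, untested sketch: the proc name is mine, it assumes the file path was stashed in OBX-5 when the message was built, and it assumes the tcllib base64 package is available in the Cloverleaf Tcl:

          proc add_pdf_prewrite { args } {
              package require base64   ;# tcllib; assuming this install ships it

              keylget args MODE mode
              switch -exact -- $mode {
                  start    { return "" }
                  shutdown { return "" }
                  run {
                      keylget args MSGID mh
                      set out {}
                      foreach seg [split [msgget $mh] \r] {
                          if {[string range $seg 0 2] eq "OBX"} {
                              set fields [split $seg |]
                              set path [lindex $fields 5]   ;# OBX-5 holds the file path
                              set fh [open $path r]
                              fconfigure $fh -translation binary
                              set pdf [read $fh]
                              close $fh
                              # Swap the path for the base64-encoded PDF (no line wrapping)
                              set fields [lreplace $fields 5 5 [base64::encode -maxlen 0 $pdf]]
                              set seg [join $fields |]
                          }
                          lappend out $seg
                      }
                      msgset $mh [join $out \r]
                      return "{CONTINUE $mh}"
                  }
                  default { return "" }
              }
          }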

      To handle sending messages in parallel I plan on creating multiple outbound interfaces (probably 4-10 of them). One idea would be to route every message to all of the outbound threads and have a route TPS proc kill each message unless its counter maps to that specific route: take the counter modulo the number of routes to decide which route keeps the message.
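
      The route proc could look something like this rough sketch (untested; it assumes MSH-10 carries a plain numeric sequence number, and that each route passes its own index and the route count in as proc arguments):

          proc route_by_counter { args } {
              keylget args MODE mode
              switch -exact -- $mode {
                  start    { return "" }
                  shutdown { return "" }
                  run {
                      keylget args MSGID mh
                      # Per-route user arguments from the NetConfig, e.g. {INDEX 0} {COUNT 8}
                      keylget args ARGS.INDEX myIndex
                      keylget args ARGS.COUNT routeCount

                      # MSH-10 (message control ID) is index 9 after splitting on |
                      set msh [lindex [split [msgget $mh] \r] 0]
                      set seq [lindex [split $msh |] 9]

                      if {$seq % $routeCount == $myIndex} {
                          return "{CONTINUE $mh}"
                      }
                      return "{KILL $mh}"
                  }
                  default { return "" }
              }
          }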

      One problem is that over long periods some of the outbound threads would process more messages than others, and the queue sizes would get out of balance. Maybe the inbound thread could check the queue sizes and send each message to the smallest queue, either by setting a field in the message (or USERDATA) that the route TPS checks, or by explicitly routing the message in metadata.
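
      For the USERDATA variant, the inbound proc could stamp each message with a target route, and each route proc would compare that stamp to its own index with the same CONTINUE/KILL pattern as above. A rough sketch (the TARGET key name is made up, and since I haven’t found a way to read queue depths from inside a proc, this one just round-robins):

          proc stamp_target { args } {
              global rr_counter
              keylget args MODE mode
              switch -exact -- $mode {
                  start { set rr_counter 0; return "" }
                  run {
                      keylget args MSGID mh
                      # Round-robin across 8 routes; smallest-queue selection
                      # would go here if queue depths were readable
                      keylset ud TARGET [expr {$rr_counter % 8}]
                      incr rr_counter
                      msgmetaset $mh USERDATA $ud
                      return "{CONTINUE $mh}"
                  }
                  default { return "" }
              }
          }

      Each route proc would then read the stamp back with msgmetaget $mh USERDATA before deciding to CONTINUE or KILL.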

      I was also looking at the reference guides, and it looks like there are some “Disk Based Queueing” mechanisms that I may be able to use to handle large queues of large messages, but I haven’t tried that before.

      Has anyone done anything like this in terms of load balancing messages over multiple interfaces? Does anyone have suggestions?

      • #122011
        James Cobane
        Participant

          David,

          From the Epic documentation, in this scenario where we need to create DCS records for the documents (rather than update existing ones), you can either:

           1. Use a fully HL7-based option, where the messages contain the base64-encoded PDF in OBX-5

          2. Use a Kuiper utility to migrate the binary data to the blob server, and use HL7 to file the metadata to create/update the DCS records

           With option 1, the HL7 messages would contain the PDFs already base64-encoded in OBX-5. But it sounds like you may want to consider option 2; you should contact your Epic TS.

          Both of these options are described in a bit more detail here: Move Files to the Web BLOB

          Jim Cobane – Henry Ford Health

        • #122014
          David Barr
          Participant

             Do you know how the forward queue works? If it’s going to be a problem, I could probably make a loopback interface to another thread, wait for replies, and have the second interface add the PDF and forward the reply from Epic. But if the forward queue can’t grow large, I don’t want to bother with that.

          • #122018
            Jason Russell
            Participant

               Cloverleaf prioritizes inbound traffic, and if that traffic never stops it can slow the outbound side down too much. Throttling the inbound pickup gives your outbound routes time to process everything so they don’t get clogged up or unbalanced. We would never queue more than around a hundred thousand messages (ESPECIALLY PDFs!) at a time. Time it, see how long it takes to process them out completely, and set your pickup timer to match.

               You should be able to control that with a fileset-local thread, throttled so you only pick up x messages every y seconds (depending on whether they’re all in a single file or one message per file).
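
               If it’s one message per file, a directory-parse proc on the fileset thread is one way to cap the pickup per scan. Roughly like this (from memory and untested, and the batch size of 500 is arbitrary):

                   proc throttle_dirparse { args } {
                       keylget args MODE mode
                       switch -exact -- $mode {
                           start    { return "" }
                           shutdown { return "" }
                           run {
                               keylget args MSGID mh
                               # In a fileset directory-parse proc the message content is the
                               # list of files found this scan; keep the first 500 and let the
                               # rest get picked up on later scans
                               set files [msgget $mh]
                               msgset $mh [lrange $files 0 499]
                               return "{CONTINUE $mh}"
                           }
                           default { return "" }
                       }
                   }

               Combined with the thread’s scan interval, that gives you the x-messages-every-y-seconds throttle.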

              That’s a big load, and it’s going to take a few days to process it all through.

            • #122031
              David Barr
              Participant

                 We’re still working on getting option 1 to work (from your post above), and it’s going well.

                 I ran a test yesterday with 8 outbound threads on Cloverleaf and about 85,000 messages. I created the messages according to the Epic specs, but I only included the pathname of each document rather than its contents. When I was building the file, I also put a thread ID (CONV1-CONV8) in MSH-5. I resent all the messages to the inbound thread at once, with a filter on each route based on MSH-5 so that each outbound thread would process a separate set of messages. The outbound thread added the base64-encoded document contents to the message in a prewrite proc to delay bloating the messages in Cloverleaf for as long as possible. Also, every outbound thread ran in a separate process so the work of reading the document into the message wouldn’t block activity on other threads.
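
                 The route filter was just a proc comparing MSH-5 to the route’s thread ID, roughly like this (a simplified, untested sketch; the thread ID comes in as a per-route proc argument):

                     proc filter_by_msh5 { args } {
                         keylget args MODE mode
                         switch -exact -- $mode {
                             run {
                                 keylget args MSGID mh
                                 keylget args ARGS.THREAD myThread   ;# e.g. CONV3, set per route

                                 # MSH-5 (receiving application) is index 4 after splitting on |
                                 set msh [lindex [split [msgget $mh] \r] 0]
                                 if {[lindex [split $msh |] 4] eq $myThread} {
                                     return "{CONTINUE $mh}"
                                 }
                                 return "{KILL $mh}"
                             }
                             default { return "" }
                         }
                     }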

                On the Epic side, I set up 8 inbound interfaces to process the messages, and I used the system defaults (IC_IN_TABLE) to route the messages to separate interfaces based on MSH-5. Also, I had all my Cloverleaf threads sending to the same IP/port and there was a load balancer to spread out the work of saving to the web blob server.

                 For this test we were able to process about 1,000-3,000 messages per minute, which may be fast enough for our needs. We can add more threads, Interconnect servers, and Bridges interfaces if we need better performance.

                 Our Epic TS didn’t like option 2 because of the added complication of associating the files on the blob server with the HL7 messages.
