I’m working on a data conversion into Epic Gallery. I’ll be sending tens of millions of MDM^T02 messages containing base64-encoded PDFs. My initial tests showed that Epic was quite slow processing these messages, so we want to try setting up multiple outbound interfaces and sending several messages at once.
I’m thinking of having one inbound thread that reads the messages without the PDF content, to keep the message size in the recovery database small. I could put the PDF’s filename in the message and defer adding the actual content until a prewrite TPS proc on the outbound thread. I’m not sure, though, how Cloverleaf handles the message queues in that case. The messages would sit in the OB post-TPS queue before the PDF content is added, the prewrite proc would add the content, and then the message would go to the forward queue. I don’t know if there’s a limit to how many messages can sit in the forward queue; hopefully only one message at a time would move out of the OB post-TPS queue until that message is acknowledged.
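Something like this untested sketch is what I have in mind for the prewrite proc. It assumes tcllib’s base64 package is available in the Cloverleaf Tcl, HL7 default delimiters, and that the inbound side left the file’s path in OBX-5 where the encoded document belongs; the proc name and field location are just placeholders:

    proc xlt_add_pdf { args } {
        # Prewrite TPS sketch: swap a filename placeholder in OBX-5 for
        # the base64-encoded contents of that file. Field location,
        # delimiters, and proc name are assumptions.
        package require base64

        keylget args MODE mode
        set dispList {}

        switch -exact -- $mode {
            start { }
            run {
                keylget args MSGID mh
                set out {}
                foreach seg [split [msgget $mh] \r] {
                    if {[string range $seg 0 2] eq "OBX"} {
                        set fields [split $seg |]
                        set fname [lindex $fields 5]
                        if {[file exists $fname]} {
                            set fh [open $fname r]
                            fconfigure $fh -translation binary
                            set pdf [read $fh]
                            close $fh
                            # Encode and strip the line wrapping tcllib adds.
                            set b64 [string map [list \n {}] [base64::encode $pdf]]
                            lset fields 5 $b64
                            set seg [join $fields |]
                        }
                    }
                    lappend out $seg
                }
                msgset $mh [join $out \r]
                lappend dispList "CONTINUE $mh"
            }
            time { }
            shutdown { }
        }
        return $dispList
    }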
To send messages in parallel I plan on creating multiple outbound interfaces (probably 4-10 of them). One idea would be to route the messages to all outbound threads and have a route TPS proc kill each copy unless a per-message counter, taken modulo the number of routes, matches that specific route.
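For the modulus idea, the route proc on each OB route might look roughly like this (untested). It assumes every route sees a copy of every message, and that each route configures this same proc with its own slot through the TPS user arguments, which I believe arrive under the ARGS key:

    proc xlt_rr_route { args } {
        # Route TPS sketch: round-robin fan-out by modulus. Each route is
        # configured with e.g. {SLOT 0 NROUTES 4} ... {SLOT 3 NROUTES 4}.
        global rr_count

        keylget args MODE mode
        set dispList {}

        switch -exact -- $mode {
            start { }
            run {
                keylget args MSGID mh
                keylget args ARGS procArgs
                keylget procArgs SLOT slot
                keylget procArgs NROUTES nroutes

                # One counter per slot; each ticks once per inbound message,
                # so every slot agrees on the message's sequence number.
                if {![info exists rr_count($slot)]} { set rr_count($slot) 0 }
                incr rr_count($slot)

                if {$rr_count($slot) % $nroutes == $slot} {
                    lappend dispList "CONTINUE $mh"
                } else {
                    lappend dispList "KILL $mh"
                }
            }
            time { }
            shutdown { }
        }
        return $dispList
    }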
One problem is that over long periods some outbound threads would process more messages than others, and the queue sizes would drift out of balance. So maybe the inbound thread could check the queue sizes and send each message to the smallest queue, either by setting a field in the message (or in USERDATA) that the route TPS checks, or by explicitly routing the message through metadata.
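For the tagging variant, the inbound thread could stamp a target in USERDATA and the route procs would only have to compare. How to pick the target is the open question; plain round-robin is shown below, but that step could instead consult queue depths (e.g. the MSI statistics that hcimsiutil reads), and the route count is hardcoded as an assumption:

    proc tps_ib_set_target { args } {
        # Inbound TPS sketch: tag each message with a destination route so
        # the route procs only compare rather than compute.
        global ib_count

        keylget args MODE mode
        set dispList {}

        switch -exact -- $mode {
            start { set ib_count 0 }
            run {
                keylget args MSGID mh
                set nroutes 4                    ;# assumed number of OB routes
                incr ib_count
                # Round-robin here; a smarter version could pick the route
                # with the shallowest queue instead.
                set target [expr {$ib_count % $nroutes}]

                keylset ud TARGET $target
                msgmetaset $mh USERDATA $ud
                lappend dispList "CONTINUE $mh"
            }
            time { }
            shutdown { }
        }
        return $dispList
    }

    # Matching route TPS (run mode only, same skeleton as the modulus
    # version above): keep the copy whose SLOT matches the tag.
    #   keylget args MSGID mh
    #   keylget args ARGS procArgs
    #   keylget procArgs SLOT slot
    #   keylget [msgmetaget $mh USERDATA] TARGET target
    #   if {$target == $slot} { lappend dispList "CONTINUE $mh" } \
    #   else                  { lappend dispList "KILL $mh" }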
I was also looking at the reference guides, and it looks like there are some “Disk Based Queueing” mechanisms that I may be able to use to handle large queues of large messages, but I haven’t tried that before.
Has anyone done anything like this in terms of load balancing messages over multiple interfaces? Does anyone have suggestions?