I don’t believe chaining by itself will afford any performance enhancement.
I have used it when I needed to ‘stack’ Xlates. While I saw no obvious degradation, I also observed no improvement in throughput.
Branching, as I understand it, is really intended for the situation where you want all inbound messages ‘normalized’ before any real transformation takes place. In that situation I think the potential to improve performance exists. I am not so sure that fits your scenario, though.
I personally have never deployed branching other than in an experimental test.
Perhaps the issue is better addressed at the source system. Why is a ‘dump’ of messages being sent into a real-time environment in the first place?
Are the messages actually of use downstream?
If they are needed downstream, are identifiable, and are not time-dependent, then routing them unfiltered to a Fileset Protocol, to be picked up later under controlled circumstances where throttling can be applied, might improve things.
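Just to illustrate the pacing idea (this is not Cloverleaf configuration, only a generic sketch of a throttled pickup): assume the unfiltered messages have been written out as individual files into a queue directory, and that they can be drained later at a controlled rate. The directory path, the forward() stub, and the rate are all hypothetical placeholders.

import time
from pathlib import Path

QUEUE_DIR = Path("/ardata/queued_msgs")   # hypothetical directory the unfiltered messages were routed to
RATE_PER_SEC = 10                         # assumed acceptable downstream arrival rate

def forward(payload: bytes) -> None:
    # Placeholder for whatever actually delivers the message downstream.
    print(f"forwarding {len(payload)} bytes")

def drain_queue() -> None:
    # Process oldest files first so the original arrival order is roughly preserved.
    for msg_file in sorted(QUEUE_DIR.glob("*.msg"), key=lambda p: p.stat().st_mtime):
        forward(msg_file.read_bytes())
        msg_file.unlink()                 # remove the file once delivered
        time.sleep(1.0 / RATE_PER_SEC)    # throttle: pace deliveries to the target rate

if __name__ == "__main__":
    drain_queue()

The same effect could presumably be achieved inside the engine by controlling when and how fast the fileset thread picks the files up; the point is simply that the burst is absorbed to disk and replayed at a rate the downstream can tolerate.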
The best outcome would be if, during that time, the sending system could ‘pace’ the messages to a more acceptable arrival rate, or not send them at all (if they are not needed downstream).
Others may have faced this same situation and hopefully will provide their insight as to how they handled it.
email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.