Use Recovery database – Lite?


  • Creator
    Topic
  • #51196
    Bob Moriarty
    Participant

    Assuming that Cloverleaf is a bit of a disk I/O hog because of the many writes to the recovery database, and also assuming that SAN/WAN latencies could be problematic, perhaps more so in a virtualized environment, is there any way to limit the number of writes to the recovery db? (Aside from deselecting the option.) Perhaps just write messages in a specified subset of recovery database states to the recovery database?

    Or, the broader question – how to decrease disk I/O?

    This is probably covered in an advanced class – which I have not taken. I get all my advanced info from this list.  😀

Viewing 4 reply threads
  • Author
    Replies
    • #69160
      Jim Kosloskey
      Participant

      Bob,

      SMAT and logging contribute to the I/O load.

      You could reduce your use of those.

      If you can tolerate it, reduce the number of SMAT files (that will reduce the arrival rate to the Disk subsystem).

      Make sure you have a minimum log size set (that will reduce the arrival rate to the Disk subsystem).

      Cycle your logs and SMAT files routinely (that will reduce the consumption of Disk space).
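      The cycling step above is usually scripted and scheduled. A minimal sketch of building such commands, where the process/thread names are hypothetical and the `"<thread> cycle_save"` hcicmd syntax is an assumption based on common Clovertech usage (verify it against your Cloverleaf release):

```python
# Sketch: build the hcicmd invocations a nightly cron job might run to
# cycle each thread's SMAT file. The "cycle_save" command string is an
# assumption -- check your Cloverleaf version's documentation.

def cycle_commands(process, threads):
    """Return one hcicmd argument list per thread to cycle its SMAT file."""
    return [
        ["hcicmd", "-p", process, "-c", f"{thread} cycle_save"]
        for thread in threads
    ]

# Build (but do not execute) the commands for a hypothetical process.
for cmd in cycle_commands("adt_proc", ["adt_in", "adt_out"]):
    print(" ".join(cmd))
```

      Keeping this in a script makes it easy to run the same cycle from cron on a routine schedule.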

      email: jim.kosloskey@jim-kosloskey.com

    • #69161
      Russ Ross
      Participant

      Also, avoid doing static raw routes of all messages when possible; route only the message types that are needed by the downstream interface, and filter them out as far upstream as possible.

      In our case I was able to reduce unnecessary message flow by a factor of about 4 by doing this.

      The huge resource waste from static raw routes of everything, which I encountered when I arrived at MDACC, had caused us to exceed the capacity of our Cloverleaf server.

      I was able to get things under control and survive several more years on undersized hardware by working more efficiently.

      Note: If a source system sends messages that don’t get routed anywhere, Cloverleaf will issue an error, so I static raw route those messages to hcitpsmsgkill.
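      The filtering idea can be sketched outside the engine like this. The wanted-type list and sample messages are hypothetical; the only real convention used is that MSH-9 carries the HL7 message type:

```python
# Sketch: pass through only the HL7 message types a downstream system
# actually consumes, discarding the rest (the engine-side equivalent of
# routing unwanted types to hcitpsmsgkill). Names are illustrative.

WANTED_TYPES = {"ADT^A01", "ADT^A03", "ORU^R01"}  # hypothetical subset

def message_type(msg, field_sep="|"):
    """Extract MSH-9 (message type) from a raw HL7 message."""
    msh = msg.split("\r")[0].split(field_sep)
    return msh[8]  # MSH-9 lands at index 8 when splitting on the separator

def route(msg):
    """Return the message if a downstream system wants it, else None."""
    return msg if message_type(msg) in WANTED_TYPES else None

a01 = "MSH|^~\\&|SRC|FAC|DST|FAC|202401010000||ADT^A01|1|P|2.3"
a08 = "MSH|^~\\&|SRC|FAC|DST|FAC|202401010000||ADT^A08|2|P|2.3"
print(route(a01) is not None, route(a08) is not None)
```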

      Another way to get more throughput on your Cloverleaf server, and one that becomes inevitable once you grow large enough, is dividing your interfaces into more than one site.

      This allows more parallel processing to occur and less competition for the same Cloverleaf recovery database.

      Back when I had serial disks instead of SAN disks, I was also able to improve efficiency a great deal by dividing my disk controllers over 4 parts of my serial disk and placing the various sites evenly across the controllers and disks.

      Like you, I found that disk I/O was the biggest bottleneck.

      If you haven’t already done so, make sure you don’t have any excessive writing to process log files.

      On a separate test server that is okay, but it will cost you on a heavily used production server.

      You can always turn up EO config output as needed and then turn it back off when done.

      Russ Ross
      RussRoss318@gmail.com

    • #69162
      Russ Ross
      Participant

      I remembered another way we have reduced unnecessary disk I/O, which has other advantages, too.

      We stopped using bulkcopy and do field by field copies of only the fields that the receiving system will be using.
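      A rough sketch of the difference, with the message modeled as a plain dict and entirely hypothetical field names and downstream requirements:

```python
# Sketch: bulkcopy vs. field-by-field copy of an HL7-like message,
# represented here as a dict for simplicity. Fields are illustrative.

inbound = {
    "PID-3": "12345",       # patient ID
    "PID-5": "DOE^JOHN",    # patient name
    "PID-11": "1 MAIN ST",  # address (not used downstream)
    "OBX-5": "7.2",         # result value
    "ZZ1-1": "site-local",  # custom segment (not used downstream)
}

# Bulkcopy: everything goes out, used or not.
bulk_out = dict(inbound)

# Field-by-field: copy only what the receiving system consumes.
NEEDED = ["PID-3", "PID-5", "OBX-5"]  # hypothetical downstream needs
field_out = {f: inbound[f] for f in NEEDED}

print(len(bulk_out), len(field_out))
```

      Everything downstream of the xlate (SMAT, recovery database, network) then handles the smaller message instead of the full copy.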

      Russ Ross
      RussRoss318@gmail.com

    • #69163

      BULKCOPY uses I/O (disk read/write)? I assumed it was all done in memory? How does it work exactly?

      -- Max Drown (Infor)

    • #69164
      Russ Ross
      Participant

      Max Drown wrote:

      “BULKCOPY uses I/O (disk read/write)? I assumed it was all done in memory? How does it work exactly?”

      I was referring to the downstream resource consumption that inherently accompanies the use of bulkcopy and not the actual in memory copy.

      Since bulkcopy copies as much of the entire message as possible, your messages are typically much larger than they need to be for the receiving system.

      This equates to extra disk I/O as unnecessarily large messages are written to SMAT and the recovery database repeatedly.

      The larger message size inherent with bulkcopy also increases all the other overhead of message flow, like network traffic.

      For example, let’s say your typical ADT message is 2,000 bytes using bulkcopy, and when you switch to a field-by-field copy of what is used by the foreign system, your same ADT message is now 800 bytes.

      Each piece might seem small, but add them all together and you have just reduced certain resource waste by more than half.
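      The arithmetic from the example above, extended with a hypothetical daily volume and write count purely for illustration:

```python
# Worked arithmetic: a 2,000-byte bulkcopy ADT shrinks to 800 bytes with
# field-by-field copy. The message volume and writes-per-message figures
# below are illustrative assumptions, not measurements from the thread.

bulk_bytes = 2000
trimmed_bytes = 800
savings = 1 - trimmed_bytes / bulk_bytes   # fraction of bytes avoided
print(f"{savings:.0%} reduction per message")  # 60%, i.e. "more than half"

# At, say, 5 million messages/day, each written twice (SMAT plus the
# recovery database), the daily reduction in bytes written to disk:
messages_per_day = 5_000_000
writes_per_message = 2
saved_gb = ((bulk_bytes - trimmed_bytes)
            * messages_per_day * writes_per_message / 1e9)
print(f"~{saved_gb:.0f} GB/day less written")
```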

      Being the largest cancer facility in the world, with a huge message multiplier (millions upon millions of messages flowing through our Cloverleaf server each day), makes it easy for us to see the resource savings without needing a stopwatch.

      However, we have discovered there are additional benefits to doing field-by-field copy that make it worthwhile even without the resource savings.

      Actually I say we, but credit really goes to our teammate (Jim Kosloskey) for educating us on the advantages of using field-by-field copy over bulkcopy.

      Most of the additional benefits I allude to have to do with granularity of control, maintainability, and support after go-live.

      Bulkcopy’s main advantage, as I see it, is getting the integration done quickly up front; doing field-by-field copy takes me about an extra 2 days of xlate work up front, but I also get much more familiar with the xlate tool, HL7 pathing, iteration, etc.

      I also tend to find that doing a field-by-field copy makes me look at each field in more detail, resulting in better project team analysis of the integration and spec writing.

      Normally we have to make something like this a team standard so that everyone adheres to a consistent methodology, but I think the entire team has voluntarily endorsed using field-by-field copy instead of bulkcopy, even those of us, like me, who grew up on bulkcopy.

      Russ Ross
      RussRoss318@gmail.com

  • The forum ‘Cloverleaf’ is closed to new topics and replies.
