Running hcidbdump -r every 10 minutes

  • Creator
    Topic
  • #53668
    Arslan Khan
    Participant

      We have a situation here where messages queue up in the recovery DB. We have identified the cause and are working on interface reconfigurations to fix it.

      Meanwhile, until we fix the root cause, we are thinking about monitoring the recovery DB queue length by running the “hcidbdump -r” command, say, every 10 minutes. The intent is to get the message count and route it to a pager/email.
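      For illustration, a minimal sketch of what we have in mind, scheduled every 10 minutes via Task Scheduler. The threshold, the addresses, and the “mid:” line match are all assumptions; the actual “hcidbdump -r” report format would need to be checked first:

        import smtplib
        import subprocess
        from email.message import EmailMessage

        THRESHOLD = 500                  # assumed alert threshold; tune to volume
        ALERT_TO = "oncall@example.com"  # hypothetical pager/email gateway
        SMTP_HOST = "mailhost"           # hypothetical internal relay

        def recovery_db_count() -> int:
            """Count messages reported by hcidbdump -r for the current site.

            Assumption: one line per message containing 'mid:'; adjust the
            match to the actual report format.
            """
            result = subprocess.run(
                ["hcidbdump", "-r"],
                capture_output=True, text=True, timeout=300,
            )
            result.check_returncode()
            return sum(1 for line in result.stdout.splitlines() if "mid:" in line)

        def alert(count: int) -> None:
            """Send a plain-text alert through the internal SMTP relay."""
            msg = EmailMessage()
            msg["Subject"] = f"Recovery DB queue length: {count}"
            msg["From"] = "cloverleaf-monitor@example.com"
            msg["To"] = ALERT_TO
            msg.set_content(
                f"hcidbdump -r reports {count} messages in the recovery DB."
            )
            with smtplib.SMTP(SMTP_HOST) as smtp:
                smtp.send_message(msg)

        if __name__ == "__main__":
            count = recovery_db_count()
            if count > THRESHOLD:
                alert(count)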

      As we all know, the recovery DB is not the happiest kid in town. Do you think that executing this command so often could cause DB corruption issues (which might result in data loss)?

      We are on Windows 2008/CL 5.8.4. Message volume is around 1.5 mil/day (divided among 20 different sites).

      I’d love to hear your comments/concerns in this regard.

      Thanks.

    • Author
      Replies
      • #78494
        James Cobane
        Participant

          Hi Arslan, 🙂

          Is there any reason you can’t use the standard queue-depth alerts (outbound, pre-Xlate, or post-Xlate) to provide this notification? Is a specific thread (or group of threads) the culprit? If you can utilize the Alerts for this, it would probably be preferable to running ‘hcidbdump’ frequently. If you were on AIX, I’d be less nervous about running the command that often, but with Windows… well, it’s Windows… 😉

          Thanks,

          Jim Cobane

          Henry Ford Health

        • #78495
          Arslan Khan
          Participant

            Hi Jim,

            Well, the thing is, sometimes messages get stuck in the recovery DB but not on the destination side, so we’ll never get any queue-depth alerts. Also, this could happen across any of the 20 sites…

            hcidbdump is the only command that can provide us this monitoring capability (and how support would monitor the report is another nightmare altogether).
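            For what it’s worth, boiling the report down to one summary line per run looks roughly like this. A sketch only; the “state:” field match is an assumption that would have to be verified against the actual 5.8 report format:

              import re
              import subprocess
              from collections import Counter

              # Tally recovery-DB messages by state so support reads one summary
              # line instead of the full dump. The regex is an assumption; check
              # it against real hcidbdump -r output before relying on it.
              STATE_RE = re.compile(r"\bstate:\s*(\d+)")

              def state_summary() -> Counter:
                  out = subprocess.run(
                      ["hcidbdump", "-r"],
                      capture_output=True, text=True, timeout=300,
                  ).stdout
                  return Counter(m.group(1) for m in STATE_RE.finditer(out))

              if __name__ == "__main__":
                  counts = state_summary()
                  # e.g. "state 5: 120, state 7: 3"
                  print(", ".join(
                      f"state {s}: {n}"
                      for s, n in sorted(counts.items(), key=lambda kv: int(kv[0]))
                  ))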

            We are trying to stay away from this option as much as possible, but still want to see if anyone out there is doing something similar….

            And indeed, just the thought of running this on Windows is making me nervous!! 🙂

            I hope HFHS is not giving you a tough time 🙂

          • #78496
            James Cobane
            Participant

              What state are these messages in within the recovery database (i.e., state 5, state 7, other)?

            • #78497
              Arslan Khan
              Participant

                I think I have tried that option, but it did not work for me (could be Windows?? Don’t know yet). I’ll retry it one more time and see if I can generate alerts based on message states.

                Thanks for coming to the rescue again!! 🙂

              • #78498
                Bob Richardson
                Participant

                  Greetings,

                  Look at the online documentation on inbound queue depth alerts and perhaps transactions-per-second alerts (the latter is tricky, since it uses Xlate counts, and you may wish to check with INFOR support before configuring one).

                  We have one of those on our busiest, highest-volume inbound thread, the Epic Systems ADT feed.

                  Your messages may be getting stuck in the inbound queues and/or translation queues, depending on volume.

                  You can also look at the process configuration and check out translation throttling.

                  hcidbdump is high-overhead and can take a long time to execute, especially on 5.8; it can be delayed significantly once the databases grow past 500 MB.
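                  If you do script it anyway, guard against that overhead so a slow dump cannot pile up on a 10-minute schedule. A sketch, assuming a simple lock file and an arbitrary 540-second cap:

                    import os
                    import subprocess
                    import sys
                    import time

                    # Skip this run if the previous hcidbdump is still going; a dump
                    # that outruns the 10-minute interval is itself worth flagging.
                    # The lock path and the 540-second cap are assumptions.
                    LOCK = os.path.join(os.environ.get("TEMP", "/tmp"),
                                        "hcidbdump_monitor.lock")

                    def main() -> int:
                        try:
                            fd = os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                        except FileExistsError:
                            print("previous run still active; skipping", file=sys.stderr)
                            return 1
                        os.close(fd)
                        try:
                            start = time.monotonic()
                            subprocess.run(["hcidbdump", "-r"], capture_output=True,
                                           text=True, timeout=540)
                            print(f"dump took {time.monotonic() - start:.0f}s")
                        except subprocess.TimeoutExpired:
                            print("hcidbdump exceeded 540s; investigate DB size",
                                  file=sys.stderr)
                            return 2
                        finally:
                            os.remove(LOCK)
                        return 0

                    if __name__ == "__main__":
                        sys.exit(main())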

                  Hope this gives you some options to explore.

                  And then… you are on Windows, which is a concern of its own at those volume levels and that number of sites.

                  Good luck, good hunting!
