The finer points of DB Initialization


  • #113159
    Jay Clements
    Participant

      We recently had an issue where our recovery DB became very large due to downtime on an interface whose messages were much larger than we usually expect. Every month, as part of our patching process, we do a cleanup that includes reinitializing the databases with hcidbinit -iC. To fix the large, unstable database we had to do a destructive DB init with hcidbinit -ACf.
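
      For reference, these are the two commands involved (run with the site environment set and the engine down, per the Infor procedures; the comments reflect my understanding of each):

        # monthly, non-destructive: clears the ICL registry and the control files
        hcidbinit -iC

        # one-off, destructive: full reinit of the databases
        hcidbinit -ACf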

      I understand that -iC clears the ICL registry and the control files. What I don't understand is that all our other sites' databases are a "normal" size, and we have never, at least during my tenure (4 years), done a destructive reinit. We do "clear" the recovery and error databases with hcidbdump as needed and during recovery.

      I've also found this post, http://174.138.42.153/forums/topic/how-big-is-your-recovery-error-db/, which seems to suggest that hcidbdump does not actually reduce database size. I'm talking about hcidbdump -e -D and -r -D. I could be misunderstanding that post, though.
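
      Concretely, the cleanup commands we run are the following (flags exactly as discussed in that post; whether they actually reclaim file space is my question):

        # dump and purge the error database
        hcidbdump -e -D

        # dump and purge the recovery database
        hcidbdump -r -D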

      My questions:

      • Does hcidbdump actually reduce recovery/error database size?
      • Should we be doing an hcidbinit -ACf every patching cycle, after shutting down inbounds, clearing queues, and so on?
      • I have both documents from Infor on the destructive and non-destructive DB init processes; I'm just not understanding when to use each.

      Thanks

      Rob

      • #113162

        Nope, hcidbdump does not reduce the file size of the database. The database file grows to accommodate the queues as they grow, but it will not automatically shrink again when the queues empty out. This is why it is a best practice to keep queues lean and mean, i.e., never let queue sizes get too large. For example, use alerts to notify you when messages are queuing up or showing up in the error database, and then resolve the root causes quickly.
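
        As a minimal sketch, a scheduled check along these lines can help (the output parsing and the notification step are placeholders, not Cloverleaf specifics; adapt them to your version's hcidbdump output and your alerting tool):

          # placeholder check: investigate promptly if the error DB is not empty
          if hcidbdump -e | grep -qi "mid"; then
              echo "messages present in the error database"  # hook alerting in here
          fi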

        -- Max Drown (Infor)

      • #113163
        Jay Clements
        Participant

          We absolutely do that. What we didn't know was that the upstream source had started sending huge PDFs, which went across the wire to the downstream receiver. Each huge file would break the downstream receiver, but they had an automated system that would "fix" the problem on their end after a certain number of resends. So the interface would queue up and then dequeue, all under our alert thresholds; that's the best we can figure, at least. We've been doing this for 20 years, since the Quovadx days, and have never had this type of problem before. We've really dug into the DB structure and are trying to decide whether it's better to fully clear all our databases, at least on the site with such large files coming through, which is how I'm leaning now.
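
          Since queue-depth alerts can miss this pattern, we are considering watching the database files on disk as well, something like the check below (the databases path under $HCISITEDIR is my assumption, and the -size syntax is GNU find; verify both against your install):

            # flag any site database file larger than 100 MB
            find "$HCISITEDIR/exec/databases" -type f -size +100M -exec ls -lh {} \;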

        • #113165
          Jay Clements
          Participant

            I'm also having difficulty with the idea that messages accumulate in the databases without the size ever shrinking, even with the hcidbdump of the recovery and error databases for a given site (not the one with the huge files). I have an rLogM2K file on a site that handles document-embedded messages (also not the one causing problems), and it's 66 MB after years of running with the standard cleanup process. I just can't see how the recovery database would stay that relatively small if the only init we use is hcidbinit -iC. I think I'm missing a piece of this puzzle somewhere.
            • #114040
              Ben Ware
              Participant

                Hey Rob, my strategy has always been to try to keep the DB sizes under 100 MB. If we are already taking a downtime for something else (EMR upgrade, server reboot, etc.) and other conditions are ideal (the EDB is being maintained, the RDB is empty, and I don't have to save off files and replay them), then I will reinit everything I can. If a DB has grown well past 100 MB and there is concern it is negatively impacting performance, I might opt to take a brief downtime for a site outside of an already scheduled one. hcidbinit gives you a lot of options, though. I've been in situations where I planned to do an hcidbinit -ACf but conditions were less than ideal for either the EDB (unaddressed errors) or the RDB (a system unexpectedly down with messages queued up), so I chose to reinit only one of the two and came back around for the other when conditions were more ideal.
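
                As a rough sketch, my pre-reinit check amounts to the commands below (the same hcidbdump calls discussed above; reading the output is a manual step):

                  # confirm the recovery DB is empty and the error DB has no
                  # unaddressed errors before a full destructive reinit
                  hcidbdump -r
                  hcidbdump -e

                  # only if both are clear:
                  hcidbinit -ACf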
