Alert Type: tcl and tclalert template ??

  • Creator
    Topic
  • #53407
    Jennifer Hardesty
    Participant

      Has anyone used the tcl Alert Type and the tclalert template from the script editor toolbar?

      I have a tclproc which runs fine as a standalone.  All it does is report the number of messages in the recovery database.  I want to incorporate it into one of the tclalert templates so that alerts can be set up to run at set intervals on certain sites and, if the recovery db on those sites grows beyond a certain size, page the duty officer.
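
      A rough sketch of what that check could look like in a tclalert-style proc.  The hcidbdump flags, the "mid:" line counting, and the paging command are all assumptions here, so treat this as an outline rather than a drop-in script:

```tcl
# Hypothetical recovery-db depth check for use from a tclalert template.
# Assumptions: `hcidbdump -r` lists recovery-db entries with one "mid:"
# line per message, and /usr/local/bin/page_duty_officer is whatever
# paging hook your shop uses.  Both are placeholders -- adjust to your
# Cloverleaf version and your site's setup.
proc checkRecoveryDbDepth { threshold } {
    # Dump the recovery database and count message entries.
    set dump  [exec hcidbdump -r]
    set count [regexp -all -line {^\s*mid\s*:} $dump]

    if { $count > $threshold } {
        exec /usr/local/bin/page_duty_officer "recovery db depth: $count"
    }
    return $count
}
```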

      We have had a recent problem since our go-live with Epic: volume has increased so much that some of our vendors haven’t been able to keep up, and while we are trying to troubleshoot how to resolve that, I would like to determine where the bottlenecks are.  Things have gotten so bad that the recovery db stopped writing and the logs just have an hour-or-so gap…very bad.

      Thanks in advance!

      Jennifer Hardesty

      MMC

      — Currently Working Nightshifts in the Go-Live Command Center —

    Viewing 3 reply threads
    • Author
      Replies
      • #77585
        Keith McLeod
        Participant

          Jennifer,

          I too work on EPIC accounts.  They send an unusually large number of ADT_A31 messages compared to any other system I have ever worked with.  One mechanism I use is to split the downstream systems into those that use ADT_A31s and those that don’t.  For instance, lab orders were beating the ADT to the lab system because of the backup of messages.  The lab said they didn’t need A31 messages; the ADT_A08s would suffice.  We have been working at eliminating ADT_A31 messages whenever possible. I split mine into 5 ADT processes: one for distribution, 3 for non-ADT_A31 systems, and one for ADT_A31 systems.
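
          For the destinations that don’t want A31s, one common way to enforce it is a small tps-style filter proc that KILLs them on the route.  This is a generic sketch of the usual keyed-list/MODE proc skeleton; the crude string match on the raw message stands in for proper MSH-9 parsing:

```tcl
# Generic tps-style filter: kill ADT_A31 messages, pass everything else.
# Uses the standard Cloverleaf proc pattern (keyed-list args, MODE
# switch, disposition-list return).  The "*ADT?A31*" match is
# deliberately crude; a production proc would parse MSH-9 properly.
proc tpsKillA31 { args } {
    keylget args MODE mode

    switch -exact -- $mode {
        start    { return "" }
        run {
            keylget args MSGID mh
            set msg [msgget $mh]
            if { [string match "*ADT?A31*" $msg] } {
                return "{KILL $mh}"
            }
            return "{CONTINUE $mh}"
        }
        shutdown { return "" }
        default  { return "" }
    }
}
```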

          Another issue: if they do a historical import, make sure they set the flag not to generate the ADT_A31 message.  This can trigger literally hundreds of thousands of ADT_A31 messages instantly, burying your engine and effectively backing up production for hours… It took me a while to get them to use the flag.

          This may not help with your question specifically, but I hope it helps otherwise.  We have nearly 100,000 ADT messages per day, 85,000 of them ADT_A31s.  We have been after EPIC about this for quite some time now.  We found circumstances where one patient update triggered over 300 ADT_A31 messages.  This is just wrong.

        • #77586
          Vince Angulo
          Participant

            Same experience with Epic.  You can also expect even more messages after each upgrade.

            We found our bottleneck was at state 7, because our traditional method of suppressing messages in the translate was inappropriate for Epic ADT volumes.  We

          • #77587
            Jennifer Hardesty
            Participant

              Unfortunately, your advice on the flag is way too late.  I wish I’d known about that flag when I was throttling those backed-up messages over a week’s period during several “surprise” uploads. 😛

              We did manage to convince them last week to only trigger A31s on Patient Level updates and A08s on Visit Level updates, and this cut the A31 traffic down quite a bit, but it’s still pretty high and I haven’t been able to convince anyone they don’t want A31s.  In fact, one of my peers is creating A31s for every merge message “just in case” the application didn’t get the updated info during the conversion or something.  I didn’t understand the logic, and the thought of contributing to the wickedness exasperated me.

              During the testing phase, the volume of messages was so alarming, we had to have an emergency upgrade to our Cloverleaf test server.  We also upgraded the prod server, but I kept saying that I didn’t think it would be enough, and 48 hours before Epic actually went live, when they began admitting the patients, it almost brought down our production server.  We were adding disk space and memory incrementally just trying to stem the tide even as my peers were moving the new sites into place.

              I have process-based cyclesaves with file size monitors on all of the Epic process logs.  If anything gets over 1.25G, the process cycles.  We had to do that because when we were cycling each site only once a day, everything connected to Epic would back up and fail several times a day.
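
              The size check itself is simple enough to sketch in Tcl.  The hcicmd invocation below is an assumption (many shops wrap cycling in their own cyclesave script instead), so substitute whatever your monitor actually calls:

```tcl
# Sketch: cycle a process if its log file has grown past a limit
# (1.25 GB here, matching the threshold above).  The hcicmd call is a
# placeholder for however your site cycles/saves a process's logs.
proc cycleIfLogTooBig { logFile process {limitBytes 1342177280} } {
    if { ![file exists $logFile] } { return 0 }
    if { [file size $logFile] > $limitBytes } {
        exec hcicmd -p $process -c "cycle"
        return 1
    }
    return 0
}
```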

              Jennifer Hardesty

              Systems Integration Specialist

              MMC

              (still working the night shift in the Command Center for Epic Go-Live, week 2)

            • #77588
              Mitchell Rawlins
              Participant

                I took the approach of “don’t ask; tell.”

                Epic triggers A31 on patient-specific (essentially No-Add) data, and A08 on visit-specific (any of your over-time) data.  I’ve seen very few workflows that don’t generate both patient and visit specific changes, so we trigger an A08 for nearly every A31.  Right before a merge is where we have the most likelihood of A31s without an A08.  It’s probably easier to fix failed merges than to fix a broken engine.

                Our lab folks wanted the A31s, but they couldn’t use them: there’s no visit ID in an A31, and that’s a mandatory field in their system.  So I blocked the A31s, and they didn’t notice.

                You may not be able to pull that off, but at least during our go-live it was more important to keep messages flowing, even if it required more manual clean-up on the end systems than normal.

                I don’t remember us having too much trouble with message volumes during go-live.  It’s important to only route the message types that are needed, and A31s are the biggest ones to worry about.

                Good Luck!
