Forum Replies Created
If you were going to recommend a book or online class to a complete Java beginner whose main interest is using it with Cloverleaf and possibly other integration-type tools, what would you recommend?
Jennifer Hardesty
Cloverleaf Developer
I’m seeing these when hcialert2email fails.
Sorry for my slowness in getting back.
My original post was sort of an “all-inclusive” team grump regarding the change. As we all know, every site and even every individual has their preferences. We here at MMC have spent years tweaking the logs so they appear exactly the way we wish. If you noted in my example, our part of the information already has the thread names as well as date/time stamps. We have very detailed logs which include the messages, the ack/naks, the tclprocs that process or suppress them, as well as the reasons why they are suppressed.
I figure, if you wanted that kind of information in your logs, you should have put it there in the first place. 😉 I find that having it thrust on us now, and in such an intrusive manner, is rather inconsiderate and impolite. One of our analysts almost considered it a “show-stopper”. (I’m not kidding. I talked her out of that.)
As you said, one person’s improvement is another person’s junk -er- nightmare -er- albatross.
We have upgraded our telnet services this week, so we’re trying Xming and Kitty. Now everyone is using the same product at least, so we’re not restricted to the small window. And I have found a more appropriate way to strip the inconsistent number of starting characters out when looking at the logs: cut.
set process [file tail [pwd]]
is the most direct way I know. The other way I’ve seen it done is:
set process "[lindex [split [pwd] /] end]"
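As a quick illustration (using a made-up path, since `pwd` depends on where the engine happens to be running), both forms return the same trailing directory name:

```tcl
# Made-up site path for illustration only
set fake_pwd "/hci/cis6.0/integrator/test_site"

# file tail returns the last component of a path
set process1 [file tail $fake_pwd]

# splitting on "/" and taking the last element gives the same result
set process2 [lindex [split $fake_pwd /] end]

puts "$process1 $process2"
```

`file tail` is the more direct of the two, and it also behaves sensibly if the path happens to end with a separator.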
You keep your SMAT files in a separate area outside of the sites?
Also, out of curiosity, about what size are your SMAT files before they get cycled at the 3 hour limit?
I see. I suppose that method does work well if you
A.) Are not using a master site method where all of your tclprocs, Xlates, etc. are in one global location.
B.) Have unlimited space to have unlimited copies of every file in every site, including the smat files for however long you archive them — some of our sites keep smats for up to 90 days.
Well, that’s why I was asking what methods folks use. What keeps you protected in those emergencies when you have to back everything out? Do you have a manual system in place and if so, what does it involve?
I’m just curious.
Russ,
What you describe would work fine for backing out of any issue with a NetConfig where a route has changed, a pre-proc has been added to a route, or even a port has been altered, but what about restoring an Xlate, tclproc, variant, table, or even an alert?
🙂
re: header information — what kind? Does it need to be per message or per file?
Also, is this excel friendly? Would you mind sharing? Not the Xlate, just the variant.
🙂
I’m looking to do this too. I don’t suppose you got any offline answers.
Unfortunately, your advice on the flag is way too late. I wish I’d known about that flag when I was throttling those backed-up messages over a week’s period during several “surprise” uploads. 😛
We did manage to convince them last week to only trigger A31s on Patient Level updates and A08s on Visit Level updates, and this cut the A31 traffic down quite a bit, but it’s still pretty high and I haven’t been able to convince anyone they don’t want A31s. In fact, one of my peers is creating A31s for every merge message “just in case” the application didn’t get the updated info during the conversion, or something. I didn’t understand the logic, and the thought of contributing to such wickedness exasperated me.
During the testing phase, the volume of messages was so alarming, we had to have an emergency upgrade to our Cloverleaf test server. We also upgraded the prod server, but I kept saying that I didn’t think it would be enough and 48 hours before Epic actually went live, when they began admitting the patients, it almost brought down our production server. We were adding disc space and memory incrementally just trying to stem the tide even as my peers were moving the new sites into place.
I have process-based cyclesaves with file size monitors on all of the Epic process logs. If anything gets over 1.25G, the process cycles. We had to do that because when we were cycling each site only once a day, everything connected to Epic would back up and fail several times a day.
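For anyone curious, the check behind a size-triggered cycle is simple. This is only a hypothetical sketch: the 1.25G threshold is from above, but the log path and the cycle action are placeholders, not actual Cloverleaf configuration.

```tcl
# 1.25G threshold, in bytes (wide() keeps it an integer)
set limit [expr {wide(1.25 * 1024 * 1024 * 1024)}]

proc needs_cycle {logfile limit} {
    # cycle only when the log exists and has outgrown the threshold
    expr {[file exists $logfile] && [file size $logfile] > $limit}
}

if {[needs_cycle "/path/to/epic_process.log" $limit]} {
    # a real monitor would trigger the site's cyclesave action here
    puts "cycling process log"
}
```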
Jennifer Hardesty
Systems Integration Specialist
MMC
(still working the night shift in the Command Center for Epic Go-Live, week 2)
September 11, 2012 at 7:11 pm in reply to: Using tcp_acknak.pdl & mlp_tcp.pdl? Disc space issues? #77099
James, I understand what you’re saying about configuring both localhost threads to match, but what do you do when you have the following scenario:
process_a
foreign_inbound => hs1_adt_thread
=> hs2_adt_thread
=> hs3_adt_thread
process_b
hr1_adt_thread => js2_site_hub_thread
hr1_dx_thread =>
We would want to configure foreign_inbound with mlp_tcp.pdl because it is an incoming connection and requires proper ack/nak’ing. However, the client/server pair hs1_adt_thread/hr1_adt_thread and hr1_dx_thread and js2_site_hub_thread with their hypothetical matches are all “local” and can be configured with no or minimal ack/nak’ing — tcp_acknak.pdl.
However, in some cases we have found that when a thread such as foreign_inbound has mlp_tcp.pdl as the server and routes to a client with tcp_acknak.pdl, errors are generated. Sometimes it’s only a single error per message received, and sometimes the error spins out of control and fills up the log at a rate of a few per minute.
So what are we doing wrong?
I’ve written a tcl proc to break up the large message into individual messages. However, when I attempt to run it in the Testing Tool, it doesn’t ever come back with anything.
This is becoming extremely stressful. There are apparently over 31,000 items in this single message. I don’t understand why Lawson can’t give us the items in batches of 10,000.
It’s bad enough that we are supposed to check 2x an hour for this file because the users want the update “real time”. If they are going to send that much data every time, multiple times a day…per facility…Gosh, I thought the increased traffic from Epic was going to endanger the server, but maybe the Lawson inventory updates will bring it down…
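For what it’s worth, here is a minimal standalone sketch of the kind of splitting proc I mean. The "ITM|" segment name and the carriage-return separators are assumptions for illustration, not the actual Lawson layout, and the engine’s message-handle plumbing is left out entirely.

```tcl
# Hypothetical sketch: split one large batch payload into per-item messages.
# Assumes each item record starts with an "ITM|" segment (assumption) and
# that segments are separated by carriage returns.
proc split_batch {payload} {
    set items {}
    set current {}
    foreach seg [split $payload \r] {
        if {[string match "ITM|*" $seg] && $current ne {}} {
            # a new item is starting; flush the one we were collecting
            lappend items [join $current \r]
            set current {}
        }
        if {$seg ne {}} { lappend current $seg }
    }
    if {$current ne {}} { lappend items [join $current \r] }
    return $items
}

# Example: two items, each with one trailing detail segment
set batch "ITM|1\rDSC|a\rITM|2\rDSC|b"
puts [llength [split_batch $batch]]
```

Inside the engine, each element of the returned list would then be handed back as its own message rather than printed; that part depends on the site’s tclproc conventions, so I’ve left it out.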
March 8, 2012 at 6:25 pm in reply to: tcl script to sort dumped recovery db messages by timestamp? #76183
Oh, I love/hate when it’s something as simple as that.
I’ve been beating my head trying to figure out the simplest way to do this. 🙄
Jim, you are my BFF today! 😀
Thank you, thank you!
March 2, 2012 at 3:25 pm in reply to: Alerts Continue to Fire after Multiple hcimonitord Resets #70448
That doesn’t appear to be our problem. 🙁
Early Monday morning, the production server had to be brought down rather ungracefully and the resuscitation was even less pretty. It took around four hours working with Cloverleaf tech support to bring all four sites back up.
(The root cause apparently has to do with some sort of runaway “process” on the server itself which eventually sucked up all the CPU and memory and hung everything.)
Anyway, here’s the weird thing: since the recovery, all of the “Last Received” alerts on Site 1 only (note: this is not a problem on any other site) falsely fire every single time. There are no problems with any other types of alerts. They all appear to be working fine.
I have tried deleting them all and then adding them back in. I’ve tried changing the frequency of the checks, changing the email routing, etc. It doesn’t matter; they still trigger falsely.
And tech support believes they are “ghost” messages left over from the four hours when we were down on Monday. 🙄
We’ve reinitialized Site 1 multiple times. We’ve cleaned it, bounced it, stopped and restarted the lock manager and the monitor daemon.
You can see that data is flowing through all of those threads. They are all up to the minute. Yet, we need these alerts to work for a reason. Our on-call officers rely on those alerts and I am at a total loss.