Forum Replies Created
Well, at CUHC we reboot the AIX server every 5 years whether it needs it or not.
Other than that, we check the error database each weekday and address any errors found, keep an eye on disk space usage, and sometimes, when the major applications have a downtime, we will initialize the databases (provided that they are empty).
We have scripts in place that cycle process logs and SMAT files, and also archive/prune the old files (a rough sketch of the pruning part is at the end of this reply).
The production server is backed up each weekday and the test server every Saturday.
We do not do any kind of hardware PM (like in the old days) – today’s budget-focused operations allow only for swapping parts if they break.
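For what it’s worth, the pruning piece is nothing fancy. Here is a rough Tcl sketch – the directory, file pattern, and 30-day retention are just assumptions for illustration:

# Sketch only -- adjust the directory, pattern, and retention to taste.
set dir "/hci/cis/log/archive"
set cutoff [expr {[clock seconds] - 30*24*60*60}]   ;# keep roughly 30 days
foreach f [glob -nocomplain -directory $dir *.log] {
    if {[file mtime $f] < $cutoff} {
        file delete -- $f
    }
}

An archive step (compress, then move) would slot in just before the delete.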
This is probably not the exact help you’re looking for, but…
I would sit down with the two vendors and show/explain the way an HL7 order conversation should work per the standard. Then send them back to their desks to correct their interface so the engine does not need to base the handling of an asynchronous message on the values/contents of another asynchronous message (that may never come) – or to refund the money paid for the non-compliant interface.
But that’s just my opinion – I could be wrong.
In my experience the comm daemon does actually connect to Cloverleaf. For a time we had both plain-text and RTF reports coming from the Outgoing Documentation interface – the plain-text messages came into Cloverleaf via the comm daemon connection while the RTF messages came in via the EPS connection. You can use netstat to confirm that there is a connection from the comm daemon even though there is no traffic.
That is “normal” Epic behavior. EPS breaks the connection after each message – giving the Up/Opening flapping. The Bridges comm daemon will only connect when started or when it has a plain-text message to send. Unless there is a plain text message to be sent the comm daemon will not reconnect after the process is cycled.
March 8, 2012 at 5:53 pm in reply to: tcl script to sort dumped recovery db messages by timestamp? #76181
Here is what I would do –
Write a tcl script to read from the file, parse out the message date/timestamp and sequence number, format that info into a sort-friendly stub (like yyyymmddhhmmssxxxxx), and write out the message into a new file with the sort-friendly stub prepended.
Use the sort command to get that file into the desired order.
Write a tcl script to read from the sorted file, strip off the sort-friendly stub, and write out the message. This last file is then ready to re-send.
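Here is a rough sketch of that first pass, assuming one message per line in the dump file, with MSH-7 holding the date/time and MSH-13 the (numeric) sequence number – adjust the parsing to match the actual dump format:

# Sketch only -- file names and field positions are assumptions.
set in  [open dumped_msgs.txt r]
set out [open keyed_msgs.txt w]
while {[gets $in msg] >= 0} {
    # First segment should be MSH; split it on the field separator
    set msh [split [lindex [split $msg \r] 0] |]
    set ts  [lindex $msh 6]     ;# MSH-7  date/time
    set seq [lindex $msh 12]    ;# MSH-13 sequence number (assumed numeric)
    if {$seq eq ""} { set seq 0 }
    # Prepend a fixed-width, sort-friendly stub
    puts $out [format "%s%05d|%s" $ts $seq $msg]
}
close $in
close $out

Then sort keyed_msgs.txt > sorted_msgs.txt, and the final pass just strips everything up to and including the first “|” before writing each message back out.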
Kind of a brute-force approach (and I’m sure others have already come up with a one-line perl/awk/sed command to do all of the above), but workable.
Good luck!
Continuing off topic… This brings to mind the time I had to interface an order entry application to a radiology application, but the order entry app had no interface, no development tools, and no vendor. What it did have was the ability to send a copy of each “order report” to a specific workstation and its direct-attached printer (as well as to the radiology department workstation and printer).
A workstation was set up in the computer room and the order app configured to send a copy of all rad orders to that workstation. Because the app had a custom printer driver which required feedback from the very-specific printer model, a “parallel duplicator” was attached to the workstation and one of the very-specific printers. The “duplicator” would also send a copy of the parallel stream to a second parallel port, which was attached to a “parallel to serial” converter. The serial output was then attached to a serial port on the Cloverleaf box.
Using a custom async PDL, some TCL, and an xlate, the printed rad order was translated to an HL7 order message and sent to the radiology app over DECnet.
Good times, good times…
The error “invalid command name” makes this look like a whitespace/continuation problem. The interpreter is parsing the “resync:” phrase as a command – not as an argument to the hci_pd_msg_style command. Check to make sure that there is nothing except a carriage return after the backslash (“\”) on the line with “field:data”. I suggest deleting the lines with “field:” and “resync:” and retyping them. Or do away with the continuation completely and just have one line with all the arguments –
hci_pd_msg_style basic phrase:basic-msg field:data resync:\x2
That should eliminate the runtime error.
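If the continuation form is preferred instead, the shape would be something like this (untested sketch) – the key point is that nothing, not even a trailing space, may follow each backslash:

hci_pd_msg_style basic \
    phrase:basic-msg \
    field:data \
    resync:\x2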
Now as to what should be the resync character – I agree with Jim that the vendor should be using the “tried and true” MLP characters.
You may want to set the advanced scheduling seconds to a value (like 0 or 1) as well. If it is left as just an asterisk, the task will try to run during every second of the specified minute – probably not what you want.
Please note that this is NOT an official Quovadx fix – just what worked for us. The newest version of the JRE – including the DST changes in the rt.jar file – can be downloaded from Sun:
http://java.sun.com/products/archive/j2se/1.3.1_20/index.html
Copying the rt.jar file from the installed JRE into the Quovadx directory and restarting the IDE fixed the displayed times.
April 14, 2006 at 8:24 pm in reply to: Vendor requesting to use the same port # for 2 interfaces #58474
They come back to the sending IP:port combination. Think of each connection as a virtual circuit, linking a unique sending IP:port to a unique receiving IP:port.
The outbound to Fred will be:
From QDXIP:ephemeralPort1 to FredIP:12345
The outbound to George will be:
From QDXIP:ephemeralPort2 to GeorgeIP:12345
So messages sent to Fred will leave from QDXIP:ephemeralPort1 and go to FredIP:12345. When Fred sends an ack it will go back the way it came and be received by the “outbound to Fred” thread on QDXIP:ephemeralPort1.
Same with messages sent to George – they will leave from QDXIP:ephemeralPort2 and go to GeorgeIP:12345. When George sends an ack it will go back the way it came and be received by the “outbound to George” thread on QDXIP:ephemeralPort2.
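If you want to see it for yourself, open two client connections to the same remote port and look at the local half of each. A minimal Tcl sketch – the host names and port are made up:

# Sketch only -- replace the hosts/port with real listeners.
set fred   [socket fred.example.com   12345]
set george [socket george.example.com 12345]
puts "to Fred:   local side = [fconfigure $fred -sockname]"
puts "to George: local side = [fconfigure $george -sockname]"
# Each -sockname reports the local IP plus a different ephemeral port.

A netstat -an on the engine box will show the same two distinct local ports.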
It should look like –
IF 0(0).PV1.00146.[0] eq =KAT
Or better yet –
COPY =KAT -> @tmpkat
IF 0(0).PV1.00146.[0] eq @tmpkat
Increase the Max Messages parameter. What you have now is telling the engine to process 1 message (Max Messages) every 5 seconds (Read Interval).
We have been working with Cliff Warren, and the SIS (actually the SIV*) files were being closed by SYST. That has been changed in the test environment and it cleared up the problem there. The transaction volume in test isn’t high enough to give me the full-blown “warm and fuzzies”, but I do look forward to the change being applied in production tonight. I am holding off on scheduling a “bounce” or putting in alerts that cycle the thread until I’m sure that the mainframe side has exhausted all their options.
I certainly appreciate the suggestions – thanks!
The mainframe folks don’t want us to time out on the Await Reply because then they would potentially have multiple copies of the message (and the resulting application errors that they would have to address). They are receiving the message but since their files are closed they delay sending the ACK until the files are open again. We also have a firewall between us and I’m fairly certain that the firewall is timing out the connection and preventing the ACK from getting back to Cloverleaf once they send it. Thanks for the suggestion though.
Joe’s suggestion was not quite what I had in mind (I did post an enhancement request that covers what I wanted to do) but it is still greatly appreciated. In case it’s helpful to anyone else, we did find a work-around – by having the target files “allocated” on the MVS mainframe, the FTP server on the mainframe was able to use the record lengths from the catalog instead of defaulting to (and truncating at) 80 bytes.