Forum Replies Created
Is this connection over VPN? Then it could be a whole different set of problems. We have had problems with VPN connections, and I have heard similar opinions on this forum. What seems to happen is that the underlying connection disappears while both the sender and the receiver think they have active connections at the protocol level.
January 21, 2010 at 9:57 pm in reply to: Are you running CL in a virtual environment? i.e. VMWare #68437
The VMware server setup we have for the interface manager is as follows. The VMware version is 3.5 Update 4:
2x Dell PowerEdge 2950 servers
We have seen an occasional process panic when the receiving vendor is troubleshooting/working on his listening ports and gets his listener into a debugging mode. It seems that erratic responses back from the receiving thread can eventually send the Cloverleaf process into a panic.
I would proceed as follows:
1. Make sure all threads in the process are marked not to start automatically.
2. Start the process back up.
3. Start one outbound thread, say X, and monitor the recovery DB (hcidbdump -r -d X) to see that messages to this thread are moving out. When it is done, repeat step 3 for the next outbound thread until you get through all of the outbound threads. Then start each inbound thread one after the other, giving it time and monitoring the recovery queue for anything stuck.
I recall that the Cloverleaf developers I spoke to pretty much said it is not easy to map a specific panic error to a definite cause, but the general recovery process remains as outlined above.
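For what it is worth, here is a minimal command sketch of step 3 above, assuming two hypothetical outbound threads named adt_out1 and adt_out2 (the only form used is the hcidbdump -r -d option already mentioned; check the hcidbdump usage text on your release before relying on anything else):

hcidbdump -r -d adt_out1   # re-run until no messages remain queued for adt_out1
hcidbdump -r -d adt_out2   # then start the next outbound thread and drain it the same way

Each invocation just lists the recovery database entries destined for that thread, so you can watch the queue drain before moving on.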
If you take a msg file that is x0D terminated at segment ends and x0D x0A terminated at EOF and edit it in 7Edit, it preserves the x0D termination at segment ends but removes the trailing x0A at EOF. After you edit in 7Edit, open the file in a hex editor like XVI32 and insert a x0A at the end of the file, and it will work.
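If you would rather not open a hex editor every time, a one-line append from a Unix shell (on the engine host, for example) restores the trailing x0A; the file name msgfile.hl7 is just an example:

printf '\n' >> msgfile.hl7

printf '\n' writes exactly the single x0A byte at the end of the file, which is the byte 7Edit strips.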
If you do a hcidbdump -r -d you should see one message in state 14. You will need to save the message, delete it, and let the messages flow. However, the correct sequence for doing this is:
1. Stop the remote server that is getting the data.
2. Stop the OUT thread, i.e. the client thread sending the data from Cloverleaf to Epic.
3. Then run hcidbdump -r -s 14 -d, save the message, and delete it from the queue.
4. Start the remote server port.
5. Then start your OUT thread.
Chances are your remote server errors on something, recovers, but never sends back an ACK, so your OUT thread is left trying to resend it over and over again.
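To make step 3 concrete, the command I mean is just the one quoted above; I do not recall the exact switches for saving and then removing the message, so check the hcidbdump usage text on your release for those:

hcidbdump -r -s 14 -d   # dump the recovery-database message(s) sitting in state 14; save the output somewhere safe before deleting anything from the queue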
Jim Kosloskey wrote: You could stop the thread. Then if you have an alert checking for that condition, it will trigger. That alert can email/page someone.
Jim,
What is the command to kill a thread from a TPS UPOC?
I tried hcicmd but that did not work.
Thanks
A 'view HL7 format' option on the SMAT screen where we can choose a variant and map it to an HL7 file, so we can quickly respond to queries about messages received/sent through the engine. It would also be nice to be able to select by HL7 fields in SMAT.
Bill,
I recall reading an earlier post with an explanation of how the Dir Parse and Deletion procs work. I cannot seem to find the link, but here it goes: you can use the Dir Parse TPS and choose which files in the configured directory you want to process.
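Since I cannot find the original link, here is a rough Tcl sketch of the kind of directory-parse TPS proc I mean. The proc name and the *.hl7 pattern are made up, and this is from memory rather than a tested proc, but as I recall the dirparse UPOC hands the proc a message whose content is the list of file names found in the configured directory, and whatever list you msgset back is what the protocol goes on to read:

proc dirparse_keep_hl7 { args } {
    keylget args MODE mode
    switch -exact -- $mode {
        start - time - shutdown { return "" }
        run {
            keylget args MSGID mh
            # the message content is the directory listing; keep only the files we want
            set keep {}
            foreach fname [msgget $mh] {
                if { [string match "*.hl7" $fname] } { lappend keep $fname }
            }
            msgset $mh $keep
            return "{CONTINUE $mh}"
        }
        default { return "" }
    }
}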
At the point of Receive Reply the original message is in state 14. If you CONTINUE at this point, in 5.5 there is not much that can happen to that message except possibly ending up in an undefined state. If you stop and start the thread, this message may resume as a new read message on the thread and then follow the route to state 5, where you want it. I would check and see what happens to the state 14 message when you CONTINUE. In any case, I would be wary of CONTINUE on state 14.
Hi Vince, we recently did an upgrade from 5.2 AIX to 5.5 Linux on VMware and had practically no downtime. I presented a talk on this at the recent Healthvision user conference in Dallas, TX on Oct 17. I have enclosed the PPT.
You could dump the USERDATA portion of the message in an inbound TPS and look at it via a hex editor, but that would still give you what the engine formatted from what came in. I think what you are asking is to dump what the PDL gets. It may be possible to dump the contents at the PDL stage (assuming you use mltcp.pdl), but I haven't messed with PDLs. You could always put a network snooper utility on the incoming port that sniffs the traffic and dumps it to a log file; this would be outside Cloverleaf. Check this link http://www.netmon.org/tools.htm for network sniffer tools.
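As a rough illustration of the first idea (dumping USERDATA from an inbound TPS), something along these lines should get you a file you can open in a hex editor. The proc name and the /tmp path are made up, and I am assuming your release exposes the driver metadata through msgmetaget, so treat this as a sketch to adapt rather than a drop-in proc:

proc dump_userdata { args } {
    keylget args MODE mode
    switch -exact -- $mode {
        start - time - shutdown { return "" }
        run {
            keylget args MSGID mh
            # append this message's USERDATA metadata to a scratch file for inspection
            set fh [open "/tmp/userdata_dump.txt" a]
            puts $fh [msgmetaget $mh USERDATA]
            close $fh
            return "{CONTINUE $mh}"
        }
        default { return "" }
    }
}

As noted, this still only shows you what the engine has already built from the PDL, not the raw bytes on the wire.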
I am assuming these are TCP/IP PDL threads. If so, is the problem on a client thread or a server thread? If it is a client, it will always try to reconnect when a connection is bad, and the SQL server at the other end must respond to that reconnection request; if it does not, the problem is that the SQL server is not answering reconnect requests. If it is a server thread, then you should turn on multi-server mode and make sure you have DRIVERCTL on. Your reply recovery_33 proc also has to be updated to extract the DRIVERCTL field from the message and send it over in the reply. Please post if it is a server and I can send you the reply proc you need. Also, if you are the client and don't really need persistent connections, look at the solution offered in:
https://usspvlclovertch2.infor.com/viewtopic.php?t=4014
You can set up a non-persistent connection that closes after an ACK if nothing else works.
I would make sure the recovery and error queues are empty. Also, is it possible that some file got corrupted in the installation? It is possible to delete the bin and lib directories. If the code uses dynamic linking (I am not sure whether the engine does), it could just be picking up a wrong version of a library (DLL).
February 6, 2009 at 6:04 am in reply to: Difference in XLATE results under LINUX QDX5.5 vs AIX 5.5 #66772
John, thanks for your suggestion. I looked at the formats, and we were using the default 5.2 formats (integrator/formats) for ADT_04 for both input and output. I do not want to replace the 5.5 formats with the 5.2 formats if possible. As of yesterday, our support was saying that this may be a QDX5.2 bug that never got patched and that 5.5 is running correctly!