Forum Replies Created
This happens with earlier versions of CL as well; we’re running 5.7 R2 on AIX, and it occurs. We follow the same actions as Peter describes.

Thanks, Jim. I’ll check with the network folks.

I just tried again, and still get the same error.

Mike,
I’m not sure which version you are running, but if you’re on version 5.7r1 or below, there’s a script in this package that will do what you need. The script name is netconfig.pl. You could modify it as needed.
https://usspvlclovertch2.infor.com/download.php?id=51
Joe
Tim,
Max Drown posted a strategy that you can use to prevent the timeout here:
As a temporary workaround, I renamed the proc hcitpstestshowbydisp in tpstest.tcl (the proc I use most when testing TPS) to ahcitpstestshowbydisp, and this alleviates the issue when changing test procs: it always appears at the top of the list. If you use another proc more often, you could rename it to start with an “a.” Of course, you’ll need to run mktclindex after the rename.
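The rename itself is just a text substitution followed by rebuilding the proc index. A minimal shell sketch (the stand-in tpstest.tcl line and the backup name are illustrative; mktclindex is the Cloverleaf indexer and is only noted here, not run):

```shell
# Illustrative stand-in for tpstest.tcl; in practice the file already exists.
printf 'proc hcitpstestshowbydisp {args} {}\n' > tpstest.tcl

cp tpstest.tcl tpstest.tcl.bak    # keep a backup before editing
sed -i 's/hcitpstestshowbydisp/ahcitpstestshowbydisp/g' tpstest.tcl

grep -c 'ahcitpstestshowbydisp' tpstest.tcl    # prints 1

# Then rebuild the Tcl proc index so the engine finds the renamed proc:
# mktclindex
```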
It looks like the LWP::Simple Perl module is not installed. I’d have your OS folks install that mod, and retry.
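A quick way to confirm whether the module is present before involving the OS folks (a hedged sketch; the cpan line in the comment is one common install route, and your OS packaging may differ):

```shell
# Prints "installed" or "missing" depending on whether LWP::Simple loads.
if perl -MLWP::Simple -e1 2>/dev/null; then
    echo installed
else
    echo missing    # e.g. have it installed via: cpan LWP::Simple
fi
```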
Excellent. Thanks, Rob. I appreciate the prompt feedback.
See, I knew it would be something very simple. 🙂 Thanks Michael.
Excellent. That’s what I was hoping to hear. Thanks, Jim.
Joe
Thank you, Charlie. Any assistance would be appreciated. We encountered this incident again Friday night (8/29) for the second time in less than two weeks.
Joe
Hi Charlie.
Your response regarding granting enough cycles triggered another question. We’ve been running CL 5.6 Rev 2 on a virtual server (VMware ESX hosts + EMC CLARiiON SAN) under Red Hat 5.0.
Recently, we had an incident where the SAN array on which the Cloverleaf virtual server is stored experienced higher-than-normal write cache flushes. Because the ESX host timed out waiting for a write confirmation from the SAN array, the host running the virtual server sent SCSI abort commands to the array. In turn, Red Hat was unable to write to its local drive in a timely manner and, as a protective measure, went into read-only mode, which of course brought down all the Cloverleaf processes.
I was curious if you or any others have heard of / experienced this behavior in a virtual server / SAN environment. Oddly, none of the other virtual servers (hosting a variety of applications) writing to the same SAN reacted in a similar manner during this incident.
Thanks, in advance.
I would agree, if all your threads were located in only one site. But if the site you bring up first has threads sending orders through the engine to a receiving system, while the threads that deliver ADT to that same receiving system are located in another site that you bring up later, you may have problems.
This is probably obvious and assumed. I like this script, but I think it should only be used in a production environment after confirming that all inbound data to the site has been stopped and there are no messages pending in the recovery database.
You can also start all processes and threads in a site with one command or script, but that could be dangerous and cost you resend time later: many production systems require ADT from the HIS before receiving orders from other external systems, and other workflows may likewise mandate a controlled start of processes and threads.
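As a hedged illustration of what a controlled start might look like (the process and thread names here are hypothetical, the exact hcienginerun/hcicmd arguments vary by Cloverleaf release, and CMD defaults to echo so this sketch is a dry run):

```shell
# Dry-run sketch: bring the ADT process up before the order feeds.
CMD=${CMD:-echo}    # set CMD to "" to actually issue the commands

$CMD hcienginerun -p adt_process                 # hypothetical ADT process
$CMD hcicmd -p adt_process -c "adt_in pstart"    # hypothetical thread start
# ...verify ADT is flowing before continuing...
$CMD hcienginerun -p orders_process              # then the order interfaces
```

The point is only the ordering: each dependent site or process starts after the feed it relies on is confirmed up, rather than everything at once.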