Forum Replies Created
March 25, 2026 at 11:04 am in reply to: Delivery to xlate thread dohbeds_out_xlate failed. Requeuing msg. iclErr=3 #122416
yes – we ran our normal recycle at 00:53 and had 48 normal startup messages in the log, ending with this one:
[prod:prod:INFO/0:dohbeds_out_cmd:03/23/2026 00:53:27] Log History feature is enabled.
Then we started to see the icl messages:
[xlt :thre:ERR /0:dohbeds_out_xlate:03/23/2026 00:53:27] XLATE ICL server open failed, iclErr=1
[pd :pdtd:ERR /0:fr_dohbeds_db2:03/23/2026 00:53:30] Sending ICL cmd to 'dohbeds_out_xlate' failed, iclErr=3
There were no more of these messages and there were a few more normal process startup messages, ending with this:
[cmd :cmd :INFO/0:dohbeds_out_cmd:03/23/2026 00:53:43] Command client went away. Closing connection.
At 2:16:18 there was an application msg processed. This would have been the first application activity since startup, and it was expected. There was debugging output (sorry for the length):
[tcl :out :INFO/0:fr_dohbeds_db2:03/23/2026 02:16:18] tpsPrintMsg: 2026-03-23 02:05:04.0,2026-03-23 02:16:04.607,,CRH,ready,ip,9C902EC5-7F26-F111-8882-005056957E5C,Yes,12,0,0,30,26.90,56,51,1.45,5,21,26,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,267,295,62,67,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,210,235,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,36,41,0,0,0,0,46,51,16,21,0,0,0,0,61,0,1.45,12,26.90,51,312
[tcl :out :INFO/0:fr_dohbeds_db2:03/23/2026 02:16:18] DOH_BedAvail_sending_Update updated guid: 9C902EC5-7F26-F111-8882-005056957E5C status: sending result: 0
Right after this, we got the “delivery failed” message:
[pd :pdtd:ERR /0:fr_dohbeds_db2:03/23/2026 02:16:18] [0.0.7755725] Delivery to xlate thread dohbeds_out_xlate failed. Requeuing msg. iclErr=3
At this point we started getting approximately 500 of these per second.
There were two more sets of application messages, just a few minutes apart, as expected. Error messages were interspersed between the application messages.
Then the error messages continued until 04:13. It seems like we were getting around 3,100 error messages per second by that point.
At 04:13 the last line was truncated, so I think logging stopped there – space, memory, whatever – it just stopped: no crash, no panic, just stopped.
I restarted around 12:20 and all was fine, all application messages that attempted to go, did go through, successfully.
I have disabled the debugging code, in case that contributed to the issue, but it's been running without changes since September.
I do have ksh shell scripts that monitor some process logs, looking for Java errors, so I could look for this also and raise an alert or maybe just recycle the process.
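Something like this minimal sketch could do it – the grep pattern comes from the errors above, but the log path, the threshold, and any alert/recycle action are placeholders, not our production code:

```shell
#!/usr/bin/ksh
# Sketch only: count "Requeuing msg" errors in a process log and flag when
# an arbitrary threshold is exceeded. A cron-driven caller could page
# someone, or recycle the process, when this returns non-zero.
check_icl_errors() {
    logfile="$1"
    threshold="$2"
    count=$(grep -c 'Requeuing msg\. iclErr=3' "$logfile")
    if [ "$count" -ge "$threshold" ]; then
        echo "ALERT $count"
        return 1            # caller decides: alert, recycle, or both
    fi
    echo "OK $count"
    return 0
}
```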
Peter
Peter Heggie
PeterHeggie@crouse.org
March 24, 2026 at 1:58 pm in reply to: Delivery to xlate thread dohbeds_out_xlate failed. Requeuing msg. iclErr=3 #122413
we run shell scripts on AIX that recycle all sites, all processes.
the last successful message transmission was around 00:18 and the process was recycled at 00:55. And about a minute after that the log started filling up with those entries. Will have to see if the entries continued to post or stopped at some point.
Peter Heggie
PeterHeggie@crouse.org
March 24, 2026 at 9:14 am in reply to: Delivery to xlate thread dohbeds_out_xlate failed. Requeuing msg. iclErr=3 #122411
thank you – yes, we recycle the processes every night. Also I was able to tail and head the process log, and I found those messages started showing up fairly close to the top of the log. I have debug statements in procs at various UPOCs, and I did see some of these write to the log before the error messages started. I looked at the debug output and didn't see any obvious problems like non-printable characters. There are only three messages sent every other hour, each around 6,500 bytes, so not a lot of volume. Copies are written to the log. They contain a lot of square brackets and curly braces, but that shouldn't be a problem. It's been running like this since last September; we never saw this error before. And yes, we are on AIX as well.
Peter
Peter Heggie
PeterHeggie@crouse.org
Is there any information in the process log that shows the error? Can you share that information?
Peter Heggie
PeterHeggie@crouse.org
I always notice your car!
Very cool. Not too many mid-engine cars with that big of an engine.
I had an X 1/9.. well, three of them. I got mine up to 85 going downhill…
Congratulations!
Peter Heggie
PeterHeggie@crouse.org
February 23, 2026 at 12:04 pm in reply to: Use database connection defined in site preferences #122302
We use Cloverleaf tables, which are database tables, from within shell scripts. That being said, we still need Cloverleaf. The Cloverleaf tables have to live somewhere, and maybe someone has figured out how to use a Cloverleaf table whose definition is not stored in Cloverleaf, but we have not.
Sometimes we use our Master site to hold Cloverleaf database tables, just because the tables are not related to any particular interface or business process that is associated with a Cloverleaf site. I believe the Network Monitor has to be bounced once after creating the table in the site. But we also store such Cloverleaf DB tables in an “application” site, the example below uses site “chargeprd”.
Here is an example. We have a shell script that functions as the job (parent level processing), it calls a utility script (Generic_Charges_Report_New.tcl) which invokes the DB table:
dbl_prGenericCharges_Report
which is an Advanced Database Lookup stored procedure:
{?=CALL prGenericCharges_Report( <@StartDateTimeString>, <@EndDateTimeString>, <@Source>, @Count1V OUT, @Count2V OUT, @RCode OUT, @RDesc OUT )};
IN: @StartDateTimeString,@EndDateTimeString,@Source
OUT: _CLRC_,RS_tstamp,RS_mrn,RS_ecd,RS_c_svccode,RS_c_svcdate,RS_prlocation,OUT_@Count1V,OUT_@Count2V,OUT_@RCode,OUT_@RDesc
Notice that the output has both record set (RS) data and stored procedure output (OUT) variables. I don't know if you have had both RS and OUT data on the same stored procedure, but you will notice that the result data has multiple rows, and each row has multiple data items (the RS items). Tacked on to the end of each row are the output items (OUT), and those values will be the same on every row (so they are kind of like duplicate data).
with this code:
set result [dblookup -maxrow 999999 dbl_prGenericCharges_Report $startdatetime $enddatetime $datasource]
if {$debug} {echo "[gts] $result RESULTS"}
if {$debug} {debugw "[gts] $result RESULTS"}
set rows [split "$result" "\n\r"]
set numrows [llength $rows]
if {$debug} {echo "[gts] $module results had $numrows rows"}
if {$debug} {debugw "[gts] $module results had $numrows rows"}
fyi – 'debugw' writes the information to a file, while 'echo' writes to the parent shell script. Don't forget to set -maxrow to a high number, to get all your output rows.
The utility script goes on to process each row in a loop. fyi – the utility script is TCL ; the parent script is a UNIX KSH shell script
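As an illustration only (in plain shell rather than the actual Tcl), here is how that rows-plus-duplicated-OUT-columns shape might be handled – the pipe delimiter and the column values are invented for this mock-up; the real dblookup result is split on newlines as in the Tcl above:

```shell
# Mocked-up result: three RS columns per row, plus the same two OUT
# columns (@RCode, @RDesc style) repeated at the end of every row.
result='2026-02-01|12345|100.00|0|OK
2026-02-02|12346|250.00|0|OK'

# The OUT values are identical on every row, so read them once from row 1...
out_vals=$(printf '%s\n' "$result" | head -1 | cut -d'|' -f4-5)

# ...then loop over just the RS columns of each row.
printf '%s\n' "$result" | cut -d'|' -f1-3 | while IFS='|' read -r dt mrn amt; do
    echo "RS row: $dt $mrn $amt"
done
echo "OUT (same on every row): $out_vals"
```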
This is from the parent ‘job’ shell script. Variable clsfx is the environment suffix, tst or prd.
prc=0
echo "omtstart `date +%Y%m%d%H%M%S`"
# set site charge – to get to table dbl_prGenericCharges_Report
setsite charge${clsfx}; clCheckSite "$prc" "setsite" "charge${clsfx}" ; prc=$?
# run query SQL – this produces output to a file
Generic_Charges_Report_New.tcl "$startdatetime" "$enddatetime" "$email_fr" "$email_to" "$email_su" "$runtime" "$datasource" ; prc=$?
# set audit return code for the entire job and echo a timestamp to the log
omtend
When you run TCL in a batch script you need something like this at the top:
#! /usr/bin/ksh
# The following line is seen as a comment by Tcl \
exec $QUOVADX_INSTALL_DIR/integrator/bin/hcitcl "$0" ${1+"$@"}
At the bottom of the TCL script, after all the subroutines, is the main TCL script code; here is a snippet:
# main routine
set startdatetime [lindex $argv 0 ]
set enddatetime [lindex $argv 1 ]
set email_fr [lindex $argv 2 ]
set email_to [lindex $argv 3 ]
set email_su [lindex $argv 4 ]
set runtime [lindex $argv 5 ]
set datasource [lindex $argv 6]
When you are building and testing, it is very helpful to use this utility on the command line: hcitcl
It creates another shell / command line, where you can set variables and then invoke your dblookup stored procedure call. You will get the results back to your screen (or into a variable). This is really helpful for making sure the database part of your solution is working before running it from a bigger script. Don't forget to do a setsite first.
hope this helps
Peter
Peter Heggie
PeterHeggie@crouse.org
I believe this falls under the “fun with indexes” category. You will need to maintain separate indexes/variables for the NTE segments vs the OBX segments. As you mentioned, you may want to “insert” a new OBX segment at the top. You will have math functions to increment counters that are used as the OBX indexes. This means that your OBX index will end up being greater than your NTE index.
Example – if you know there will be five patient identification OBX segments, write those out first, incrementing the index for each OBX, from 0 through 4. Then when you get into your NTE-based loop, use that same OBX index, currently valued at 4, and add 1 to it for each NTE segment that you are copying over to an OBX.
This will take some time to work out, but everything is addressable.
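A toy sketch of the counter bookkeeping (in shell just for illustration – the real work would be in your Tcl/Xlate): five fixed OBX segments first, then one new OBX per incoming NTE, continuing the same counter so the OBX index ends up ahead of the NTE index.

```shell
# Indexes are zero-based, as in the example above.
obx_idx=-1
segments=""

# Five patient-identification OBX segments, indexes 0 through 4.
for i in 0 1 2 3 4; do
    obx_idx=$((obx_idx + 1))
    segments="$segments OBX[$obx_idx]"
done

# Copy three NTE segments over to OBX, incrementing the same OBX counter.
nte_idx=-1
for nte in "note one" "note two" "note three"; do
    nte_idx=$((nte_idx + 1))
    obx_idx=$((obx_idx + 1))
    segments="$segments OBX[$obx_idx]"
done

echo "last OBX index: $obx_idx, last NTE index: $nte_idx"
```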
Peter Heggie
PeterHeggie@crouse.org
To start Cloverleaf, start the lock manager and monitor first, then start the processes. I think it is desirable to have the lock manager running before messages are processed through threads.
It is interesting that our HACMP environment, configured by Infor, has a shell script that determines all the processes in a site and does a kill -9 on the PIDs, then a kill -9 on the lock manager and the monitor. And it does that for all sites. Then it kills the host server. So nine sites with a total of 400 threads will end in about eight seconds. We never lost a message, never had a problem. We use Recovery Databases for everything.
Peter Heggie
PeterHeggie@crouse.org
This is how we use LIKE – if that is what you are looking for?
select status from dbo.prl_charges WITH (NOLOCK) where keyvalue like '%' + <keyvalue> + '%'
in_column_name=keyvalue
out_column_name=status
Peter Heggie
PeterHeggie@crouse.org
We had bad (character) actors… but from one specific source. A doctor cut and pasted dictation/consult notes from Word into a text reader that sends them as an ORU into Cloverleaf. This included some strange formatting characters that did not convert to ASCII. We tried education, to no effect. We ended up creating a TCL proc that performed string maps. Every few weeks we added more string-mapping From and To characters. That was almost two years ago; we still get that input, but the TCL catches it and fixes it.
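For illustration, the same idea in shell – the production fix is a Tcl proc using [string map], and the character list below is a small sample I made up, not our real mapping table that grew over time:

```shell
# Map a few common Word "smart" characters to plain ASCII equivalents.
# Each sed expression is one From/To pair, like a [string map] entry.
clean_text() {
    printf '%s' "$1" | sed -e "s/’/'/g" -e "s/‘/'/g" \
                           -e 's/“/"/g' -e 's/”/"/g' \
                           -e 's/—/-/g' -e 's/–/-/g'
}
```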
Peter Heggie
PeterHeggie@crouse.org
Can EPIC take in lab results in an older version of HL7? Our current lab vendor sends us results in v2.3. So not all data is discrete and not all meta-data is discrete. Does that mean documents like CCDAs, sent through Care Everywhere, could not be ingested in other EMRs, because some of the data is textual, not discrete?
Peter Heggie
PeterHeggie@crouse.org
This is great detail – thank you. Our current financials/ADT interface from our EMR is 2.7, so I think we are good there. But our clinical interfaces – orders, results – are 2.4, so we may have a lot of work to do there.
Is a TS person an EPIC employee? I’m wondering where the line is, between what we would do and what they would do.
Right now, with Cerner/Oracle, we have Cloverleaf connecting to Openlink. We don't do any programming in Openlink. On rare occasions, maybe five or six times in the last eight years, we have had Cerner make changes to Openlink. But 99.9% of the time, we do everything in Cloverleaf when it comes to translation and transformation of interface data to and from ancillary systems. As far as doing everything in Cloverleaf, does that remain the same? And is there a potential for fewer interfaces if some of our modalities are part of the EPIC component set?
The ACK work sounds interesting; we only do the immediate, “message received” ACK for the most part, except for a state registry interface, which sends us application ACKs.
Peter
Peter Heggie
PeterHeggie@crouse.org
December 26, 2024 at 8:31 am in reply to: needing pdl for Experian eligibility epic query interface going thru cloverleaf #121721
adding email
Peter Heggie
PeterHeggie@crouse.org
adding email
Peter Heggie
PeterHeggie@crouse.orgDecember 23, 2024 at 11:40 am in reply to: needing pdl for Experian eligibility epic query interface going thru cloverleaf #121706Hi Nancy,
Sorry to hijack this thread, but I couldn't help noticing that the eligibility verification functionality above is something we are also looking at implementing. We looked at an Experian package called eCare Next Base Platform, and with it, Premium Eligibility Services. We also use HDX, and the above package included costs for HDX configuration.
But what you are describing sounds more automated, and faster, than what we are looking at. I’m just wondering if this is actually a different service than the Experian eCare. It sounds like you have a direct tcp/ip connection. I assume you have a VPN?
And for anyone else (!), is there a similar service using web services or FHIR? The tcp/ip flavor seems easier?
Peter
Peter Heggie
PeterHeggie@crouse.org