Forum Replies Created
A lot of this is going to depend on how you’re pulling the flat file in. Are you reading the file line by line or the entire file at once? If you’re reading by line (and not to EOF), you should be able to have a simple TPS (or do it in your xlate) that looks at the first element and either uses xlateStrTrimLeft 0 (you only need one) or the full Tcl: [string trimleft $var 0]
If you’re pulling in the entire file, you can do this in a TPS before you xlate (if it’s only at the beginning of a line):
# grab the message body ($mh is the message handle passed to the TPS)
set msg [msgget $mh]
# split into lines, trim leading zeros from each, and rejoin
set messageList [split $msg \n]
foreach line $messageList {
    lappend newmsg [string trimleft $line 0]
}
set newmsg [join $newmsg \n]
msgset $mh $newmsg
Considering you have the newline last, you shouldn’t need to split on both \r and \n; just split and join on \n, and that should preserve it all.
This also assumes you’re using the TPS template from within the GUI to build the TPS with the correct arguments. You would put the script in the prexlate tps script box. Just insert that code in the ‘run’ portion of the switch statement and you should be good to go.
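For what it’s worth, if the flat file ever gets preprocessed outside the engine, the same leading-zero trim can be sketched in plain shell. This is just my sketch, not part of the Tcl approach above, and the pipe-delimited sample lines are invented:

```shell
# Strip leading zeros from the start of each line of a flat file.
# Sample data is made up for illustration.
printf '0012|A\n0007|B\n1200|C\n' | sed 's/^0*//'
# prints:
# 12|A
# 7|B
# 1200|C
```

In a real feed you’d redirect the file through the pipeline instead of `printf`; note `^0*` only trims zeros anchored at the start of the line, so embedded zeros are untouched.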
I don’t think anyone comes over here often; they stay in the main Cloverleaf forums. But the answer to your question is yes. Under Administrator (or a user who has the rights), you can create a view that only has specific sites and assign it to specific groups/persons. You’ll also need to have at least basic security set up to do this; I don’t think it can be done without basic security.
In our instance, you can click on the list view (where you can control the threads/processes), and if you’re not in command view, click on the (usually green) button that shows the status, and you can “view process logs”. If you’re in command view, you can click on the last icon to view the logs. I could find the commands in the dashboard view, but not a way to view the logs.
I think the next step is to track it down to a specific process causing your issue. You’d have to use top or another utility to grab the one that is getting heavy on the RAM and then trace to the specific process to see if you can find out /why/ it’s getting big.
Here is one I put together in KSH before we really got to rolling in Cloverleaf (I’d probably do it in TCL now):
A few notes: the comment says ‘eGate to cloverleaf’, but the input is just a basic two-item CSV file. The default entry was %default%,<value>, so we would grab and set that as necessary. The script also creates the output name as a copy with .tbl as the extension, regardless of the previous extension. I wrote this rather quickly with limited comments, as it was initially a throwaway script for converting tables in bulk. I need to update it, since we now use it for CSVs when updating larger tables. It doesn’t take long even on tables with a few thousand lines, though Tcl would probably parse it faster. Note this is on RHEL and I can’t guarantee POSIX compliance on the script, though I don’t think there’s much in it to be non-compliant.
#!/bin/ksh
# Table conversion from eGate to cloverleaf.
tableSet=""
clTable=""
clTableName=""
egTableName=""
defVal=""
default=""
userName="$(who am i | awk -F' ' '{print $1}')"
datetime="$(date +'%B %d, %Y %r EDT')"
prologue="# Translation lookup table\n\
#\n \
prologue\n \
who: ${userName}\n \
date: ${datetime}\n \
outname: output\n \
inname: input\n \
bidir: 0\n \
type: tbl\n \
version: 6.0\n \
end_prologue"

if [[ ! -f $1 ]]; then
    echo "$1 is not a valid file"
else
    egTableName=$1
    # copy the input name, swapping whatever extension it has for .tbl
    clTableName=$(echo $egTableName | sed -E 's/(.*\.)(.*)/\1tbl/')
    if grep -q "%default%" $egTableName; then
        defVal=$(grep "%default%" $egTableName | awk -F',' '{print $2}')
    fi
    default="#\ndflt_passthrough=0\ndflt=${defVal}\ndflt_encoded=false"
    clTable="${prologue}\n${default}"
    while read line; do
        input="$(echo ${line} | cut -d',' -f1)"
        output="$(echo ${line} | cut -d',' -f2)"
        firstChar="$(echo ${line} | cut -b 1)"
        # skip comment lines and the %default% entry
        if [[ ( "${firstChar}" != "#" ) && ( "${firstChar}" != "%" ) ]]; then
            clTable="${clTable}\n#\n${input}\n${output}\nencoded=0,0"
        fi
    done < "${egTableName}"
    printf "${clTable}" > "${clTableName}"
fi
This has been my biggest frustration with Cloverleaf (and by extension Tcl): their documentation is sparse and lacking at best. It’s one of those things where they should really hire a team to overhaul their specs, but they likely won’t. It’s there for the most part but incredibly minimalistic. There are rarely any examples, and, much like the Tcl documentation, it goes from zero to incredibly complex with no in-between, so a lot of the basic examples are missing.
Secondly, you’re going to have two ways to access the documentation (both, ironically, how Jim shows above). It’s either going to be on-premises (installed by default on 19.1 and previous) or online/web/cloud based (however you want to think about it), where the documentation is stored behind the Infor portal.
If you install the documentation on-site, you never get updates. If you use the cloud-based, you (currently) have to log in every time to get the documentation. Either way, it’s still not great.
If you have an infor account, you can access the support documents here: https://docs.infor.com/en-us/clis/2025.x
That has a drop-down that gives you the last three versions (20.x, 2022.x, 2025.x). You’ll pick which version and which hosting model you support (on-prem/non-Infor cloud, or Infor cloud-based hosting). If you’re on a version prior to those, you will access it how Jim showed above, and it will pull the locally stored documentation (which always gave me fits, but that’s a different story).
The Clovertech forums are client-driven forums for questions and answers. I don’t see Infor personnel participating here often, if at all.
Thanks for that. That helps a lot. That’s definitely not a communication method I’ve seen, nor is it very common anymore. I almost thought it was DICOM, but that’s not the case here. I’ll leave it to the more experienced (Jim) and hope we never have to cross that bridge, as it looks convoluted and written at levels we shouldn’t have to be thinking about anymore (looking at their split between Low Level and High Level Protocol). I’ll follow this thread. Good luck!
I’m super curious about this now, as it seems they’re mixing terminology a bit. Things like ‘frames’ are a TCP term and are transport layer items that we don’t particularly care about. I’m also curious about the client/host configuration, a lot of things just seem ‘off’ with the way they’re being described. I can’t immediately pull specs since we don’t use the system.
What data type is this (HL7, FHIR, etc.)? And what are ENQ, STX (start of text?), and ETB (end of transmission block?)? Those look like ASCII control characters. Are they trying to run at the transport level with TCP/IP, or at the application level, in which case they shouldn’t be worrying about TCP/IP at all?
Secondly, unless you have data waiting, ImmuLink can’t be the initiator to ‘get data’. If there is a result, they should just send the result, you send an acknowledgement (standard) and send the result to the LIS. If they are expecting data (getting orders, etc) they need to be the server to be available to get a message when it gets sent.
I may be missing specific experience here, but the request isn’t making a lot of sense.
What is the high level data flow? Is there an order or a message TO them to get some sort of message back (Order and Result), or are they sending a message and expecting a status (Message and ACK)?
The CLV > ACK sounds suspiciously like TCP/IP work that should already be standardized and, in this day and age, shouldn’t be modified. This is why I’m a bit confused as to what their actual request is.
The xlateInVals is going to be a list of the inputs. Each line is a separate list item. Since it’s one line, you’ll need to split it. If you’re confident it will always be two sets of strings with no additional spaces you can make that assumption, however, if they don’t do that (for whatever reason) it could cause problems. The simplest pre-proc is the following:
set xlateOutVals [split [lindex $xlateInVals 0] { }]
Then set your destination as you have above. lindex will remove the {}’s around your input, split will break it into a list on spaces, and setting it to xlateOutVals will send it out.
The braces appear because the engine wraps the input in {} when it contains a space, to show the pieces belong together. This creates a lot of potential issues, and the one-liner assumes no special characters (braces, quotes, protected items like \$, etc.). There are ways to get around all of that, but if you trust your data, that one line will do what you need. Additional embedded spaces will throw it off.
For the other data, you should really have a descriptor in OBX-3 you can use to only do this to the specific data you want. OBX-3 should have something like GESTATIONAL_AGE, BIRTH_WEIGHT or some other moniker that you can use to make sure you’re only doing this to the birth weight. If they don’t that really complicates things and makes it a lot harder to do what you need to do (but it can be done).
I’m seeing your reply, and quite frankly, that’s not how HL7 works as a whole. You don’t ‘exchange’ orders and results. From a very high level standpoint, the ordering system sends an order. This is a Dr, Tech, someone saying “I have this specimen, and I need someone to look at it and give me their findings”. This is sent to the lab system. This populates their system with basic patient data and specimen data. The specimen is then looked at by someone, and they record their findings. At this point, the lab system then sends a result. What they are saying sounds like a ‘synchronous’ feed where you send a message then wait for them to send a reply. This isn’t how Orders and Results work. This works in other situations (Pumps, certain queries, etc), but not here. You send them a message, then they send you a message.
From a more technical standpoint, HL7 has an “acknowledgement” system, much like TCP/IP but at a higher layer of the OSI model. System A sends System B a message. System B then responds with an ACK (acknowledgement) or NACK (non-acknowledgement) to say it got the message. In most situations (probably 90%+) the sending system will not send another message until it gets some form of response (whether ACK or NACK). Some systems will resend over and over, some will error out (the Cloverleaf default), and some will shut down. This prevents messages from going out of order.
In the rare situations where you get a ‘response’, it isn’t results; it’s a different type of ACK, where you forward an immediate response back to the sending system. That’s a messy rabbit hole I had to go down to get some infusion pumps working correctly.
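To make the ACK mechanics concrete, here is a rough shell sketch of what a minimal application-level acknowledgement looks like on the wire. The application/facility names, timestamp, and control ID are all invented for illustration, and a real engine (Cloverleaf included) builds this reply for you:

```shell
# Sketch: build a minimal HL7 ACK for a received message.
# MSA|AA = application accept; echo back the sender's message control ID.
# All field values here are invented.
ctl_id="12345"
ack="MSH|^~\\&|RCV_APP|RCV_FAC|SND_APP|SND_FAC|20240101120000||ACK|${ctl_id}|P|2.3
MSA|AA|${ctl_id}"
printf '%s\n' "$ack"
```

The key point is the MSA segment: MSA-1 carries AA (accept) or AE/AR (a NACK), and MSA-2 echoes the control ID from MSH-10 of the inbound message so the sender can match the response to what it sent.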
I would say if they were sending an immediate response to your order it could possibly work, however what they’re suggesting isn’t feasible. I just came off of eGate — It would be a hard “no” simply because the engine couldn’t do it.
I’m curious as to who this lab system is, and if they’re new to the game. None of the lab systems I’ve worked with would ever request this (Mako, LabCorp, Arup, there’s a few others).
I have to agree with David. HL7 in and of itself is not a bidirectional data type. You’re not routing acknowledgements, you’re routing actual messages. This creates a mess when you’re trying to acknowledge the incoming messages and not acknowledge the acknowledgements. It could probably be done the way you want, but you’re talking a complex setup where you have to investigate each message to see if it’s an acknowledgement or an actual message, and then send (or not send) an ACK back for the non-ACK messages. It makes things incredibly overly complex and difficult to troubleshoot.
You should (as David said) have two sets of interface groups. Orders and results. You send the orders to them, then when they’re done, they send results to wherever it needs to go on a different set of interfaces.
To be fair, their documentation matches TCL’s documentation which is not very good either. A good technical writer would help tremendously.
Documentation is under Services User > Reference Guide > Engine NetConfig interface extension or you can search for NCI. There is another recent thread about this as well. This is the only documentation available, and I think it became available in 19 or 2022. It is still pretty barebones.
This is definitely a work in progress; it functions as it sits (for us at least). I didn’t use the keylget pairs at the time, and I’m not sure if I will, considering how odd some of the setup is. It distinguishes between TCP/IP and file-based (FTP, fileset-local, etc.) threads. I’m working on adding a switch that adds a ‘connected’ column, where it will parse netstat data to show whether a thread is connected.
A lot of the script relies on our naming schemes to function properly, and I haven’t pushed some of the script into procs yet. As I said, work in progress.
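As a rough illustration of that ‘connected’ column idea, the netstat parsing could be sketched like this. The sample netstat-style lines and the port number are invented; the real script would read live netstat output instead of a canned string:

```shell
# Sketch: given netstat-style output, report whether a thread's listener
# port has an ESTABLISHED connection. Sample data is invented.
sample='tcp  0  0  10.0.0.1:14000  10.0.0.2:51234  ESTABLISHED
tcp  0  0  10.0.0.1:14001  0.0.0.0:*       LISTEN'
port=14000
if printf '%s\n' "$sample" | grep -q ":${port} .*ESTABLISHED"; then
  echo "port ${port}: connected"
else
  echo "port ${port}: not connected"
fi
# prints: port 14000: connected
```

The space after `:${port}` in the pattern keeps port 14000 from matching 140001, which is the kind of false positive that bites when you grep netstat output naively.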
Attachments:
Documentation is under Services User > Reference Guide > Engine NetConfig interface extension, or you can search for NCI. I’ve been working on a listing script, since it only parses partial data, with searching functionality so I can narrow down to specific threads within a site. It’s a bit more tailored to us specifically, but it is useful. The netconfig script doesn’t get super granular, however; I was actually writing my own script that did what this does in a similar fashion when I stumbled across this.
I’ll drop my script, but there’s some stuff specific to how we operate and how our environment was set up that wouldn’t be applicable to others. Let me grab the file and upload it.