› Clovertech Forums › Read Only Archives › Cloverleaf › Cloverleaf › How do I test tps code?
I am trying to move code fragments into procedures so that I can reuse them. My problem is that I don’t understand how to test these bits. I had some code that looked up a value in a table and if the value wasn’t found then the message was killed. I have to do this in a number of different HL7 messages so I moved the code into a tps proc.
When testing a translation that uses this new proc, I get errors about not being able to access the MODE variable. It looks like the args parameter is not being set when running the translation in the testing environment, so the message handle, MODE, etc. are not available.
Is my understanding of the problem correct? I’ll create a simple translation that illustrates the problem I’m having and post actual error messages if you think that will help.
How do I write these procedures so that they can be tested?
Thanks for your help,
Dennis
If I understand correctly you are trying to execute tps type Tcl procs in an Xlate.
If that is correct, then you need to know tps procs are by definition not designed to be invoked from inside an Xlate.
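For reference, here is a minimal sketch of the shape the engine expects a tps proc to have (the proc name and lookup logic are illustrative; keylget, msgget, and the disposition-list return are Cloverleaf engine conventions). Since the engine is what populates args with MODE and MSGID, the proc has nothing to work with when invoked anywhere else — which is exactly the symptom you are seeing:

```tcl
# Sketch of a standard tps proc skeleton -- not your exact code.
# keylget and msgget are Cloverleaf engine commands, available only
# when the engine invokes the proc and fills in args.
proc kill_if_unlisted { args } {
    keylget args MODE mode            ;# start | run | time | shutdown

    set dispList {}
    switch -exact -- $mode {
        start    { }                  ;# one-time setup
        run {
            keylget args MSGID mh     ;# handle of the current message
            set msg [msgget $mh]      ;# raw message text
            # ... table lookup here; KILL or CONTINUE accordingly ...
            lappend dispList "CONTINUE $mh"
        }
        time     { }
        shutdown { }
    }
    return $dispList
}
```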
Xltp type procs can be executed from inside an Xlate and require a particular construct in order to converse with the Xlate.
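A minimal sketch of that construct (proc name and transform are illustrative): an xltp proc takes no arguments and instead converses with the Xlate through the upvar'd xlate* variables, reading xlateInVals and setting xlateOutVals.

```tcl
# Sketch of an xltp-style proc, callable from an Xlate CALL action.
# It gets no args keyed list; data flows through upvar'd variables.
proc map_provider_id { } {
    upvar xlateId       xlateId       \
          xlateInList   xlateInList   \
          xlateInTypes  xlateInTypes  \
          xlateInVals   xlateInVals   \
          xlateOutList  xlateOutList  \
          xlateOutTypes xlateOutTypes \
          xlateOutVals  xlateOutVals

    set inVal [lindex $xlateInVals 0]    ;# value from the Xlate input
    # ... transform inVal, e.g. a table lookup ...
    set xlateOutVals [list $inVal]       ;# hand the result back to the Xlate
}
```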
As an aside, I will recommend you do message filtering (killing messages) before the Xlate. For maximum flexibility I like to do my filtering in message routing.
Jim Kosloskey
email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.
I’ll echo Jim’s suggestion that you kill unwanted messages before the xlate. It’s really quite easy in a simple tcl proc. Much easier than killing during translation. The only downside is your tcl code has to split the message to find the right segment(s) and field(s).
We receive HL7 from a large number of client hospitals. Many of them send spurious message types that we don’t want to process. So a colleague and I created a tcl message filter. Not quite the same thing you need to do, but pretty close.
I call the message filter proc in the TPS Inbound Data box on the Inbound tab on the Thread Configuration.
If you want the code, let me know and I’ll post or email it to you.
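Not our exact code, but the idea looks roughly like this, assuming HL7 with | field separators and an illustrative keep-list of message types (msgget and keylget are Cloverleaf engine commands):

```tcl
# Sketch of an inbound TPS message-type filter.
proc filter_msg_types { args } {
    keylget args MODE mode
    set dispList {}
    if { $mode eq "run" } {
        keylget args MSGID mh
        set msg  [msgget $mh]
        set segs [split $msg \r]                ;# HL7 segments
        set msh  [lindex $segs 0]
        set type [lindex [split $msh |] 8]      ;# MSH-9, e.g. ADT^A01
        set keep {ADT^A01 ADT^A03 ORU^R01}      ;# illustrative keep-list
        if { [lsearch -exact $keep $type] >= 0 } {
            lappend dispList "CONTINUE $mh"
        } else {
            lappend dispList "KILL $mh"
        }
    }
    return $dispList
}
```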
Steve Robertson
TeamHealth, Inc.
We have multiple hospitals sending up HL7 messages, these different hospitals use different ID numbers for each provider. Our billing system provides a canonical list of provider IDs and I need to map each of the hospital provider IDs to the canonical provider ID. If there is no mapping from the hospital provider ID to the canonical provider ID then I need to notify operations to update a lookup table and save the message so that it can be resent once the table has been updated.
These provider IDs are sprinkled throughout the different message types in a bunch of different segments. I need to perform the mapping on every provider ID so trying to do this in one grand filter on the thread seems like it would be hard to manage.
If I do the lookup of the provider ID at the TPS Inbound Data point, can I edit the message data at that point? Since I’m looking up the provider ID anyway, I might as well edit the message at the same time. The other option is to look up the provider ID in the translation; I guess at that point my worries are over, since I would know for a fact that the value I want exists in the lookup table.
On another note, in the XLTP type of tclproc it looks like I don’t have access to the message handle of the message currently being processed by the $xlateId. It doesn’t seem that $env exists, either.
Is this correct? Or are there other ways to access this information?
Thanks again.
So we have multiple inbound threads, each using overlapping hospital identifiers and a single outbound thread that writes to a database.
So what we do is have a translation (or set of translations) per inbound thread (hospital group). Each inbound thread has a Cloverleaf lookup table. Each translation does a table lookup on its table to translate the facility ID assigned by the hospital group to the enterprise-wide ID that we want.
It’s very simple. I suspect you will be able to do the same thing for your provider IDs. You can even assign a default value in a lookup table such that if the input value isn’t found, the output value will be the default. You could assign a default value that indicates that manual intervention is needed.
While you certainly can change message data in a TPS inbound tcl proc, it’s easier to do in the translation, provided all you need to do is a Cloverleaf table lookup. We typically only change inbound (pre-translation) message data in tcl when we need to remove or replace spurious characters that baffle the Cloverleaf xlate thread.
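Inside the translation, that lookup can be a one-liner. A sketch, assuming an xltp-style proc and an illustrative table name of provider_ids (tbllookup is the Cloverleaf table-lookup command; if the input value isn’t in the table, it returns the table’s configured default):

```tcl
# Inside an xltp proc with xlateInVals/xlateOutVals upvar'd:
# map the inbound provider ID through the Cloverleaf table.
set xlateOutVals [list [tbllookup provider_ids [lindex $xlateInVals 0]]]
```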
Steve Robertson
TeamHealth, Inc.
When dealing with dynamic lookups like doctor IDs, I do not agree that Cloverleaf tables are your best bet. These tables are designed more for static data. And I wholeheartedly agree with the others that, if possible, you *ALWAYS* KILL or ERROR messages prior to entering the Xlate engine!
Here is what I do.
I ask that these tables be maintained by a clerk or whoever in an Excel spreadsheet. Whenever the table changes – which can be often – save a copy in CSV format to a known file name on the Cloverleaf box.
Then, in my Tcl procedure, I read this CSV file into a structure (I prefer an array) at script startup and save the mtime of the file in a global or namespace variable. Then, for each message, I simply stat the file. If the file has changed, I re-read it; otherwise I use the existing table in memory. I can also add code to error the message if I cannot translate it, and send e-mail if required.
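The mtime-caching idea can be sketched in plain Tcl like this (names are illustrative; it assumes a simple two-column "key,value" CSV with no embedded commas or quotes — for anything fancier, use the tcllib csv package):

```tcl
# Sketch: CSV lookup table cached in memory, re-read only when the
# file's mtime changes.
namespace eval provmap {
    variable table
    variable mtime -1

    proc load {path} {
        variable table
        variable mtime
        set m [file mtime $path]
        if { $m == $mtime } { return }       ;# file unchanged; keep cache
        array unset table
        set fh [open $path r]
        while { [gets $fh line] >= 0 } {
            if { $line eq "" } { continue }
            lassign [split $line ,] key val  ;# naive CSV: no quoting
            set table($key) $val
        }
        close $fh
        set mtime $m
    }

    proc lookup {path key} {
        variable table
        load $path                           ;# re-reads only on mtime change
        if { [info exists table($key)] } { return $table($key) }
        return ""                            ;# not found: caller errors/notifies
    }
}
```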
FWIW, I have several implementations like that out there and running now. So far, none of the software has exceeded its shelf life 😛
BTW, there is an excellent package for parsing CSV in tcllib. You can download the latest version of tcllib (tcllib1.9 is the latest) at
We used a GDBM-based setup at one time that worked quite well. In it we used a web CGI interface to update the GDBM “database” (using simple HTML tables and forms) and a little retry loop in the tcl proc to work around open-for-write collisions when updates were being written while an open-for-read was executed. We then let the “users” do the table maintenance via the web page, and as soon as they hit submit the change was there for the next read.
With that said, when dealing with a lookup table, I prefer to have it reside in memory. That used to be a problem, but on modern computers I never run into an issue with it.
There are as many methods of tackling any problem as there are people doing it. Most of them are good. Just because we do it different does not make one right and one wrong. But, on the other hand, just because it works does not make it right!
There are only two people I know of that do it correctly, you and me. And, sometimes I wonder about you! 😆
This (using sqlite and the Tcl package) sounds like a great topic to present as a webinar or at the User Group conference.
My $.02 worth.
Jim Cobane
Henry Ford Health
Just wanted to get back to you and say thanks for all of the suggestions. I really appreciate the help.
–dennis
I am planning a small tcl proc that will pull the doctor numbers from the various HL7 messages and look in a SQLite table to decide whether to kill the message or continue it on to the EMR.
1) This Tcl will be called from multiple threads, processes and sites. Will I be affected by locking if I use a single file? The calls will all be reads.
2) The table would be rebuilt daily from a system generated flat file, should I create multiple copies, one per site or thread?
3) Will I have to shutdown all threads using the script when I update the table?
4) There will literally be tens of thousands of messages checked against this filter daily. What about the speed and reliability of SQLite in your experience?
I am brainstorming the idea, so suggestions or comments are welcome. Thanks.
In my usage sqlite is pretty fast. I haven’t used it in an environment like you intend to, though. My suggestion would be to experiment with it.
I do know that accessing it from several sources will not be a problem unless some of the sources are writing to the database. In that case it will lock the database while it is updating (SQLite locks at the file level, not per record). If you attempt to access it during this time you will get an error that it is busy.
What I do is add logic to my code to look for the busy error and, if seen, wait for a second and try again. I retry 3 or 4 times and if still unsuccessful, bail out with an error message.
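That retry logic can be sketched with the sqlite3 Tcl interface like this (proc, database, and table names are illustrative; the sqlite3 package and its eval method are the standard Tcl SQLite API):

```tcl
package require sqlite3

# Sketch: run a query, retrying a few times if the database is busy.
proc query_with_retry { db sql {maxTries 4} } {
    for { set try 1 } { $try <= $maxTries } { incr try } {
        if { ![catch { $db eval $sql } result] } {
            return $result                  ;# success
        }
        if { [string match "*locked*" $result] ||
             [string match "*busy*"   $result] } {
            after 1000                      ;# wait a second, then retry
            continue
        }
        error $result                       ;# some other error: rethrow
    }
    error "database still busy after $maxTries tries"
}

# Usage sketch (illustrative names):
# sqlite3 provdb /path/to/providers.db
# set row [query_with_retry provdb \
#     {SELECT canonical FROM providers WHERE hosp_id = '123'}]
```

(SQLite’s Tcl interface also has a built-in `$db timeout ms` that waits on busy locks for you, which may be simpler than hand-rolling the loop.)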
The sorts of problems you will encounter with sqlite are the same sorts of problems you would encounter with any database.
I like the idea of reading the file into memory at startup. That way I don’t have the overhead of opening and closing the database for each message.
I also like the idea of using the modification date of the file. Do you think it could be used to avoid starting and stopping the threads, when updating the database?
Scenario: I overwrite/update the database file and the mod date changes; all threads would check the mod date of the file, notice it is different from the date stored in the global when the thread started, and reload the table.
I wonder if five threads tried to reload the table at the same time if there would be an issue.
I will have a lot of testing ahead, I appreciate any and all of the suggestions you have made so far. Thanks again for your help.
That’s why you get the big bucks 😀 If it were that easy, we wouldn’t need you