› Clovertech Forums › Read Only Archives › Cloverleaf › Cloverleaf › TCL process Vs XLate process
Jim Kosloskey.
Thanks a lot for the help.
My opinion:
Use the Xlate. Use Tcl to extend Cloverleaf (including Xlate) in order to provide functionality not intrinsically provided by Cloverleaf (including Xlate).
It is also my opinion that the Tcl should be written to be modular and reusable.
Just my opinion.
email: jim.kosloskey@jim-kosloskey.com 30+ years Cloverleaf, 60 years IT – old fart.
The best practice is to use a combination of Xlate and Tcl. The Xlates are used for segment and field mappings and for field transformations with the built-in statements or Tcl fragments. Tcl is used for filters, complex logic, and special hooks such as renaming outbound files.
-- Max Drown (Infor)
I think I have posted this rant several times.
The reason Cloverleaf (actually its predecessor, HCI Link) was developed was to do away with having to write code for translations – to provide an easy method, using a GUI, to make changes to a translation.
If it is all done in Tcl code why do you need Cloverleaf? Tcl is free.
With Tcl code, every time something changes in the translation you have to modify the code and do a lot of regression testing. Plus, if you leave, do others really understand your code? There are many other reasons.
No one is a stronger proponent of Tcl than I am but it has its place. It should complement the Cloverleaf toolbox, not replace it.
Speaking from the viewpoint of someone who has developed a home-grown engine, there’s a lot more to it than just HL7 translations… routing, logging, communications, find/resend… the list is considerable.
The thing is that Cloverleaf is a bucket of tools – nothing more.
My favorite, or “right”, selection from that toolkit may be primarily Tcl – my Swiss Army knife. For others, it may be Xlates.
For most, like Charlie and Max have stated, it’s some combination of the two – a continuum from All Xlates/All The Time, to Nothing But Tcl.
I certainly appreciate the opinions of my experienced peers, but where a particular person or organization chooses to place the pointer on that spectrum is dependent on many localized factors and cannot be dictated by someone from outside.
Charlie is correct that finding people who understand Tcl may be a challenge, but well written, modular Tcl code is understandable and maintainable. Heck, we can design convoluted solutions with Xlates as well, so with whatever tool we choose, coherent design and documentation are paramount.
The bottom line is that straightforward message manipulations suit Xlates well. But, the more aggressive/adventurous we get, the more Tcl solutions are required. There are simply some things that Xlates just can’t do.
So, from my experience, as I develop more sophisticated solutions with Tcl, my Tcl toolkit gets better – and I can do more faster with that toolkit than I can do with Xlates. I certainly do some things with Tcl that could easily be done with Xlates, but that would take more time – something that’s a pretty precious commodity for most of us.
When deciding how to implement an interface, we certainly need to consider future maintainability, but we shouldn’t limit our current solutions based primarily on that – or on whatever the prevailing view of what the Right Way is.
Jeff Dinsmore
Chesapeake Regional Healthcare
Tough question to be honest because it depends.
Xlates are good for some things but not everything. If you are not careful you can put so much into an Xlate that it can confuse how things actually work. Inline Tcl plus call-outs can confuse a new person.
Granted Xlates do have their place.
Tcl is faster and easier depending on what you are trying to do. Yes, it might take a programmer to implement, but I agree with Jeff: if it is designed well and documented, then almost anyone can maintain it.
So I use both Tcl and Xlates, depending. I also use tools outside the engine to help things. Can it all be done in the CL engine? Yes. It really depends on what you are comfortable with building, maintaining, and explaining.
Rob
Wonderful discourse.
Keep the opinions, etc. coming.
This is exactly one of the reasons clovertech exists.
What is certain is the Cloverleaf product provides the tools you should need to accomplish virtually any integration.
While there are strong opinions as to how to use the tools provided (I certainly have mine), each facility will have to decide what use of which tools and how they are used best suits their needs.
email: jim.kosloskey@jim-kosloskey.com 30+ years Cloverleaf, 60 years IT – old fart.
Jim,
I would like to understand this from the processing perspective too. How different is it to open a message using a Tps process vs. an Xlate process? If we are filtering based on a sub-field, is it a good idea to do it using a Tps process vs. an Xlate?
Do we have any metrics on the amount of memory and processing used between these two types of processing?
thanks,
Kiran
The consensus thus far has been that it is better to do filtering in a pre-Xlate Tps proc.
Indeed we do that here.
It is possible to do filtering using an Xlate. I have no metrics comparing the two and a lot would depend on how efficient the Tcl code is. I do not know of any published metrics on this.
You should be able to easily determine some rough metrics yourself by building a couple of routes – one using some representative Tcl-based filtering and one using just an Xlate. Then run some messages through (probably a few hundred or a thousand) and, looking at the available metrics, determine an average of the time spent.
But there is more to it than just machine efficiency.
As a manager of mine once told me “There is not a year that goes by that I cannot buy faster and cheaper hardware, but I have never had any human resource offer to work faster for less. Thus I will spend my money on hardware if I can make my humans more efficient”.
One of the issues with using Xlates to filter that is not related to machine performance is this: it is not obvious, without actually looking in the Xlate, whether filtering is going on.
So when doing on-call support, where it is essential to learn as much as you can with a quick glance, having the filtering obvious – a filtering proc visible in the route (in our case pretty much one proc does all filtering) – lets you know some sort of filtering is going on without having to drill too deeply.
I have given some thought to how to do filtering in Xlates while making it obvious, without looking inside the Xlate, that filtering is going on.
If you are interested in my thoughts, opinions, and experience on this topic email me and we can discuss off line.
email: jim.kosloskey@jim-kosloskey.com 30+ years Cloverleaf, 60 years IT – old fart.
In most cases, filtering in Xlates is not the best practice, especially when dealing with large volumes of message traffic.
1. Each message that enters the Xlate thread is fully parsed against the format.
2. Each message that enters the Xlate goes through the entire Xlate regardless of whether a SUPPRESS statement is called or not, and regardless of where the SUPPRESS statement is called in the Xlate (e.g., top or bottom). The SUPPRESS statement effectively sets the disposition of the message to KILL, but the engine does not drop the message until the message exits the Xlate.
Because of these two facts, filtering in the Xlate is computationally less efficient than filtering in a pre-proc. For low volumes of messages, the extra processing time would be trivial. But the best practice is to filter messages BEFORE the Xlate.
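To make the pre-Xlate filtering pattern concrete, here is a minimal sketch of a Tps filter proc. The MODE/MSGID keyed-list arguments, msgget, and the {KILL}/{CONTINUE} dispositions follow the standard Cloverleaf tps calling convention; the proc name, the PV1-2 check, and the getSubField helper are illustrative assumptions, not anything from the posts above.

```tcl
# Illustrative helper (plain Tcl): pull one component of a field from a
# non-MSH segment of a carriage-return-delimited HL7 message.
proc getSubField { msg segId field comp } {
    foreach seg [split $msg \r] {
        if { [string range $seg 0 2] eq $segId } {
            set f [lindex [split $seg |] $field]
            return [lindex [split $f ^] [expr { $comp - 1 }]]
        }
    }
    return ""
}

# Sketch of a pre-Xlate tps filter proc (names are illustrative).
# keylget and msgget are provided by the Cloverleaf Tcl environment.
proc filter_emergency { args } {
    keylget args MODE mode
    switch -exact -- $mode {
        start - shutdown { return "" }
        run {
            keylget args MSGID mh
            # Kill messages whose PV1-2 (patient class) first component is E
            if { [getSubField [msgget $mh] PV1 2 1] eq "E" } {
                return "{KILL $mh}"
            }
            return "{CONTINUE $mh}"
        }
        default { return "" }
    }
}
```

Because the decision logic lives in a plain-Tcl helper, it can be unit-tested outside the engine; the tps wrapper only maps the engine's calling convention onto it.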
What I teach my students in the Cloverleaf classes is to consider multiple factors when designing interfaces, not just focus on one factor. Consider hardware resources as cash, and spend that cash on what you value.
1. Speed of development
2. Speed of throughput
3. Ease of support
4. Clone-ability (re-usability) of the interfaces and code
For example, it is OK to have slightly slower throughput if you gain hundreds of hours of saved support time and/or thousands of hours in development time.
Remember that while Xlates may be slower, they are not slow! In fact, they are highly optimized and specialize in mapping segments and fields.
R&D is currently working on adding a BREAK statement to xlates that would allow developers to break out of the xlate early. This statement will help efficiency somewhat, but the message will still be parsed upon entry into the xlate (it just won’t be required to complete all of the actions).
-- Max Drown (Infor)
Thanks a lot for answering my questions.
Max,
I am not sure #2 is always true.
Take this case:
IF <filter condition>
    SUPPRESS
ELSE
    <build the message>

I suspect the build-message logic is not executed if the condition is true.
email: jim.kosloskey@jim-kosloskey.com 30+ years Cloverleaf, 60 years IT – old fart.
Max,
I am not sure #2 is always true.
Take this case:
IF
-- Max Drown (Infor)
I had thought I posted this earlier, so if we eventually get two copies, please accept my apologies.
We have similar need to suppress duplicate messages when morphing result messages from Xcelera into charges for Epic.
We’re using a simple SQLite DB for this.
Here’s code for building and read/write of the SQLite DB:
namespace eval crmcChargeDb {
    variable nsDbName "UNKNOWN"
    variable dbName chargeDb
}

# Create the sentCharges table on first access
proc crmcChargeDb::dbInit { dbName initErrsName } {
    variable nsDbName

    upvar $initErrsName initErrs

    set nsDbName $dbName
    set retVal 1
    set initErrs ""

    if { ! [llength [$dbName eval "PRAGMA table_info(sentCharges)"]] } {
        crmcSqliteUtils::log $dbName "First access - creating sentCharges table" 0
        if { [catch {$dbName eval "create table sentCharges(chargeUid TEXT, sourceSystem TEXT, lastUpdateTclSec INTEGER)"} initErrs] } {
            set retVal 0
        }
    }

    return $retVal
}

# Called when the DB is locked - wait, then allow another open attempt
proc crmcChargeDb::openWait { a } {
    variable nsDbName

    set waitMsec 2000

    crmcSqliteUtils::log $nsDbName "Attempt $a - Database $nsDbName is locked - waiting $waitMsec milliseconds" 1

    after $waitMsec

    return 0
}

# Record a charge as sent; returns 1 on success (or if already recorded)
proc crmcChargeDb::insertCharge { chargeUid sourceSystem } {
    variable dbName

    set retVal 1

    if { ! [crmcChargeDb::alreadyCharged $chargeUid $sourceSystem] } {
        if { [crmcSqliteUtils::openDb 0 $dbName crmcChargeDb::dbInit crmcChargeDb::openWait] } {
            if { [catch {$dbName eval "insert into sentCharges (chargeUid,sourceSystem,lastUpdateTclSec) values('$chargeUid','$sourceSystem','[clock seconds]')"} errs] } {
                crmcSqliteUtils::log $dbName "Failed to insert chargeUid= $chargeUid, sourceSystem= $sourceSystem" 1
                set retVal 0
            }
            crmcSqliteUtils::closeDb $dbName
        } else {
            set retVal 0
        }
    }

    return $retVal
}

# Returns the count of matching rows - nonzero if this charge was already sent
proc crmcChargeDb::alreadyCharged { chargeUid sourceSystem } {
    global env
    variable dbName

    if { [info exists env(debugChargeDb)] } {
        crmcSqliteUtils::log $dbName "chargeUid= $chargeUid, sourceSystem= $sourceSystem" 0
    }

    if { ! [crmcSqliteUtils::openDb 0 $dbName crmcChargeDb::dbInit crmcChargeDb::openWait] } {
        return 0
    }

    set chgExists [$dbName eval "select count(chargeUid) from sentCharges where chargeUid = '$chargeUid' AND sourceSystem = '$sourceSystem'"]

    if { [info exists env(debugChargeDb)] } {
        crmcSqliteUtils::log $dbName "chgExists= $chgExists, chargeUid= $chargeUid, sourceSystem= $sourceSystem" 0
    }

    crmcSqliteUtils::closeDb $dbName

    return $chgExists
}
When a result message is received from Xcelera, we first check to see if it’s already been charged (keyed to document UID from Xcelera).
If it’s already been charged, we kill the message. Otherwise, we process the message and write the DB – something like this:
# Check that this Xcelera document number has not been charged before
if { [crmcChargeDb::alreadyCharged $xceleraDocNum XCELERA] } {
    set killMsg 1
    append warnings "Duplicate charge discarded - (Procedure= [crmcHL7::readSegFieldComponent msgArray OBR 4 3] ([crmcHL7::readSegFieldComponent msgArray OBR 4 1]), MRN= [crmcHL7utils::validMrn msgArray EPIC], Encounter= [crmcHL7utils::validEncounter msgArray EPIC])\n\n"
} else {
    # morph result into a DFT here

    # record the charge message as sent
    crmcChargeDb::insertCharge $xceleraDocNum XCELERA

    # CONTINUE the message here
}
This solves the problem of volatility of a global or namespace variable and is plenty fast. It could be a little quicker with indexing – something definitely required for larger/more complex databases.
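As a side note on the SQLite access above: when the SQL passed to the Tcl sqlite3 binding is brace-quoted, $variables are bound as parameters rather than interpolated into the string, which sidesteps quoting problems (e.g., apostrophes in IDs), and an index on the lookup key speeds up the alreadyCharged check as the table grows. A minimal, self-contained sketch, assuming only the standard sqlite3 Tcl package; the in-memory DB and table mirror the sentCharges schema above:

```tcl
package require sqlite3

# In-memory DB mirroring the sentCharges schema used above
sqlite3 chargeDb :memory:
chargeDb eval {create table sentCharges(chargeUid TEXT, sourceSystem TEXT, lastUpdateTclSec INTEGER)}

# Index on the lookup key used by alreadyCharged
chargeDb eval {create index if not exists idxSentCharges on sentCharges(chargeUid, sourceSystem)}

# Brace-quoted SQL: $vars are bound as parameters, not interpolated,
# so an ID containing an apostrophe needs no escaping
set chargeUid "DOC'123"
set sourceSystem XCELERA
set now [clock seconds]
chargeDb eval {insert into sentCharges values($chargeUid, $sourceSystem, $now)}

set chgExists [chargeDb eval {select count(chargeUid) from sentCharges
                              where chargeUid = $chargeUid and sourceSystem = $sourceSystem}]

chargeDb close
```

With double-quoted SQL, as in the code above, the $vars are substituted by Tcl before SQLite sees the statement, so values containing quotes must be escaped by hand; brace-quoting delegates that to SQLite's parameter binding.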
Jeff Dinsmore
Chesapeake Regional Healthcare
Yes, I know, but if one were filtering, the way I described is probably the best way to do it, I think.
That is, have the filter expressed in an IF action first thing, with the building of the message in the ELSE of the IF.
Nothing before the IF (unless it is related to the filter) and nothing after the ELSE outside of the scope of the IF.
email: jim.kosloskey@jim-kosloskey.com 30+ years Cloverleaf, 60 years IT – old fart.
