Forum Replies Created
I don’t know that this is the answer but it may help you.
According to the Tcl 8.5 man pages:
All internal computations involving integers are done calling on the LibTomMath multiple precision integer library as required so that all integer calculations are performed exactly. Note that in Tcl releases prior to 8.5, integer calculations were performed with one of the C types long int or Tcl_WideInt, causing implicit range truncation in those calculations where values overflowed the range of those types. Any code that relied on these implicit truncations will need to explicitly add int() or wide() function calls to expressions at the points where such truncation is required to take place.
All internal computations involving floating-point are done with the C type double. When converting a string to floating-point, exponent overflow is detected and results in the double value of Inf or -Inf as appropriate. Floating-point overflow and underflow are detected to the degree supported by the hardware, which is generally pretty reliable.
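The difference between the exact arithmetic of 8.5 and the old truncating behavior is easy to see with a value that overflows 64 bits (Tcl 8.5 or later):

```tcl
# Plain integer arithmetic in Tcl 8.5+ is exact, with no overflow:
set exact [ expr { 2**64 } ]            ;# 18446744073709551616
# wide() forces the pre-8.5 behavior: truncation to a 64-bit value:
set truncated [ expr { wide(2**64) } ]  ;# 0
```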
One caveat: I wrote this from memory and I rarely code anymore, so that `string range` command might be a tad off. Other than that it looks right, but your mileage may vary…LOL.
Couple of things:
1. A split on a newline would actually be "\n", though splitting on "\r" should work fine too. The thing to also understand about split is that each character in the split string is a separate delimiter (you can’t have it split on "******" as a six-character unit). What I will often do in that case is a regsub to get a series of characters down to one (and I use a bell "\b" as the replacement character because you never see that otherwise).
2. The foreach looks more like this:
Code:
set segments [ split $msg "\n" ]
foreach line $segments {
    set line [ string trimright $line "\r" ]    ;# Just in case the line termination is \r\n
    if { [ string range $line 0 4 ] eq "*****" } {
        continue    ;# This will go to the next iteration
    }
    set linedata [ split $line ":" ]
    set label [ lindex $linedata 0 ]
    set value [ lindex $linedata 1 ]
}
Hope that is helpful.
I am in agreement with everyone else here regarding the first step. What you have is a header followed by vertical records. Eliminate the header, since it is of little or no use to you (you may need the name of the hospital for other purposes, but you could grab that as you go through). First I would split on a newline and then (probably in a foreach) go through each line of the file. Ignore everything until you get to a line that begins with a series of asterisks. Beyond that, ignore every line that trims out to be blank. Split the lines that are not blank on the colon and do:
set label [ lindex $linedata 0 ]
set value [ lindex $linedata 1 ]
Then do:
set $label $value
This will give you a variable such as Date that contains its value for the current record. Any new fields that are added fall into that same pattern; you would just get a new variable. Then you can build a variable-length record using the subst command.
This is obviously more pseudo code than actual code but, hopefully, you get the idea.
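Putting the steps above together, a runnable sketch; the sample message, the asterisk marker, the field names, and the subst template are made-up placeholders, not anything from your actual file:

```tcl
# A toy version of the input: header line, record marker, then label: value lines.
set msg "General Hospital\n*****\nDate: 20080101\nAmount: 100.00\n\n"

set inRecord 0
foreach line [ split $msg "\n" ] {
    set line [ string trimright $line "\r" ]
    if { [ string range $line 0 4 ] eq "*****" } {
        set inRecord 1        ;# the header is done; records start here
        continue
    }
    if { !$inRecord || [ string trim $line ] eq "" } { continue }
    set linedata [ split $line ":" ]
    set label [ string trim [ lindex $linedata 0 ] ]
    set value [ string trim [ lindex $linedata 1 ] ]
    set $label $value         ;# creates a variable named after the label, e.g. Date
}

# Build a variable-length record from the variables just created.
set output [ subst {$Date|$Amount} ]
```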
Have a great day!
How about you take PV1.7.1 and FT1.3 as the inputs to a copy statement in the translate and set FT1.3 as the output of that copy statement. Then you can take [ lindex [ split [ lindex $xlateInVals 1 ] "-" ] 1 ] and see if it is blank.
Code:
set inval1 [ lindex $xlateInVals 0 ]
set inval2 [ lindex $xlateInVals 1 ]
set testValue [ lindex [ split $inval2 "-" ] 1 ]
if { $testValue eq "" } {
    set inval2 [ lreplace $inval2 1 1 $inval1 ]
}
set xlateOutVals $inval2
I have not actually tried this code; I just wrote it directly into the browser, so I may have introduced some syntax errors. The concept should be right, but your mileage may vary.
Good luck Frank!
OUTBOUND:
Here is a basic proc that shows how you use the DRIVERCTL metadata
Code:
proc ftpAdtOutputFileVPN { args } {
    keylget args MODE mode    ;# Fetch mode
    set dispList {}           ;# Nothing to return

    switch -exact -- $mode {
        start {
            # Perform special init functions
            # N.B.: there may or may not be a MSGID key in args
        }

        run {
            # 'run' mode always has a MSGID; fetch and process it
            keylget args ARGS.ACCTNUM acctnum
            set timestamp [ clock format [ clock seconds ] -format %Y%m%d%H%M%S ]
            set transTimeInClicks [ clock clicks ]
            keylget args MSGID mh
            set extension "DAT"
            set prefix "CLF"
            set outboundFilename "$prefix$timestamp$acctnum$transTimeInClicks"
            set filename "$outboundFilename.$extension"
            keylset fileKeys OBFILE $filename
            keylset filesetKeys FILESET $fileKeys
            msgmetaset $mh DRIVERCTL $filesetKeys
            lappend dispList "CONTINUE $mh"
        }
    }

    return $dispList
}
I have only included the run mode, and I have only included the OBFILE key in order to set the outbound filename, but there are keys that match the password, the username, the host, etc.
There is almost nothing that you cannot change about the outbound using this technique. You just put a proc like this on the Outbound TPS, configure the thread to be Fileset-FTP and voila!
Here is the list of the other keys that you have available:
FTPACCT
FTPCLOSE
FTPHOST
FTPOBDIR
FTPPASSWD
FTPPORT
FTPTYPE
FTPUSER
OBAPPEND
OBDIR
OBFILE
OBSTYLE
This comes from the help under Configuration->Protocols->Fileset FTP, but it is a little difficult to follow without a code snippet like the one above.
Enjoy!
The options available in the GUI are fairly self-explanatory, I think. That said, all of those options can be overridden using the DRIVERCTL metadata field within the message.
First, there is the question of inbound vs outbound.
Both Fileset-FTP and Fileset-Local handle the inbound the same way: by default they read everything in the specified directory and delete it when they are done. How the messages enter the engine is determined by the style. If the style is single or eof, then the full content of the file will be in the message. If the style is hl7 or nl, then each file will be split accordingly and the content of each message will be just one hl7- or nl-terminated message. When you use the latter styles, a fileset file is created in the applicable process directory that keeps track of how far into the file the process has read (in case the process is cycled before the engine is done with the file). That is very important, because if you want to start over with the file, you have to get rid of that fileset file or you will have problems.
If you do not want to process all of the files in the directory, then you will have to create directory parse and deletion parse procs. Those procs get a list of the files in the directory (not the contents of the files, just a file listing). You can then use Tcl to traverse the list and remove any files that you do not want to process. Once you have removed them, you continue the message and the Inbound TPS will get a message with the FILE CONTENTS. If you want the original filename, it is held in the message metadata. The deletion parse gets passed the list of files that came out of the directory parse, and you remove from that list any files that you do not want deleted. Keep in mind that you must do something with the skipped files or they will still be there and will be processed on the next iteration.
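The engine-specific plumbing aside, the filtering step itself is plain Tcl list work. A minimal sketch; the file names and the match pattern are made-up examples:

```tcl
# A directory listing as the directory parse proc might receive it:
set fileList {adt_20080101.dat lab_20080101.dat adt_20080102.dat notes.txt}

# Keep only the files we want to process; everything removed from the
# list is neither read in nor deleted.
set keepList {}
foreach f $fileList {
    if { [ string match "adt_*.dat" $f ] } {
        lappend keepList $f
    }
}
```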
Okay, that is how the inbound works. I will separate the outbound into another post because this one is getting long.
I am not going to pretend that I know how to configure the http-client protocol for this (though since it is curl internally, this may help you there too), but I have done it with tclCurl in a proc:
Code:
set filename $args
package require TclCurl
package require base64
echo filename:$filename
set outputDir [ file dirname $filename ]
set url "http://10.180.10.130/pdf/pdf.php"
set typ "multipart/form-data"
set ch [ curl::init ]
$ch configure -verbose 1
$ch configure -post 1
$ch configure -timeout 60
$ch configure -url $url -httpheader [ list "Content-Type: $typ" ]
$ch configure -httppost [ list name "f" file "$filename" contenttype "text/plain" ]
$ch configure -httppost [ list name "submit" contents "Submit Query" ]
$ch configure -bodyvar xmlData
catch { $ch perform } returnCode
if { $returnCode == 28 } {
    error "Connection timed out waiting for server"
}
# Format of error: "Error: File invalid."
$ch cleanup
#echo "XMLDATA:$xmlData"
set returnData ""
if { $xmlData eq "" } {
    error "The return value was blank"
}
regexp {(.*)} $xmlData wasteVar returnData
if { $returnData eq "" } {
    error "Data returned was $xmlData and it is not in the right format"
} else {
    regexp {(.*)} $returnData wasteVar newFilename
    regexp {(.*)} $returnData wasteVar newPDFData
}
set startText [ string range $newFilename 0 4 ]
if { $startText eq "Error" } {
    error "Webservice could not process file!"
} else {
    set fullName [ file tail $newFilename ]
    set extension [ file extension $fullName ]
    set rootName [ file rootname $fullName ]
    set outputFilename "$rootName$extension"
    set decodedPDF [ base64::decode $newPDFData ]
    set fh [ open $outputDir/$outputFilename w ]
    fconfigure $fh -encoding binary
    puts -nonewline $fh $decodedPDF
    close $fh
}
What this code is doing is sending a PDF to a webservice, which modifies the PDF and sends it back to me inside an XML message. That is, obviously, not exactly what you are doing, but it should give you a good basis to work from. If this doesn’t quite do it for you, let me know; I have a few other samples that might help. It is really going to come down to tweaking those -httppost arguments.
We embed PDFs (some upwards of 10-15 MB). As long as they are base64 encoded (or MIME encoded in some way), there is no real issue with interference with MLP. We do not limit the size of an image that we will take in or deliver, although some of our communication mechanisms do. Also, if we do not NEED to have the image ride all the way through with the message, then we will put the image out to a file and read it back in on the outbound TPS. It is a little extra work, but the translate thread really slows down when messages of that size go through it, so the throughput we get back is well worth it.
One other thing, performance-wise: you can win back some of the performance by switching to a length-encoded transfer instead of MLP. It is a small difference, but it adds up with messages of this size.
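To illustrate the idea (this is the general length-prefix technique, not the exact wire format of any particular Cloverleaf encoding), a length-encoded frame can be built and unpacked like this:

```tcl
# Frame a message with a 4-byte big-endian length prefix instead of
# MLP-style start/end control characters.
proc lengthEncode { payload } {
    set prefix [ binary format I [ string length $payload ] ]
    return "$prefix$payload"
}

proc lengthDecode { data } {
    binary scan $data I len
    return [ string range $data 4 [ expr { 3 + $len } ] ]
}

set msg "MSH|^~\\&|SENDAPP|SENDFAC|..."
set framed [ lengthEncode $msg ]
set unframed [ lengthDecode $framed ]
```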
Firstly I will say that there isn’t much of anything that can’t be done.
You have several choices about how to tackle this: you can go with a completely Tcl-UPoC method, or you can build your output format as a VRL with the separator being a pipe. Once you have created the VRL and mapped the information in the segments to it, you will need to handle incrementing the counter. For that you can create an outbound TPS proc that reads the previous count and date from a file, gets the current date, and compares the current date against the previous date to see if it is time to reset the counter. If it is, the counter becomes 1 and gets written out to the file along with the current date; otherwise you take the counter from the file, increment it, and write it back out with the current date.
That description should be enough to give you a leg up, but I think you are overthinking this one, unless there is something I am missing.
As options, you could parse the message within the same TPS proc and skip the translate completely. For that I would probably use the subst command, since you already know what your output format needs to consist of. Another option for tracking the counter would be a database table or SQLite.
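A sketch of that counter proc, assuming a simple "date count" state file; the file path and format are whatever you choose, not anything mandated by the engine:

```tcl
# Returns the next counter value, resetting to 1 on the first call of
# a new day. State is stored as "YYYYMMDD count" in $counterFile.
proc nextCounter { counterFile } {
    set today [ clock format [ clock seconds ] -format %Y%m%d ]
    set lastDate $today
    set count 0
    if { [ file exists $counterFile ] } {
        set fh [ open $counterFile r ]
        lassign [ gets $fh ] lastDate count
        close $fh
    }
    if { $lastDate ne $today } { set count 0 }   ;# new day: start over
    incr count
    set fh [ open $counterFile w ]
    puts $fh "$today $count"
    close $fh
    return $count
}
```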
If it is a regular expression, then the asterisk says to match zero or more of the previous character, and a period says to match any single character. So your expression would match:
rad_data_fileetxt
as well as:
rad_data_file.txt
The first one matches because the period is a wildcard; if you only want to match a literal period, then you have to escape it:
rad_data_file\.txt
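The escaping behavior is easy to check directly in tclsh:

```tcl
# Unescaped, the period matches any single character:
set loose [ regexp {rad_data_file.txt} "rad_data_fileetxt" ]     ;# 1
# Escaped, it matches only a literal period:
set strict1 [ regexp {rad_data_file\.txt} "rad_data_fileetxt" ]  ;# 0
set strict2 [ regexp {rad_data_file\.txt} "rad_data_file.txt" ]  ;# 1
```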
How many clients do you have using CSC? I am assuming it is fairly extensive.
Yes, that day is not here yet, but we will likely be at that point in the next couple of years. The way we handle it is by using a remote access program. This is also how we handle support of the CSC clients; that makes it a bit more challenging, but we have made it a standard, so it is not as bad as it sounds. At the user group two years ago I gave a presentation on this, and I still stand by what I said then: I would not install CSC on a client that I do not have remote access to in some way. I have had to back off of that on two occasions, and both of those occasions have come back to bite me from a support standpoint.
Even though it IS a killer to upgrade all of those clients, I strongly believe that the whole thing could be fully automated. That would (will) take some time and effort, but that is what I am shooting for next. My biggest issue with that (believe it or not) is the constant modification of the ports that CSC uses for its connections. We have several clients whose network people block ports going outbound, so we are stuck with either changing the ports that Tomcat is listening to or having the newest set of ports opened. We have elected to modify the ports that Tomcat listens to, but this means that we cannot run two different versions of CSC at the same time. The effect of that is that we have to flip the clients back and forth between our two servers during an upgrade. That is bad, but the benefit is that we maintain full control over when the upgrade will occur and we are not stuck waiting for a port to be opened.
This automated upgrade process is becoming more of an option in 4.4, because 4.3 and below are unable to transfer the large amount of data that the CSC install executable represents. As far as I can tell, 4.4 handles it just fine. I am sure that this limitation on the size of a transfer has kicked you in the tail a time or two (just try having the server retrieve the client’s log files and you will know what I mean… on second thought, don’t actually try it; just trust me that you don’t want to). In 4.4 it looks like, within reason, those options really are valid. This product is improving dramatically with every new version, and the changes from 4.3 to 4.4 are no exception.
We already have a process that automatically does everything on the client side up to sending the registration to the server so most of the scripting required to auto-upgrade is complete. Our challenges to that process are the ports and the transfer of the client install software. Using a task in 4.4 we could transfer the software so if we can solve the port problem then we will be there. I have a couple of ideas there but nothing solid yet.
We are using 4.3 and 4.4:
4.3 on a Windows 2003 server
4.4 on Red Hat Linux
We are in the process of upgrading from 4.3 to 4.4, but it is challenging because there is no simple upgrade process. 4.4 is definitely where you will want to go; it sends much faster and is better in every way we have seen up to this point.
We have a little under 200 clients at this point.