Forum Replies Created
Look at the OBX-5 and then look for these headers below; there is one for each MIME attachment.
=_NextPart_000_004F_01CDD23C.28674660\X000d\Content-Type: text/html;\X000d\
charset="Windows-1252"\X000d\
Content-Transfer-Encoding: 7bit\X000d\
Here is a Tcl library for handling MIME. I've not used it, but it is where I would start: http://wiki.tcl.tk/779. I've used Perl modules for handling MIME; they parse the message and return each attachment as a separate element in an array, and then I process/save each one.
It looks like the message has its end-of-line characters quoted (\X000d\). These need to be mapped back to \r, and then you can use something that parses the MIME formatting and hands you the attachments. The last section is HTML, from which you could parse out the fields and put them into an HL7 message.
Do you know what your ORU output needs to look like? I.e., can you convert these sections to notes, or do you need to convert each line into OBX segments with results?
It looks like the first MIME attachment is not HTML. Depending upon your output requirements, you might be able to just pass the entire first attachment through as a plain-text note.
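A minimal sketch of that approach using the tcllib mime package linked above, assuming the mapped text is a complete MIME message with its top-level headers intact; the proc name and the error handling here are mine, not tested code:

package require mime

# Map the HL7 hex escape \X000d\ back to a real carriage return,
# then let the mime package split the message into its parts.
proc obx5_to_attachments { obx5 } {
    set raw [string map [list "\\X000d\\" "\r"] $obx5]
    set tok [mime::initialize -string $raw]
    # A multipart message exposes its attachments as child parts;
    # a single-part message is just itself.
    if {[catch {mime::getproperty $tok parts} children]} {
        set children [list $tok]
    }
    set bodies {}
    foreach part $children {
        lappend bodies [mime::getbody $part]
    }
    mime::finalize $tok
    return $bodies
}

Each element of the returned list could then be saved, passed through as a note, or parsed further as discussed above.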
Does the DRIVERCTL have a FILESET?
Here is code from a working files_out proc. I cut out extraneous code. This only sets the directory and leaves the filename alone. You might be able to do something similar from the trxid proc.
proc files_out_ld_route_minimal_out { args } {
    set dispList {}
    keylget args MODE mode ;# Fetch mode
    switch -exact -- $mode {
        start {
        }
        run {
            keylget args MSGID mh
            set msg [msgget $mh]
            set base_outdir "/ftp/Labdaq/Cloverleaf/results_minimal"
            ##############################
            # set the directory name
            ##############################
            set driverctl [msgmetaget $mh "DRIVERCTL"]
            set fileset {}
            keylget driverctl "FILESET" fileset
            set outdir "/new/out/dir" ;# placeholder: compute the real directory here
            # add OBDIR back into the FILESET key without clobbering
            # any filename that is already there
            keylset fileset "OBDIR" $outdir
            keylset driverctl "FILESET" $fileset
            msgmetaset $mh DRIVERCTL $driverctl
            lappend dispList "CONTINUE $mh"
        }
        time {
            # Timer-based processing
        }
        shutdown {
        }
        default {
            error "Unknown mode '$mode' in files_out_ld_route_minimal_out"
        }
    }
    return $dispList
}
Stephan Rubin wrote: Vaughn,
Also, you made mention of automated testing you conduct… what sort of tools or framework are you using to carry this out?
We use a homegrown system. It has a central testing framework, which is actually a shell script, with a set of functions for different types of tests, followed by a section invoking the various test groups. Each test group has an IN directory with a set of files, each containing raw input, and an OUT directory where the output is saved. The output is compared with a MASTER directory to determine whether that individual test passed.
For instance, we test a tclproc with a wrapper around hcitpstest, which passes hcitpstest the input file (HL7) and then saves the output file. For each test condition we create a different input file with a filename that describes the test.
We have additional tests for individual route testing (hciroutetest), series of routes, xlate tests (hcixlttest), and standard output.
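A minimal Tcl sketch of that IN/OUT/MASTER comparison; the real framework is a shell script, and the directory layout and command invocation here are assumptions for illustration:

# Run a command over every file in IN, save the results in OUT,
# and diff each result against the matching file in MASTER.
proc run_test_group { groupdir cmd } {
    set failed {}
    foreach infile [lsort [glob -nocomplain -directory [file join $groupdir IN] *]] {
        set name    [file tail $infile]
        set outfile [file join $groupdir OUT $name]
        set master  [file join $groupdir MASTER $name]
        # cmd stands in for the site-specific wrapper (e.g. around
        # hcitpstest); its exact flags are assumed, not shown here.
        exec {*}$cmd < $infile > $outfile
        # diff exits non-zero when the files differ, so exec throws.
        if {[catch {exec diff -q $master $outfile}]} {
            lappend failed $name
        }
    }
    return $failed
}

The list of failed names is what would end up in the nightly email.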
We also use the respective languages' unit-test frameworks: Tcl's for testing Tcl libraries and Perl's for Perl libraries. Many of the custom reports accessing our LIS system are also unit tested in this same process.
The whole set of tests is run every night and I get an email which contains any failed tests.
I have a script which runs the emacs ediff command to compare the output with the master files. This lets me see very quickly exactly which lines and characters have changed. I can evaluate 10 changed HL7 tests in 2-3 minutes, see all the changes, and save the ones I want to the MASTER files.
It could use some tuning and cleanup, as it has grown with our needs, but it works for us. It has made it possible for us to make a change, test it, and put it into production within an hour, while staying confident that everything else still works.
We have only the site directory in svn. We exclude some directories used by Cloverleaf: exec, lock, revisions.
We use a stock svn installation on the test server; the production server uses ssh to reach it. We use the command-line svn tools: 'svn status -u', 'svn diff', and 'svn update' on both servers.
NetConfig doesn't seem to stay in sync; i.e., when Cloverleaf saves it, it sometimes reorders the parts. I wrote a script which parses it into sections and then rewrites it ordered by section name (a sketch follows below). I run that script before checking in NetConfig, and then I can use the svn diff tools to tell what changes have occurred.
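A minimal sketch of that normalization idea; it assumes each top-level NetConfig section ends with a lone } in column 0, which is an assumption about the file format rather than a documented rule:

# Split a NetConfig-style file into top-level blocks, sort the
# blocks by their first line, and write the result back out.
proc sort_netconfig { infile outfile } {
    set fh [open $infile r]
    set text [read $fh]
    close $fh
    set blocks {}
    set current {}
    foreach line [split $text "\n"] {
        lappend current $line
        if {$line eq "\}"} {
            lappend blocks [join $current "\n"]
            set current {}
        }
    }
    if {[llength $current]} { lappend blocks [join $current "\n"] }
    set fh [open $outfile w]
    puts $fh [join [lsort $blocks] "\n"]
    close $fh
}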
We use subversion for everything except the live process files: tclprocs, xlates, NetConfig, formats, etc. We also keep other scripts, SQL reports, and their output there for reference.
We currently only have one site. But you could create another svn location for another site. Or use svn externals if you wanted to share tclprocs between sites for instance.
Regarding change control, I envision a development environment for each developer and then a central environment where all changes are merged and tested before tagging a release version. I don't know very much about that, as I haven't worked in environments with multiple developers in revision control. I count on unit testing to confirm that all the parts are working together.
Obviously, svn gives you an easy way to revert changes if there is a problem. I’ve needed that a few times. 🙂
Vaughn
We have used two main concepts: revision control and automated testing.
Both production and test environments are in svn. Before any release, we make sure our test environment passes a set of automated tests. We migrate changes by running svn update on production.
This has worked very well for us. However, we currently have only one developer. I expect when we need more developers, we will have a central testing environment which we use to confirm that all the tests pass, then svn tag that release and deploy it.
Svn allows us to confirm that the development and production environments are exactly the same. It also provides us with a way to view and double check the changes being applied.
The automated testing allows us to make changes and be pretty confident we didn’t break any previously working code. This depends upon the creation of adequate tests.
If you're on Unix, you can use a shell command that looks something like this to drop the first line:
sed -e '1d' < in > out
In Tcl, you can split on \n (or whatever the EOL character is), remove the first item from the list, and then join:
set in "a\nb\nc\n"
set lines [split $in "\n"]
set lines [lreplace $lines 0 0]
set out [join $lines "\n"]
puts $out
I hope this helps.
Chris, I found that in the guest I needed to run vmware-toolbox and select the time-synchronization setting. You might check there.
This question would be better answered in the VMware forums.
Jeff,
I had the same issue you did when coming into a pre-existing Cloverleaf environment. We changed everything to use files between sections of Cloverleaf and then wrote a queue manager to move the files. This allowed us to create archives between each Cloverleaf section that were searchable with normal Windows/Unix tools, which made it much easier to figure out what was going on.
We found that we needed to use the tclproc checkDir to make sure that files in the inbound directory were fully written; otherwise, Cloverleaf could pick up files before they were completely written.
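checkDir itself isn't shown here; a minimal sketch of one way to implement that kind of guard (the size-stability heuristic and the delay value are assumptions, not the actual proc) could be:

# Treat a file as fully written only if its size is non-zero and
# unchanged across a short delay. Note that the plain after call
# blocks the interpreter, which is acceptable for a sketch.
proc file_is_stable { path {delay_ms 500} } {
    if {![file exists $path]} { return 0 }
    set size1 [file size $path]
    after $delay_ms
    set size2 [file size $path]
    expr {$size1 > 0 && $size1 == $size2}
}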
Jim,
Thank you for the feedback. What we have now is one inbound routing thread that routes to about 30 outbound client threads. The change would be to merge the 30 outbound client threads into one that uses Tcl only for determining the output directory, using the message trxid to pick the outbound directory (a sketch of the lookup follows below). It will be a little more obscure in the GUI, but easily seen in the inbound thread's routing configuration, where we have the xlates configured.
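A hypothetical sketch of that lookup; the client names and paths are placeholders, and the result would feed the OBDIR technique from the files_out proc shown earlier in this thread:

# Map a trxid to its outbound directory (all names are examples).
proc outdir_for_trxid { trxid } {
    array set dirmap {
        clientA /ftp/out/clientA
        clientB /ftp/out/clientB
    }
    if {[info exists dirmap($trxid)]} {
        return $dirmap($trxid)
    }
    return /ftp/out/unknown
}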
Can an inbound thread handle multiple messages at the same time? If not, then we will not have any performance decrease.
Cost is a large factor in this decision.
Thank you.
Thank you all for your feedback. This will really help us.
My problems went away as soon as I disabled compression. Now I use a cron job in the middle of the night to compress any .log files. This lets me use the regular log-rotation settings within Cloverleaf.
The QDX version we are using is 5.6, if that makes a difference. You might consider firewalling the Cloverleaf ports so that only the expected IPs can access them. This obviously will not work if the remotes use dynamic addressing, but in many cases firewalling would work and be sufficient.
I have turned LogHistory back on and disabled the compression. I haven't had any more trouble this week. I can run the compression as a cron job in the middle of the night.