SMAT History strategies
We have had a SMAT file strategy in place that has served us well for quite some time, but we are now running into issues with files growing past the 1 GB limit. Before we reinvent the wheel, I would be interested in hearing what others are doing.
Right now, we cycle all our processes nightly with a cron job and copy the processes' .old files into a directory that matches the day of the month (01-31). It works out pretty well: if we know the day a message was created, we can easily go to that day, find the process's file, and load it into SMAT. I could continue this strategy by using a script to cycle the busy processes more frequently and getting a little creative with the file naming, but I am also looking at the SMAT history functionality.
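For anyone who wants to copy that approach, here is a minimal Tcl sketch of the day-of-month rotation; the directory paths are assumptions, not our actual layout, and it presumes the nightly cycle has already produced the .old SMAT files.
#!/usr/bin/env tclsh
# Day-of-month SMAT archive sketch (paths are assumptions).
# Copies the cycled *.old files for one process into an archive directory
# named 01-31, overwriting whatever landed there a month ago.
set procDir     /hci/prod/integrator/mysite/exec/processes/myproc ;# assumed process directory
set archiveRoot /hci/smat_archive/mysite                          ;# assumed archive root
set dayDir [file join $archiveRoot [clock format [clock seconds] -format %d]]
file mkdir $dayDir
foreach f [glob -nocomplain -directory $procDir *.old] {
    file copy -force $f $dayDir
}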
What are people doing in their environments, and if you are using the SMAT history functionality, does it provide simple enough access to historical files? BTW, we don't have Global Monitor yet.
Mike
Hi Mike.
We store all inbound untranslated messages in SMAT files and have housekeeping tasks that cycle the SMAT files daily, adding a YYYYMMDDHHMM timestamp and copying them to a ‘year’ repository. The SMAT files are deleted from the Cloverleaf site after a couple of months.
We also have a maintenance menu that allows cycling of the SMAT files. This is rarely used now, but was required on previous versions of Cloverleaf when SMAT files could not be accessed until cycled.
We work on the Cloverleaf Linux servers and use custom scripts to search the SMAT files.
We plan to create a database with an appropriate structure to load the SMAT files
Like you, we ran into large SMAT files when cycling and archiving daily.
We have a 2GB file size limit on our AIX server for the hci user ID.
We ended up cycling SMAT every 3 hours during the work day, at the following times:
6 AM
9 AM
noon
3 PM
6 PM
midnight
We use manual methods or homegrown scripts at the command line to retrieve historical SMAT messages of interest.
The archived SMAT files are date/time stamped.
We keep one year of SMAT in our archive file system and delete anything older than a year once a day.
I almost never use the SMAT GUI tool because, for me, it has just been more direct and comfortable to work with commands on the back end in an xterm, but you did catch my attention when you said:
I am also looking at the SMAT history functionality
Does that imply that the SMAT tool has some history functionality built in? Because that might pique my interest enough to launch the SMAT tool and look at it this decade.
Russ Ross
RussRoss318@gmail.com
Yes, from the NetConfig window, go to Options>Site Preferences>Smat History. When enabled, it will
1. add a YYMMDDHHmmSS timestamp to each SMAT file and move it to the /SmatHistory directory.
2. purge off old files based on Max Number, Max Total Size and/or Max Age.
3. move any manually (scripted) cycled files to this new SmatHistory directory.
Of course, then you can turn on automatic SMAT cycling on those processes that might bump up against the maximum allowable file size, 1 GB in our case. It is a nice safety valve just in case. Otherwise, when the file overflows, the process panics and shuts down.
My biggest goal in the redesign is to make it easy to find a specific file for a specific site as we are in them all the time and we don’t have the luxury of Global Monitor. I am pondering two options right now.
1. Have the SmatHistory directory actually be a symbolic link to a more flattened directory structure, making it easier to jump from one site’s SMAT to another. This would require a little scripting to automatically create the symlink and target directory whenever I create a new process. This script would probably also need to handle the purging of old SMAT files as they wouldn’t be in a place where CL’s SmatHistory functions would be able to purge them.
2. After thinking about it some, it may be easier to create a symlink to every other site's process folder inside each site's process folder (sketched below). This way, if I am in our orders site and want to look at an ADT SMAT file, I just need to drill into the appropriate symlink, pick the process, and then pick the current SMAT file or go one level deeper into the SmatHistory folder. It seems like it would be efficient to work with, and we would benefit from CL's native purging capabilities.
Thanks for indulging me. Typing this up has helped me think through the ins and outs of each option. #2 is sounding pretty good to me. The only thing I don’t like is having all that history decentralized in each process directory, but when I get out of my status quo mindset, it kinda makes sense to have it there.
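If it helps anyone picture option #2, here is a rough Tcl sketch of building that symlink farm; the root path, site names, and directory layout are all assumptions.
# Sketch of option #2: inside each site's process area, create symlinks to the
# other sites' process directories. Root path, site list, and layout are assumptions.
set root  /hci/prod/integrator            ;# assumed Cloverleaf root
set sites {orders adt results}            ;# assumed site names
foreach site $sites {
    set linkDir [file join $root $site exec processes othersites]
    file mkdir $linkDir
    foreach other $sites {
        if {$other eq $site} { continue }
        set link   [file join $linkDir $other]
        set target [file join $root $other exec processes]
        if {![file exists $link]} {
            file link -symbolic $link $target
        }
    }
}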
Mike
We haven’t run into a problem with SMAT files growing too large for the file system as yet.
We have a script that cycles and archives all SMAT/error/process logs. They're stored in a folder linked to Cloverleaf's “archive” directory for local viewing.
We also use custom scripts to sift through SMAT files. They allow us to search across multiple days' files in a single pass.
Along with that mechanism is a function to feed all found messages through a custom procedure for processing. We use this function to do light data mining from the SMAT messages. For example, we may use it to get a list of all lab order codes that have traversed a given interface within a certain timeframe, how many of each, etc.
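As a flavor of that kind of light data mining, here is a small Tcl sketch that tallies OBR-4 order codes. It assumes the messages have already been dumped out of SMAT into a flat text file of HL7 segments; the file name is a placeholder, and this is not Jeff's actual tooling.
# Tally OBR-4 order codes from a flat HL7 dump (file name is an assumption).
# With the default "auto" translation, the \r segment separators are read as
# line breaks, so each line handed back by gets is one HL7 segment.
proc tallyOrderCodes {msgFile} {
    array set counts {}
    set fh [open $msgFile r]
    while {[gets $fh seg] >= 0} {
        if {[string range $seg 0 3] ne "OBR|"} { continue }
        set code [lindex [split [lindex [split $seg |] 4] ^] 0]
        if {[info exists counts($code)]} {
            incr counts($code)
        } else {
            set counts($code) 1
        }
    }
    close $fh
    return [array get counts]
}
# usage: print one "code count" pair per line
foreach {code n} [tallyOrderCodes lab_results.hl7] {
    puts "$code\t$n"
}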
Richard -> What do you mean when you say:
We plan to create a database with an appropriate structure to load the SMAT files
Does that mean you’re planning to dump the whole SMAT, indexing, messages and all, into a DB?
For dynamic access to SMAT messages, we have an indexing function that takes indexing info from the idx files, extracts a few data elements of interest from each message (message trigger/type, MRN, encounter, etc.), and drops them into an SQLite DB. This gives us very quick script access to individual messages across multiple SMAT files.
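For anyone curious what that looks like in practice, here is a stripped-down sketch of the idea using the sqlite3 Tcl package. The database path, column choices, and the indexMessage helper are illustrative assumptions, not Jeff's actual code (his reads the real idx files).
package require sqlite3
# One row per SMAT message: a few searchable fields plus enough information
# (file, offset, length) to pull the raw message back out of its .msg file.
sqlite3 idxdb /hci/smat_index/index.db        ;# assumed database location
idxdb eval {CREATE TABLE IF NOT EXISTS msgindex (
    smatfile TEXT, offset INTEGER, length INTEGER,
    msgtime TEXT, msgtype TEXT, mrn TEXT, encounter TEXT)}
idxdb eval {CREATE INDEX IF NOT EXISTS ix_mrn ON msgindex (mrn)}
idxdb eval {CREATE INDEX IF NOT EXISTS ix_enc ON msgindex (encounter)}
# hypothetical helper: called once per idx entry, with the fields already extracted
proc indexMessage {smatfile offset length msgtime msgtype mrn encounter} {
    idxdb eval {INSERT INTO msgindex VALUES
        ($smatfile, $offset, $length, $msgtime, $msgtype, $mrn, $encounter)}
}
# quick lookup: where does every ADT^A08 for one encounter live?
set enc 12345678
idxdb eval {SELECT smatfile, offset, length FROM msgindex
            WHERE encounter = $enc AND msgtype = 'ADT^A08'} {
    puts "$smatfile $offset $length"
}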
Jeff Dinsmore
Chesapeake Regional Healthcare
For dynamic access to SMAT messages, we have an indexing function that takes indexing info from the idx files, extracts a few data elements of interest from each message (message trigger/type,MRN,encounter, etc) and drops it into an SQLite DB.
Hi Jeff.
Richard -> What do you mean when you say:
We plan to create a database with an appropriate structure to load the SMAT files
Does that mean you’re planning to dump the whole SMAT, indexing, messages and all, into a DB?
Apologies for a poorly written response. We plan on cleaning up the SMAT file contents (some messages will be from a resend of data) and loading them into a database with appropriate keys. In essence, a similar process to the one you are planning.
I have recently used SQLite within TCL and it’s a great tool.
I am thinking about writing an app that would move all SMAT functions to an SQL database, making it easy to query for messages and resend them. I could also extend SMAT access to some of my users, taking the burden of message safaris off us interface guys. It would also provide a good way to do queries like…
select DG1_3_0, DG1_3_1, count(*) as msgCount from smatDB where DG1_3_0 in ('338.19', '789.00', '787.00', '288.60') and MSH_7 between 20140501000000 and 20140505235959 group by DG1_3_0, DG1_3_1 order by msgCount desc
I know Global Monitor has some nifty functionality, but dollars are tight these days and I don’t see that changing anytime soon.
Mike
For those who have Global Monitor, or anyone who has thought deeply about SMAT files and archiving, what do you think about the Global Monitor query capability?
You can query across multiple SMAT files at once. This sounds great. When you think about the implementation, it gets a little complicated.
– it can only query uncompressed SMAT files
– I’m not sure if you can put a wildcard in the SMAT file name in the query
– If you can’t use a wildcard, then you can only search the specified SMAT file, which means the search window is variable. For example, if your automated script has just cycle/saved the SMAT file, the query tool will be picking up a new SMAT file with little data in it. No matter how much you adjust your scripts to keep more data in the SMAT file before it is cycled, if you run a query at ‘the wrong time’ then there is not much data to look at.
I’m really pushing for having the GM query easy to use, so that users can run canned queries to look for ‘lost’ messages (i.e. prove that the message was delivered), or to see what was in a message (that’s a little more complex, but that’s a different subject).
But if we need to search back more than a few hours, we run into the cycle/save run time issue and the query is useless to non-Cloverleaf administrators (they will not know how to put the date stamp/julian date index on the end of the file names to include the correct SMAT files for searching).
I like the database queries discussed and planned above. I also have report scripts setup to search multiple SMAT files, looking with specific criteria and reporting the information in an easy to use format.
Do you plan to enable querying for non-Cloverleaf administrators? If so, what kind of GUI will they use?
Finally, does anyone have reporting scripts that can trace an order across and between multiple systems, so they can see, for example, the add-on order coming from an ancillary, the number assignment coming out of the HIS, the order status updates coming from both sides, and the results coming out of the ancillary (at least a summary of the data), all in one report? I’m curious about the format of such a report.
Peter
Peter Heggie
Jeff Dinsmore, I also am interested in your logic to index messages within SMAT files. Are you willing to share the utility?
David – and (long ago…) Kevin,
I can certainly share, but have been dragging my feet because extracting the indexing from the other related code around it will take a bit of effort.
I’ll distill it and will let you know when I have it ready. It’s running under RHEL and Cloverleaf 5.6.
Are there others who are interested?
Jeff Dinsmore
Chesapeake Regional Healthcare
I did not realize how old the thread was… I appreciate your willingness to do the work. I have concerns about the usefulness of your application since I am on Windows. Is your code in TCL? A shell script? Please say TCL!
Definitely me too.
How does SMAT database querying line up with your process? Off the top of my head, it seems that the SMAT database is not a quick and clean replacement at this time/release.
Peter Heggie
Peter,
We generally use the indexing for direct access to SMAT messages to be used realtime in live interfaces. This is very quick, with sub-second times to find and retrieve messages.
This is generally used when I’m looking for specific messages – for a given encounter, for example – inside a translation script.
Info from the found messages is most often used to enrich (a term shamelessly stolen from someone on this forum) the current message – to add PV1 info to a result that doesn’t contain it, for example.
We also use the indexed data to backload lab/rad/transcriptions to one of our oncology systems whenever a person is registered as an oncology patient. We find the results for the person based on an ADT trigger and send them to oncology so it has a history of the patient’s recent tests/treatments.
I also have an older function that walks through SMAT files (without any indexing) for a given regular expression. It accepts parameters for start/end date/time to limit the search. The found messages are dumped in chronological order into a file.
I generally use this for investigating. If I want to see all of the ADT messages received for a given patient encounter, I’ll use this one with a general date range that’s sure to capture all of the messages I’m after.
I also use this one to mine the SMAT files for data – for example, if I’m interested in cataloging the order codes for results coming from Lab over some period of time – or if I want a summary of the ADT triggers coming from our EMR.
There’s no reason we couldn’t integrate the newer indexed method into the older tool, but we just haven’t done that yet – and the indexed method doesn’t support full-text searching with regular expressions.
Also, to answer one of your earlier questions, these tools are not used by “normal” folks – just by us abnormal interface types.
Jeff Dinsmore
Chesapeake Regional Healthcare
David – It’s all Tcl.
Jeff Dinsmore
Chesapeake Regional Healthcare
Good answer! Thanks!
I have used mysqltcl to great advantage to generate databases of indexed messages. It is structured so that I can say things such as:
select msg
from transindex as idx
left join translist as tr on tr.msgnum = idx.msgnum
where msh09 = 'ORU^R01'
and PV1_3 like 'ULBD%'
and pid05 like 'SMITH^JO%'
I send messages to a capture site (to prevent site crashes from interrupting message processing) and use capture scripts to log the transactions into a database table. I then use cronjobs to move these messages to other machines where I use TCL scripts to create index tables from the message records.
######################################################################
# Name: store_trans
# Purpose: Stores the incoming transaction in a MySQL Database
# UPoC type: tps
# Args: tps keyedlist containing the following keys:
# MODE run mode ("start", "run" or "time")
# MSGID message handle
# ARGS user-supplied arguments:
#
#
# Returns: tps disposition list:
#
#
#####################################
# Modifications
#
# 20100511 – Added a stanza of regsub to replace backslashes with two
# to get thru the multiple escapes needed to get thru
# layers of TCL and SQL.
#
# 20100608 – Found that regsub put in on 0511 disturbs the escape removal
# regsubs. Some micro messages are corrupted.
# Moving escape replacement above backslash enhancement corrects
# the problem.
proc store_trans { args } {
keylget args MODE mode ;# Fetch mode
set dispList {} ;# Nothing to return
switch -exact -- $mode {
start {
# Perform special init functions
# N.B.: there may or may not be a MSGID key in args
load /opt/quovadx/qdx5.6/integrator/usercmds/sql.so
}
run {
# ‘run’ mode always has a MSGID; fetch and process it
keylget args MSGID mh
set msg [msgget $mh]
regsub -all {'} $msg "" msg
# 20100511
regsub -all {\x5c} $msg {\\} msg
regsub -all {\x7b} $msg {&} msg
regsub -all {\x7d} $msg {&} msg
regsub -all {\x22} $msg {\"} msg
regsub -all {\xc0} $msg {\xc0} msg
regsub -all {\x80} $msg {\x80} msg
# Send to Venus – Localhost
set conn [sql connect 127.0.0.1 clem kaddadlehoffer]
sql selectdb $conn cloverleaf
#echo [msgget $mh]
if { [catch {
sql exec $conn "INSERT INTO translist VALUES (null,'$msg',0,null,'uhadtordprod')"
} result] } {
echo "Unable to store message $msg on Mercury"
}
sql disconnect $conn
#### We don’t need the message anymore
#lappend dispList "KILL $mh"
#### Testing the test code for pathlab – Cont the messages
lappend dispList "CONTINUE $mh"
}
time {
# Timer-based processing
# N.B.: there may or may not be a MSGID key in args
}
shutdown {
# Doing some clean-up work
}
}
return $dispList
}
proc store_outbound_trans { args } {
keylget args MODE mode ;# Fetch mode
set dispList {} ;# Nothing to return
switch -exact -- $mode {
start {
# Perform special init functions
# N.B.: there may or may not be a MSGID key in args
load /opt/quovadx/qdx5.6/integrator/usercmds/sql.so
}
run {
# ‘run’ mode always has a MSGID; fetch and process it
keylget args MSGID mh
set msg [msgget $mh]
regsub -all {'} $msg "" msg
# 20100608 Moved before backslash enhancement.
regsub -all {\\R\\} $msg {~} msg ;# Replace HL7 CR escapes with tilde
regsub -all {\\S\\} $msg {^} msg ;# Replace HL7 Caret Escapes with Caret
# 20100511
regsub -all {\x5c} $msg {\\} msg
regsub -all {\x7b} $msg {&} msg
regsub -all {\x7d} $msg {&} msg
regsub -all {\x22} $msg {\"} msg
regsub -all {\xc0} $msg {\xc0} msg
regsub -all {\x80} $msg {\x80} msg
#regsub -all {\\R\\} $msg {~} msg ;# Replace HL7 CR escapes with tilde
#regsub -all {\\S\\} $msg {^} msg ;# Replace HL7 Caret Escapes with Caret
# Send to Venus – Localhost
set conn [sql connect 127.0.0.1 clem kaddlehoffer]
sql selectdb $conn cloverleaf
#echo [msgget $mh]
if { [catch {
sql exec $conn "INSERT INTO outtrans VALUES (null,'$msg',0,null,'sccoutmsg')"
} result] } {
echo "Unable to store message $msg on Mercury"
}
sql disconnect $conn
#### We don’t need the message anymore
#lappend dispList "KILL $mh"
#### Testing the test code for pathlab – Cont the messages
lappend dispList "CONTINUE $mh"
}
time {
# Timer-based processing
# N.B.: there may or may not be a MSGID key in args
}
shutdown {
# Doing some clean-up work
}
}
return $dispList
}
The syntax would be a wee bit different as this library load is a MUCH older version of mysqltcl. Were I to rebuild it today, it would be "package require mysqltcl" and the connect syntax would be "set conn [::mysql::connect -host 127.0.0.1 -user clem -password kaddlehopper -db cloverleaf]".
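For reference, a complete round trip with that current mysqltcl syntax might look like the sketch below; the host, credentials, and table are just the placeholders already used in this thread, and the sample message is made up.
package require mysqltcl
set conn [::mysql::connect -host 127.0.0.1 -user clem -password kaddlehopper -db cloverleaf]
# assume rawMsg holds an HL7 message pulled off the wire
set rawMsg "MSH|^~\\&|LAB|HOSP|...|"
set msg [::mysql::escape $conn $rawMsg]    ;# escape quotes/backslashes for the INSERT
::mysql::exec $conn "INSERT INTO translist VALUES (null,'$msg',0,null,'uhadtordprod')"
foreach row [::mysql::sel $conn "select msgnum from translist order by msgnum desc limit 5" -list] {
    puts $row
}
::mysql::close $conn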
My inbound uses a simple single table index from “interesting” fields in the inbound messages.
CREATE TABLE `transindex` (
`msgnum` bigint(20) NOT NULL DEFAULT '0',
`msh03` varchar(100) DEFAULT '',
`msh04` varchar(100) DEFAULT '',
`msh06` varchar(100) DEFAULT '',
`msh07` varchar(100) DEFAULT '',
`msh09` varchar(100) DEFAULT '',
`pid02` varchar(100) DEFAULT '',
`pid03` varchar(100) DEFAULT '',
`pid04` varchar(100) DEFAULT '',
`pid05` varchar(100) DEFAULT '',
`pid07` varchar(100) DEFAULT '',
`pid08` varchar(100) DEFAULT '',
`pid10` varchar(100) DEFAULT '',
`pid18` varchar(100) DEFAULT '',
`pid19` varchar(100) DEFAULT '',
`pv103` varchar(100) DEFAULT '',
`pv107` varchar(100) DEFAULT '',
`pv108` varchar(100) DEFAULT '',
`pv109` varchar(100) DEFAULT '',
`pv117` varchar(100) DEFAULT '',
`pv119` varchar(100) DEFAULT '',
`in102` varchar(100) DEFAULT '',
`in103` varchar(100) DEFAULT '',
`in136` varchar(100) DEFAULT '',
PRIMARY KEY (`msgnum`),
KEY `pid10` (`pid10`),
KEY `msh09` (`msh09`),
KEY `pv119` (`pv119`),
KEY `in102` (`in102`),
KEY `in103` (`in103`),
KEY `pv103` (`pv103`),
KEY `in136` (`in136`),
KEY `pv107` (`pv107`),
KEY `pid18` (`pid18`),
KEY `pv108` (`pv108`),
KEY `pid19` (`pid19`),
KEY `pv109` (`pv109`),
KEY `pid02` (`pid02`),
KEY `pid03` (`pid03`),
KEY `pid04` (`pid04`),
KEY `pid05` (`pid05`),
KEY `pid07` (`pid07`),
KEY `pid08` (`pid08`),
KEY `msh03` (`msh03`),
KEY `msh04` (`msh04`),
KEY `msh06` (`msh06`),
KEY `msh07` (`msh07`),
KEY `pv117` (`pv117`),
KEY `Name-DOB` (`pid05`,`pid07`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=4
In retrospect, I really wish I had been interested in more fields 😉
As we are a clinical lab, and our product is the results we produce, my indexing of results is a bit more sophisticated. I use an index table for the general PID and PV fields, and then tables to contain index fields for the ORC, OBR, OBX, and NTE segments. These are linked together by generated keys: msgnum (the mother message), obrnum (OBXs link up to this and to the mother message), obxnum (links up to its OBR), and ntenum (links up to an OBR or OBX). That allows me to get at outbound messages with quite a bit finer granularity. Here’s my index script for the outbound:
#!/usr/bin/tclsh
#
# ob_moveandindex_nodel.tcl – 20070808 – Terry Kellum
# This program moves records from the feeding
# system to the data respository system.
# Designed for the outbound HL7 transactions, it
# does the initial MSH, PID and PV1 indexing,
# then generates index/content tables for the
# ORCs, OBRs, OBXs and NTEs.
# 20071204 – Terry Kellum
# Script development completed. Script deployed.
#
#
###############################################################################
# Modifications
###############################################################################
#
# 20081120 – Terry Kellum
# Modified to produce this nodel version that DOES NOT clean up
# old records in the source table.
#
# 20100511 – Terry Kellum
# Added a regsub line to the replication stanza to duplicate
# backslashes to preserve their copy to the dest db.
#
# 20101011 – Terry Kellum
# Modified code to adapt to 64 bit change to mysqltcl binary
# package.
#
# 20110216 – Terry Kellum
# Modified to place new DSP segments in the NTE table.
#
# 20110817 – Terry Kellum
# Modified to prevent enumeration of old records in nodel script.
#
# 20111102 – Terry Kellum
# CHARON ONLY – Modified to add update of the year tables.
# Database has been divided to allow quicker access to larger
# datastores. The four tables outtrans, outindex, translist and
# outtrans have been divided into year tables.
#
###############################################################################
########
# Globals
########
global prog_name
global source_conn
global dest_conn
########
# Defines
########
set prog_name "ob_moveandindex_nodel"
# Source
#set source_host 1.2.3.4
# Temp Pluto
set source_host 1.1.1.31
#set source_host 127.0.0.1
set source_db cloverleaf
#set source_db temp
set source_table outtrans
#set source_table outtrans_source
# Dest
set dest_host 127.0.0.1
#set dest_host 127.0.0.1
set dest_db cloverleaf
#set dest_db temp
set dest_trans_table outtrans
#set dest_trans_table outtrans_dest
set dest_index_table outindex
#set dest_index_table outindex_test
set dest_outorc outorc
#set dest_outorc outorc_test
set dest_outobr outobr
#set dest_outobr outobr_test
set dest_outobx outobx
#set dest_outobx outobx_test
set dest_outnte outnte
#set dest_outnte outnte_test
set user clem
set password kaddadlehoffer
# Number of seconds back for cleanup:
# 3369600 = 39 days 604800 = 7 days 259200 = 3 days 172800 = 2 days
#set cleanupage 604800
set cleanupage 864000
#set debug 0
set debug 1
# Set Nodel version
set skipdel 1
#set UpdateYearTables 0
set UpdateYearTables 1
#############
# procedures
#############
proc GetHighKey {conn table} {
# Gets the highest key in the specified db table
set fieldlist [GetFields $conn $table]
set keyfield [lindex $fieldlist 0]
if {[catch {set hi_list [mysql::sel $conn "select $keyfield from $table order by $keyfield desc limit 1" -list] } errmsg]} {
puts "**SQL Query Problem in GetHighKey: Table $table"
puts "**$errmsg"
return 0
}
set high [lindex $hi_list 0]
return $high
}
proc GetLowKey {conn table} {
# Gets the lowest key in the specified db table
set fieldlist [GetFields $conn $table]
set keyfield [lindex $fieldlist 0]
if {[catch {set lo_list [mysql::sel $conn "select $keyfield from $table order by $keyfield limit 1" -list] } errmsg]} {
puts "**SQL Query Problem in GetLowKey: Table $table"
puts "**$errmsg"
return 0
}
set low [lindex $lo_list 0]
return $low
}
proc getSourceRecord {conn table recno} {
# Returns a database record from the source system matching the msgnum of recno
#upvar source_conn conn
#upvar source_table source_table
if {[catch {set row [mysql::sel $conn "select * from $table where msgnum = $recno" -list] } errmsg]} {
puts "**Unable to obtain record $recno from Source:"
puts "**$errmsg"
return ""
} else {
set output [lindex $row 0]
return $output
}
}
proc insertDestRecord {conn table row} {
# Inserts the row into the destination database table
# Get all the little quoties out.
set v1 [lindex $row 0]
set v2 [mysql::escape [lindex $row 1]]
#set v3 [lindex $row 2]
set v3 0
set v4 [lindex $row 3]
set v5 [lindex $row 4]
# Changed to use single quotes to prevent interp of slash sequences
set query "insert into $table values ('$v1','$v2',$v3,'$v4','$v5')"
if {[catch {mysql::exec $conn $query} errmsg]} {
puts "**Unable to insert row $v1 on Destination:"
puts "** $query"
puts "** Error: $errmsg"
puts "**Non-critical error. Continuing..."
}
}
proc CleanupForExit {} {
global prog_name source_conn dest_conn
# Catch a rollback if exists
catch {mysql::rollback $dest_conn}
# cleans up db connections and delete lock file
##catch {sql disconnect $sconn}
##catch {sql disconnect $dconn}
catch {mysql::close $source_conn}
catch {mysql::close $dest_conn}
# Remove running instance flag
file delete "/tmp/$prog_name.processing"
}
proc GetQueryResult {conn query} {
global prog_name
# Safely submits a query and returns a list of rows
if {[catch {set out [mysql::sel $conn $query -list ] } errmsg]} {
# Error running query
puts "**Error on query:"
puts "** $query"
puts "** Error Message:"
puts "**$errmsg"
puts "**aborting $prog_name at [clock format [clock seconds]]"
CleanupForExit
puts "########################################################"
exit
}
set resultlist {}
foreach row $out {
set nrow [NormalizeString $row]
lappend resultlist $nrow
}
return $resultlist
}
proc GetSingleQueryResult {conn query} {
global prog_name
#### Temp Comment – Watch this one. Tricky
# Safely submits a query and returns a list of rows
if {[catch {set row [mysql::sel $conn $query -list]} errmsg]} {
# Error running query
puts "**Error on query:"
puts "** $query"
puts "** Error Message:"
puts "**$errmsg"
puts "**aborting $prog_name at [clock format [clock seconds]]"
CleanupForExit
puts "########################################################"
exit
}
set result [NormalizeString $row]
return $result
}
proc SubmitSQL {conn statement} {
global prog_name
# Safely submits an SQL Statement
if {[catch {mysql::exec $conn $statement} errmsg]} {
# Error running Statement
puts "**Error on Statement:"
puts "** $statement"
puts "** Error Message:"
puts "**$errmsg"
puts "**aborting $prog_name at [clock format [clock seconds]]"
CleanupForExit
puts "########################################################"
exit
}
}
proc GetFields {conn table} {
# Returns the field names in a table
set query "show columns from $table"
set fieldrows [GetQueryResult $conn $query]
foreach row $fieldrows {
lappend fields [lindex $row 0]
}
return $fields
}
proc GenerateNewRecord {conn table} {
global prog_name
# Big assumption: The first field is the autoincrementing primary key
set fieldlist [GetFields $conn $table]
#puts “$fieldlist”; CleanupForExit; exit;
set keyfield [lindex $fieldlist 0]
set highnum [GetHighKey $conn $table]
set statement "insert into $table values ("
foreach field $fieldlist {
append statement "null,"
}
set statement [string range $statement 0 [expr {[string length $statement] - 2}]]
append statement ")"
SubmitSQL $conn $statement
set newrecordkey [GetHighKey $conn $table]
if {$newrecordkey > $highnum} {
return $newrecordkey
} else {
puts "** Error in GenerateNewRecord: New key is not greater than previous high record number"
puts "** Table: $table Key Field: $keyfield"
puts "** Previous High Key: $highnum Current key $keyfield number: $newrecordkey"
puts "**aborting $prog_name at [clock format [clock seconds]]"
CleanupForExit
puts "########################################################"
exit
}
}
proc PopulateORC {conn orctab orcnum msgnum segment fd sfd rep} {
set worklist "{orc01 1} {orc03 3} {orc04 4} {orc07-1 7 0} {orc07-4 7 3} {orc07-6 7 5} {orc09 9} {orc12-1 12 0} {orc12-2 12 1} {orc12-3 12 2} {orc12-4 12 3} {orc12-8 12 7}"
ParseSegIntoTable $conn $orctab orcnum $orcnum $worklist $segment $fd $sfd $rep
set query "update $orctab set msgnum = $msgnum where orcnum = $orcnum"
SubmitSQL $conn $query
}
proc PopulateOBR {conn obrtab obrnum parentorc msgnum segment fd sfd rep} {
set worklist "{obr02 2} {obr03 3} {obr04-1 4 0} {obr04-2 4 1} {obr04-4 4 3} {obr07 7} {obr10-1 10 0} {obr10-2 10 1} {obr10-3 10 2} {obr14 14} {obr15-2 15 1} {obr15-4 15 3} {obr16-1 16 0} {obr16-2 16 1} {obr16-3 16 2} {obr16-4 16 3} {obr16-8 16 7} {obr22 22} {obr25-1 25 0} {obr25-2 25 1} {obr27-1 27 0} {obr27-4 27 3} {obr27-6 27 5} {obr28a-1 28 0 0} {obr28b-1 28 0 1} {obr28c-1 28 0 2}"
ParseSegIntoTable $conn $obrtab obrnum $obrnum $worklist $segment $fd $sfd $rep
set query "update $obrtab set msgnum = $msgnum where obrnum = $obrnum"
SubmitSQL $conn $query
set query "update $obrtab set orcnum = '$parentorc' where obrnum = $obrnum"
SubmitSQL $conn $query
}
proc PopulateOBX {conn obxtab obxnum parentobr msgnum segment fd sfd rep } {
set worklist "{obx01 1} {obx02 2} {obx03-1 3 0} {obx03-2 3 1} {obx03-4 3 3} {obx03-5 3 4} {obx05 5} {obx06 6} {obx07 7} {obx08 8} {obx11 11} {obx14 14} {obx15 15} {obx16 16} {obx17 17}"
ParseSegIntoTable $conn $obxtab obxnum $obxnum $worklist $segment $fd $sfd $rep
set query "update $obxtab set msgnum = $msgnum where obxnum = $obxnum"
SubmitSQL $conn $query
set query "update $obxtab set obrnum = '$parentobr' where obxnum = $obxnum"
SubmitSQL $conn $query
}
proc PopulateNTE {conn ntetab ntenum parentobr parentobx msgnum segment fd sfd rep lastseg} {
set worklist "{nte01 1} {nte03 3}"
ParseSegIntoTable $conn $ntetab ntenum $ntenum $worklist $segment $fd $sfd $rep
set query "update $ntetab set msgnum = $msgnum where ntenum = $ntenum"
SubmitSQL $conn $query
if {[string equal $lastseg OBR]} {
set query "update $ntetab set obrnum = '$parentobr' where ntenum = $ntenum"
SubmitSQL $conn $query
}
if {[string equal $lastseg OBX]} {
set query "update $ntetab set obxnum = '$parentobx' where ntenum = $ntenum"
SubmitSQL $conn $query
}
}
proc PopulateDSP {conn ntetab ntenum parentobr parentobx msgnum segment fd sfd rep lastseg} {
set worklist "{nte01 1} {nte03 3}"
ParseSegIntoTable $conn $ntetab ntenum $ntenum $worklist $segment $fd $sfd $rep
set query "update $ntetab set msgnum = $msgnum where ntenum = $ntenum"
SubmitSQL $conn $query
if {[string equal $lastseg OBR]} {
set query "update $ntetab set obrnum = '$parentobr' where ntenum = $ntenum"
SubmitSQL $conn $query
}
if {[string equal $lastseg OBX]} {
set query "update $ntetab set obxnum = '$parentobx' where ntenum = $ntenum"
SubmitSQL $conn $query
}
}
proc ParseSegIntoTable {conn table keyname key worklist segment fd sfd rep} {
# Process thru the work list
foreach item $worklist {
# Get db field
set dbfield [lindex $item 0]
if {[llength $item] == 2} {
# Simple Single Field
set data [lindex [split $segment $fd] [lindex $item 1]]
catch {set data [mysql::escape $data]}
#set query "update $table set $dbfield = '$data' where $keyname = $key"
set query "update $table set $dbfield = '$data' where $keyname = $key"
SubmitSQL $conn $query
continue
}
if {[llength $item] == 3} {
# Extract Sub Field
set data [lindex [split [lindex [split $segment $fd] [lindex $item 1]] $sfd] [lindex $item 2]]
catch {set data [mysql::escape $data]}
set query "update $table set `$dbfield` = '$data' where $keyname = $key"
SubmitSQL $conn $query
continue
}
if {[llength $item] == 4} {
# Extract a Rep
set data [lindex [split [lindex [split [lindex [split $segment $fd] [lindex $item 1]] $rep] [lindex $item 3]] $sfd] [lindex $item 2]]
catch {set data [mysql::escape $data]}
set query "update $table set `$dbfield` = '$data' where $keyname = $key"
SubmitSQL $conn $query
continue
}
puts "** Worklist item not 2, 3 or 4 elements long"
}
}
proc NormalizeString {string} {
# running into trouble where strings are listified and have
# embedded meta characters. Routine checks a transaction for
# “listification” (llength = 1) and de-listfies.
# Metacharacters are also removed.
# Get rid of curlys
regsub -all {\x7b} $string {} string ;# Remove Open Curly
regsub -all {\x7d} $string {} string ;# Remove Close Curly
# Backslash...
regsub -all {\x5c} $string {/} string ;# Sub forward slash for Backslash
# Escape the quotes
regsub -all {\x22} $string {\"} string ;# Escape Double quote
regsub -all {\x27} $string {\'} string ;# Escape single quote
#catch {set string [mysql::escape $string]}
# Send it back
return $string
}
proc DelistifyString {string} {
if {[llength $string] == 1} {
# String is listified.
#set string [lindex $string 0]
set string [join $string]
}
# Check for a list. 2 levels gripe.
set slen [llength $string]
if {$slen == 1} {
# String still list – gripe & break
puts "** Error processing string for delistification."
puts "** String: $string"
puts "**aborting ob_moveandindex at [clock format [clock seconds]]"
CleanupForExit
puts "########################################################"
exit
}
# String is delistified.
return $string
}
##############################################################
# Main code
puts "########################################################"
puts "$prog_name.tcl started at [clock format [clock seconds]]"
# Setup running instance checking.
if {[file exists "/tmp/$prog_name.processing"] == 1} {
# Another instance is running?
puts "*** Another instance of $prog_name is running - Aborting run."
puts "**aborting $prog_name at [clock format [clock seconds]]"
puts "########################################################"
exit
}
exec touch "/tmp/$prog_name.processing"
package require mysqltcl
# Connect to Destination DB
if {$debug == 1} { puts "Connecting to Destination DB" }
if {[catch {set dest_conn [mysql::connect -host $dest_host -user $user -password $password -db $dest_db]} errmsg]} {
# Error on destination connect
puts "**Error on Destination Connect: host $dest_host"
puts "**$errmsg"
puts "**aborting $prog_name at [clock format [clock seconds]]"
CleanupForExit
puts "########################################################"
exit
}
# Connect to Source DB
if {$debug == 1} { puts "Connecting to Source DB" }
if {[catch {set source_conn [mysql::connect -host $source_host -user $user -password $password -db $source_db]} errmsg]} {
# Error on source connect
puts "**Error on Source Connect: host $source_host"
puts "**$errmsg"
puts "**aborting ob_moveandindex at [clock format [clock seconds]]"
CleanupForExit
puts "########################################################"
exit
}
# get low and high record numbers for tables
if {$debug == 1} { puts "Getting message numbers" }
set sourcelow [GetLowKey $source_conn $source_table]
set sourcehigh [GetHighKey $source_conn $source_table]
set desthigh [GetHighKey $dest_conn $dest_trans_table]
puts "Source ($source_host) database message numbers run from $sourcelow to $sourcehigh."
puts "Destination ($dest_host) high msgnum is $desthigh."
# Check for zero or alpha in bounds numbers. This indicates an SQL error
# in the functions. Die if we have been handed one.
if {$sourcelow == 0 || $sourcehigh == 0 || $desthigh == 0 ||
![string is integer $sourcelow] || ![string is integer $sourcehigh] ||
![string is integer $desthigh] } {
# We have been given the poison pill. I’d have taken the RED one…
puts "**SQL problem detected in message numbers. Aborting run."
puts "**aborting $prog_name at [clock format [clock seconds]]"
CleanupForExit
puts "########################################################"
exit
}
# (a loop that copied new records from the source table to the destination
#  table was here; it was lost when the post was archived)
if {$argc > 0} {
set NumRecs [lindex $argv 0]
} else {
#set NumRecs 4000
set NumRecs 5000
}
puts "Processing a max of $NumRecs records."
# Get the unindexed record msgnums
set query "select msgnum from $dest_trans_table where processed = 0 order by msgnum limit $NumRecs"
set msglist [GetQueryResult $dest_conn $query]
puts "There are [llength $msglist] messages to process."
# Turn off autocommit
catch {mysql::autocommit $dest_conn 0}
# Do the Initial Indexing
foreach msgnum $msglist {
set query "select msg from $dest_trans_table where msgnum = $msgnum"
set msg [GetSingleQueryResult $dest_conn $query]
set seglist [split $msg \r]
set mshstr [lindex $seglist 0]
set fd [string range $mshstr 3 3]
set sfd [string range $mshstr 4 4]
set msh [lindex $seglist 0]
# Create a segment index for speed!
set segindex {}
foreach segment $seglist {
lappend segindex [string range $segment 0 2]
}
# create a work list – Segment tag, index to field, sql field name
set worklist { {MSH 2 msh03} {MSH 3 msh04} {MSH 5 msh06} {MSH 6 msh07} {MSH 8 msh09} {PID 2 pid02} {PID 3 pid03} {PID 4 pid04} {PID 5 pid05} {PID 7 pid07} {PID 8 pid08} {PID 10 pid10} {PID 18 pid18} {PID 19 pid19} {PV1 3 pv103} {PV1 7 pv107} {PV1 8 pv108} {PV1 9 pv109} {PV1 17 pv117} {PV1 19 pv119} }
# Autocommit is off – No need to start a transaction.
#### Start a transaction
###if {[catch {sql exec $dest_conn “START TRANSACTION”} errmsg]} {
### # Cannot establish transaction lock – Abort
### puts “** Unable to lock transaction (START TRANSACTION)”
### puts “** $errmsg”
### puts “**aborting ob_moveandindex at [clock format [clock seconds]]”
### CleanupForExit
### puts “########################################################”
### exit
###}
if {[catch {mysql::exec $dest_conn "insert into $dest_index_table values ($msgnum,'','','','','','','','','','','','','','','','','','','',0)"} errmsg]} {
# Unable to insert blank record
catch {mysql::rollback $dest_conn}
puts "** Unable to insert blank record into $dest_index_table : $msgnum"
puts "** $errmsg"
puts "**aborting $prog_name at [clock format [clock seconds]]"
CleanupForExit
puts "########################################################"
exit
}
# create the output record updates – Value the fields
set querystart "update $dest_index_table set "
set queryend "where msgnum = $msgnum"
# (the loop that built and ran the per-field updates, marked each message as
#  processed, and began the year-table handling was lost when the post was
#  archived; the surviving year-table code continues below)
#if {$debug} {puts "Query set to: $query\n"}
puts "Processing table: ${table}_$year ..."
if {[catch {set resmsg [GetQueryResult $dest_conn $query]} errmsg]} {
puts "\n** Unable to complete query: $query"
puts "** Error is: $errmsg"
puts "** Noncritical Error - Continuing"
}
#if {$debug} {puts "Query Result: $resmsg\n"}
puts "........ Completed."
} else {
# Year table not found – Create and Initialize the table from the proto.
set table_proto "${table}_proto"
puts "Year Table: ${table}_$year not found in list of tables. Creating."
set query "create table if not exists ${table}_$year like $table_proto;"
if {$debug} {puts "Create Table Query: $query"}
if {[catch {set resmsg [GetQueryResult $dest_conn $query]} errmsg]} {
puts "** Unable to create needed table ${table}_$year"
puts "** Error is: $errmsg"
puts "** Critical Error - Aborting $prog_name at [clock format [clock seconds]]"
CleanupForExit
puts "#########################################################################"
exit
}
# Query executed successfully – Initialize Table
## Get last recordnumber for previous year – assuming last record
set query "select msgnum from ${table}_[expr {$year - 1}] order by msgnum desc limit 1"
if {[catch {set LastRecordList [GetQueryResult $dest_conn $query]} errmsg]} {
puts "** Unable to get last record number from table ${table}_[expr {$year - 1}]"
puts "** Error is: $errmsg"
puts "** Critical Error - Blindly renaming table"
puts "[GetQueryResult $dest_conn "rename table ${table}_$year to ${table}_${year}_ERR"]"
CleanupForExit
puts "#########################################################################"
exit
}
set LastRecord [lindex $LastRecordList 0]
# Check to insure that we have a good last record number.
if {$LastRecord ne "" && $LastRecord > 0} {
# Good LastRecord Number
#if {$debug} {puts "Last Record Number: $LastRecord"}
puts "Initializing table ${table}_$year with last record number of $LastRecord."
set query "insert into ${table}_$year set msgnum = '$LastRecord';"
#if {$debug} {puts "Query: $query"}
if {[catch {mysql::exec $dest_conn $query} errmsg]} {
puts "** Unable to create table: ${table}_$year ."
puts "** Query: $query"
puts "** Err Message: $errmsg"
puts "** Non-critical error. Continuing."
}
} else {
# Bad LastRecord Number
puts "** Unable to get valid last record number from query:"
puts "** $query"
puts "** Renaming table: ${table}_$year"
catch {set numrecs [mysql::sel $dest_conn "select count(*) from ${table}_$year" -flatlist]} errmsg
set numrec [lindex $numrecs 0]
if {$numrec < 2} {
# tiny table - drop
puts "** Blindly renaming table: ${table}_$year"
catch {mysql::exec $dest_conn "rename table ${table}_$year to ${table}_$year[clock seconds]"} errmsg
puts "** Result: $errmsg"
} else {
# Table contains more than 2 records. We need to give up.
# We should not be here unless things are really strange.
puts "*** Critical error – Newly created table apparently has multiple records."
puts "*** This is a highly unusual situation."
puts "*** There is no action taken – Continuing"
}
}
}
}
}
CleanupForExit
puts "$prog_name ending at [clock format [clock seconds]]"
puts "########################################################"
# done.
You will notice that I have code in there for “year tables” as I found that large index tables were too slow to work with.
This doesn’t include all of the details necessary to implement such a system, but hopefully it gives you some ideas about what you can do with a couple of instances of mysql and a couple of linux boxes.
BTW… Had I been proficient in Namespaces when I wrote this stuff, I would use them and some things would be better encapsulated and simplified.
I guess the above is why Charlie is not so sure about me. 😉
We’ve moved away from SMAT files. We save off trace files that we access via Splunk. Implementing Splunk is a little like loading the data into a SQL database for folks to access. Splunk allows for freezing older data that can still be retrieved. Using Splunk lets coworkers simply access a URL to search data going into or out of Cloverleaf, greatly simplifying testing efforts and freeing up my time.
I’ve worked on Corepoint interface engines and Rhapsody interface engines where data logging is nicely built in and there are great search capabilities across interfaces.
Here is our proc to save off trace files. We put this in the Inbound or Outbound TPS proc. We pass the directory/file as an argument.
proc save_message_to_file { args } {
keylget args MODE mode ;# Fetch mode
switch -exact -- $mode {
start {
# Perform special init functions
# N.B.: may or may not be a MSGID key
}
run {
# ‘run’ mode always has a MSGID; fetch and process it
keylget args MSGID mh
keylget args ARGS uarg
set SAVEFILE [ open [ keylget uarg SAVEFILE] a ]
puts $SAVEFILE [msgget $mh]
close $SAVEFILE
return "{CONTINUE $mh}"
}
shutdown {
# Doing some clean-up work
}
default {
error "Unknown mode '$mode' in save_message_to_file"
}
}
}
Splunk looks VERY interesting!!!
All,
Cloverleaf 6.1 has the option to use SQLite database functionality for SMAT (this is on by default in 6.1). It allows you to search across multiple SMAT files/databases and do the normal operations you do with the file-based SMAT. We are not on 6.1 in production, but have it installed on our development server and are starting to play around with it.
Jim Cobane
Henry Ford Health
That’s good to know, Jim.
Do you know if 6.x stores the messages in SQLite, or if it’s just the indexing (idx file) info?
Jeff Dinsmore
Chesapeake Regional Healthcare
It stores the message and the metadata within the SQLite database. There is another thread on Clovertech that shows the schema
https://usspvlclovertch2.infor.com/viewtopic.php?t=7354&start=0&postdays=0&postorder=asc&highlight=
Jim Cobane
Henry Ford Health
Also, please note that the SQLite DB files can be encrypted for “data at rest” and multiple threads can use the same database file!
-- Max Drown (Infor)
So using the SQLite option should allow archiving by message (timestamp) instead of file, if you use a SQL query to archive a message. This avoids the frustration of searching for a message in a smat file which just got cycle saved and compressed – you could always have a rolling 6 or 12 or 24 hour window of messages in the database.
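A rough sketch of that rolling-window idea, driven from Tcl with the sqlite3 package. The database file, table, and column names here are pure assumptions since the actual 6.1 SMAT schema is in the thread Jim linked; adjust to the real schema.
package require sqlite3
# Assumed names: live SMAT database thread_ib.smatdb, table smat_msgs, and a
# time_received column in YYYYMMDDHHMMSS form.
sqlite3 smatdb ./thread_ib.smatdb
smatdb eval {ATTACH DATABASE './thread_ib_archive.db' AS archive}
# keep a rolling 24-hour window in the live database
set cutoff [clock format [expr {[clock seconds] - 24*3600}] -format %Y%m%d%H%M%S]
smatdb eval {CREATE TABLE IF NOT EXISTS archive.smat_msgs AS
             SELECT * FROM main.smat_msgs WHERE 0}
smatdb eval {INSERT INTO archive.smat_msgs
             SELECT * FROM main.smat_msgs WHERE time_received < $cutoff}
smatdb eval {DELETE FROM main.smat_msgs WHERE time_received < $cutoff}
smatdb close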
Is any of the normal SMAT file functionality lost with the SQLite option?
I like the new SMAT file View / Reload option – quick and easy way to check for new messages.
But I also like the indexing and separate databases people use to store the most ‘popular’ HL7 field values, for quick querying. I do that as well, for timestamp, encounter #, MRN #, HL7 event, patient class, patient status, patient location, and admit & discharge dates. It’s a big time saver.
But one of the reasons I use that query is to quickly track a patient’s encounter, find the ADT messages for that patient and therefore figure out which SMAT files to uncompress so I can run detailed queries against the SMAT files.
We will look at changing our SMAT file approach, to see if we can make searching across multiple smat files easier, not only for ourselves, but also in the future, for users to be able to run canned queries, especially during heavy integration testing.
Peter
Peter Heggie
For those of you on 6.X, how do you handle the roll over from day to day? Do you create a new DB file for each day and store the older DB in another location with a date and time? Right now what we do is at midnight we “roll” each threads data files (idx and msg) to an archive directory that we use for searching and other things.
Trying to wrap my head around the SQLite option for “archiving” of the data for searching or “audit” capabilities. I can see that if you get a lot of data, the database files could get very large if you keep any number of days in a single database file.
Thoughts. TIA
Rob
I, also, am trying to understand the benefits of SQLite over files for SMAT.
There are no magical properties of a SQLite DB that will make the data appreciably smaller, so the msg/idx data that we’re storing in files today will consume comparable space in SQLite form. Therefore, if you’re archiving daily now, my assumption is that you’ll need to do the same with SMAT in SQLite.
The primary difference with SQLite, as Peter mentions, would be that you could move messages from the active SMAT DB to archive based on a point in time that’s “N” minutes/hours/days ago – thereby always having at least the most recent “N” minutes/hours/days’ in the current SMAT/DB.
I suppose you could do the same with SMAT files as well if you really cared to, but it should be easier with SQLite.
My concern with the schema that was listed in an earlier post is that there looks to be no indexing of the data. From personal experience, I’ve found that searching slows down dramatically as SQLite DBs get larger.
Does anyone have experience with search performance of SQLite vs the traditional file-based SMAT?
Jeff Dinsmore
Chesapeake Regional Healthcare
Let me clarify…
From personal experience, I’ve found that searching slows down dramatically as UNINDEXED SQLite DBs get larger.
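If the 6.x schema turns out to be unindexed, adding indexes after the fact is cheap to try from Tcl; the column names below are assumptions that would have to match the real schema.
package require sqlite3
sqlite3 smatdb ./thread_ib.smatdb               ;# assumed SMAT database file
smatdb eval {CREATE INDEX IF NOT EXISTS ix_time ON smat_msgs (time_received)}
smatdb eval {CREATE INDEX IF NOT EXISTS ix_mid  ON smat_msgs (message_id)}
smatdb close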
Jeff Dinsmore
Chesapeake Regional Healthcare
Come on guys, this is why you get paid the big bucks 😀
An sqlite database is a single file which, like any file, can be moved, copied or whatever. The downside to that would be your searches. You would have to know the date to know which database to search.
Why not simply write a routine that would run each day and remove any saved items over X number of days old? If you need to keep them, then roll them to an archived database somewhere where you have a lot of storage.
If you need an indexed database, build one. It would be simple enough to roll your data from the SMAT database to your indexed database. When it comes to indexing everyone wants something different so build and customize your own.
IMHO, sqlite SMAT storage is light years ahead of the old method if employed properly.
This seems like it would be a good topic for a Webinar for Infor to present…”SMAT Database Best Practices” 🙂
Just saying….
Jim Cobane
Henry Ford Health
Jim,
I agree. A 6.x SMAT DB webinar is a great idea.
To the Infor folks: Please let us know when…;)
Charlie,
I agree that SQLite has the potential to be better than a file-based approach. You’ve highlighted a few points, but please expand a bit on the improvements that the new DB SMAT brings to the table and what a proper implementation would be.
All,
Many of us discussing here have already built indexes outside of the file-based SMAT – to overcome their inherent segmented nature and the lack of a good search function. My hope is that a SQLite solution would address those limitations.
As far as I’m concerned, a SQLite solution would be Much more useful if:
1) Its indexing were extensible – so I could add IDX data columns (MR, encounter, service codes, etc) directly to the SMAT SQLite DB itself. That way, I could feed my additional IDX-type info directly to that DB rather than writing it to an external one. That would allow me to search in the Cloverleaf GUI based on my extended data elements.
2) It stored the relatively larger messages separately from the relatively smaller indexing (IDX) info. Then, instead of being able to keep a day’s worth of “current” data, I could keep a month – or three – or more, without negatively impacting search performance. That eliminates the “What day was the message sent?” question.
3) We (or Infor) could add indexes to the native SMAT DB columns to enhance search performance. This would be a gimme if we had the capability outlined in #1.
4) It handled archiving natively – so I wouldn’t have to do that myself. Even if the DB is just a file – and it’s not that hard to copy – all logs need to be trimmed/archived eventually. It would be Very Nice if this were a core engine function.
5) It searched/retrieved seamlessly across all of my current and archived data.
We have not yet installed 6.x. To those of you who have, does the SQLite SMAT solution support any of my wish-list points?
To clarify (and not to insult anyone’s intelligence), DB column indexes are not the same as the IDX-type data points that we refer to when talking about an external indexing database.
Jeff Dinsmore
Chesapeake Regional Healthcare
Greetings,
One and All, INFOR is sponsoring a Winter Showcase this spring in St. Paul, Minnesota, to review, among other products, the next Integrator 6.1.1.
We plan to ask about the new SMAT database functionality as we are still using the legacy method (idx/msg pair).
I suggest that you ask INFOR about any such showcase opportunities coming around in your area.
As for the BIG BUCKs Charlie…
Enjoy.
If you use a single database table and do a “rolling record move” or a “rolling record delete (delete from table where record < tooold)”, I believe you have to run a periodic “vacuum” command to physically reclaim the space from the deleted records. This will also keep your DB file small.
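In SQLite terms, that cleanup could be as small as the sketch below (table and column names are assumptions). Note that VACUUM rewrites the whole database file, so it needs scratch disk space roughly equal to the database size and is best run off-hours.
package require sqlite3
sqlite3 smatdb ./thread_ib.smatdb               ;# assumed SMAT database file
set tooOld [clock format [expr {[clock seconds] - 7*86400}] -format %Y%m%d%H%M%S]
smatdb eval {DELETE FROM smat_msgs WHERE time_received < $tooOld}
smatdb eval {VACUUM}
smatdb close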