Steve Robertson

Forum Replies Created

  • in reply to: File protocol #71026
    Steve Robertson
    Participant

      I'm just going to add my 2 cents. While what you propose will work, it may not work for long. It's been our experience that UNC paths for file I/O lead to a panic sooner rather than later. I also can't recommend a drive-letter-mapped network drive; same problem.

      What we've seen is that if the network location is unavailable or even just slow, Cloverleaf doesn't like it. This is with 5.4.1 (and earlier releases) on Windoze Server.

      Our solution is to always do file I/O on a local drive and use various non-Cloverleaf mechanisms to transfer files between the local drives and network drives. Over the years those have been VB scripts run from the Windows scheduler, a utility called Opalis, and most recently Informatica PowerCenter.
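      If it helps, here is a rough sketch (not our production code) of what a local-to-network copy step can look like in plain tcl. The directory paths and the .hl7 extension are made up for illustration:

      # Sketch only: sweep files from a local outbound directory to a network share.
      # Paths and file extension are illustrative, not our actual setup.
      set localDir   "D:/cloverleaf/outbound"
      set networkDir "//fileserver/interfaces/outbound"

      foreach f [glob -nocomplain -directory $localDir *.hl7] {
          if {[catch {file copy -force $f $networkDir} err]} {
              # Leave the file in place and try again on the next run
              puts stderr "copy failed for $f: $err"
              continue
          }
          file delete $f
      }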

      in reply to: Oracle stored procedure to update trigger database #62040
      Steve Robertson
      Participant

        I don’t read posts very often. But here is my 2 cents …

        You have gotten very good advice so far. In my shop, we also write data to Oracle via ODBC calling a stored procedure.

        Let me mention that opening a new connection to Oracle for each write, then closing it, is by far the easiest way to go, since error handling is so easy. However, it is also very slow; if you have much volume, you're going to be sunk. We route about 250,000 messages per day to our database, spread over 8 Cloverleaf sites, each site having its own DB connection.

        Much faster to hold a persistent connection. But much harder.

        If you are interested, let me know and I can e-mail you a tcl that has the code to hold a connection, retry for failures, etc. There are at least 2 specific Oracle error codes that you have to contend with properly.
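        In the meantime, the general shape of the retry logic looks something like the sketch below. It assumes you already have a statement handle ($hstmt) and SQL text ($sqltext) set up, uses the odbc SQLExecDirect call the same way I show elsewhere on this forum, and calls a hypothetical helper of your own (open_db_connection) to re-establish the connection. The specific ORA- codes you trap depend on your environment, so treat the ones in the comment as placeholders:

        # Sketch of retry-around-a-persistent-connection logic (not a drop-in proc).
        set maxRetries 3
        for {set try 0} {$try < $maxRetries} {incr try} {
            set rc [odbc SQLExecDirect $hstmt $sqltext SQL_NTS]
            if {$rc == "SQL_SUCCESS" || $rc == "SQL_SUCCESS_WITH_INFO"} {
                break
            }
            # On a lost-connection style error (for example ORA-03113/ORA-03114 --
            # check your own environment), tear the handles down and reconnect
            # via your own helper before retrying.
            set hstmt [open_db_connection]
        }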

        Steve Robertson

        in reply to: odbc – multiple tables to update #67325
        Steve Robertson
        Participant

          We use stored procedures, too. Actually, we use packages/package bodies in Oracle.

          in reply to: ODBC connection #59941
          Steve Robertson
          Participant

            Mark, yes, a large table will be a problem. Just create a table with one row or zero rows. Its only purpose in life will be to respond to the ODBC test.

            If anyone reading this has Oracle, just use a query to dual to check the connection. (For non-Oracle folks out there, dual is a system “pseudo table” that always exists. It is used for just such occasions as this.)
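            For example, something along these lines, using the odbc command and a statement handle you have already allocated:

            # Minimal connection check against Oracle's dual pseudo table.
            set sqltext "select 1 from dual"
            set sqlreturn [odbc SQLExecDirect $hstmt $sqltext SQL_NTS]
            # If $sqlreturn does not come back as a success code, the connection is suspect.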

            Best regards,

            Steve Robertson

            TeamHealth, Inc.

            in reply to: ODBC connection #59937
            Steve Robertson
            Participant

              Todd, this has worked out pretty well. We've been using it in production for over a year now.

              Steve Robertson

              TeamHealth, Inc.

              in reply to: Filter based on ADT^A38 #62489
              Steve Robertson
              Participant

                Rickey, I think maybe you're not setting the dispList correctly. I'm pretty sure you need to CONTINUE the messages you want to keep processing. Also, I lappend to the dispList rather than set it; that might or might not be necessary.

                I’m not an expert on tcl syntax nor on the engine. I know there are often many ways to accomplish the same thing. But I have a message filter that works for me. Here is how I do it:

                Edit — I left out some lines! The code below should work for you.

                set msg [msgget $mh]
                set Separator_1 [cindex $msg 3]
                set Separator_2 [cindex $msg 4]
                set splitmsg [split $msg \r]
                set mshseg [lindex $splitmsg 0]
                set fields [split $mshseg $Separator_1]
                set msgtype [lindex $fields 8]

                # Substitute underscore for subfield separator in message type
                set MsgType [split $msgtype $Separator_2]
                set msg_type ""
                append msg_type [lindex $MsgType 0]_[lindex $MsgType 1]

                if {$msg_type == "ADT_A38"} {
                    # Handle ADT_A38 however you need to: KILL it here, or swap
                    # the dispositions if A38 is the type you want to keep.
                    lappend dispList "KILL $mh"
                } else {
                    lappend dispList "CONTINUE $mh"
                }
                return $dispList

                in reply to: Threads losing connection #61636
                Steve Robertson
                Participant

                  Doug,

                  We’ve had similar problems from time to time in the past. We never could figure out exactly what the problem was, but we always suspected some kind of router timeout on the foreign networks.

                  We've used a couple of different workarounds. One is to write an OS-level script to cycle the threads. Actually, if you do this, I would recommend cycling the processes rather than the threads; we had lots of memory leaks leading to panics when we used a script to cycle threads. We are running 5.4.1 on Windoze, by the way.

                  The other approach is to set up a timer thread that periodically sends what I would call a “heartbeat”: just a tcl proc to generate an HL7 message header segment with the message type set to something innocuous. You will likely have to coordinate this with the other systems so that they know to filter out these messages. I can send/post some tcl that we have used if you like.
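                  As a rough illustration (not what we actually run), the proc that builds the heartbeat message can be as small as this; the facility/application values and the Z-type message name are placeholders you would coordinate with the other systems:

                  # Sketch: build an innocuous HL7 message consisting of a lone MSH segment.
                  # Field values here are placeholders, not what we actually send.
                  proc build_heartbeat {} {
                      set ts [clock format [clock seconds] -format "%Y%m%d%H%M%S"]
                      set msh "MSH|^~\\&|CLOVERLEAF|LOCAL|THEIRAPP|THEIRFAC|$ts||ZHB^Z01|$ts|P|2.3"
                      return "$msh\r"
                  }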

                  in reply to: How do I test tps code? #61417
                  Steve Robertson
                  Participant

                    Ah ha! We do something similar here. We have some clients that are groups of hospitals that all use a single HL7 feed to us. But we typically have multiple hospital groups sending us HL7 over different ports on the same Cloverleaf site. Because we use a common thread to write translated message data to a database, we need to assign a hospital identifier to each message. While each hospital within a group has a unique identifier value, those values might be the same as another group's.

                    So we have multiple inbound threads, each using overlapping hospital identifiers and a single outbound thread that writes to a database.

                    So what we do is have a translation (or set of translations) per inbound thread (hospital group). Each inbound thread has a Cloverleaf lookup table. Each translation does a table lookup on its table to translate the facility ID assigned by the hospital group to the enterprise-wide ID that we want.

                    It’s very simple. I suspect you will be able to do the same thing for your provider IDs. You can even assign a default value in a lookup table such that if the input value isn’t found, the output value will be the default. You could assign a default value that indicates that manual intervention is needed.

                    While you certainly can change message data in a TPS inbound tcl proc, it’s easier to do in the translation, provided all you need to do is a Cloverleaf table lookup. We typically only change inbound (pre-translation) message data in tcl when we need to remove or replace spurious characters that baffle the Cloverleaf xlate thread.
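                    If you ever do need the same lookup from tcl rather than in the translation, Cloverleaf exposes lookup tables through the tbllookup command. A one-line sketch, assuming tbllookup behaves in your release the way it does in ours; the table name is illustrative, and whether you include the .tbl extension depends on how your site is set up:

                    # Sketch: translate a hospital-group facility ID to the enterprise-wide ID
                    # using a Cloverleaf lookup table (table name is made up for illustration).
                    set enterprise_id [tbllookup facility_map $group_facility_id]
                    # If the table defines a default, unmatched inputs come back as that default,
                    # which is handy for flagging records that need manual review.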

                    Steve Robertson

                    TeamHealth, Inc.

                    in reply to: How do I test tps code? #61415
                    Steve Robertson
                    Participant

                      Dennis,

                      I’ll echo Jim’s suggestion that you kill unwanted messages before the xlate. It’s really quite easy in a simple tcl proc. Much easier than killing during translation. The only downside is your tcl code has to split the message to find the right segment(s) and field(s).

                      We receive HL7 from a large number of client hospitals. Many of them will send spurious message types that we don't want to process, so a colleague and I created a tcl message filter. Not quite the same thing you need to do, but pretty close.

                      I call the message filter proc in the TPS Inbound Data box on the Inbound tab on the Thread Configuration.

                      If you want the code, let me know and I’ll post or email it to you.
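                      In the meantime, here is the bare skeleton of what such a filter proc looks like: the usual Cloverleaf TPS argument handling, with the actual filtering test left as a stub. The proc name is just an example:

                      # Skeleton of a TPS inbound filter proc. Only the run mode does real work.
                      proc tps_filter_msgs { args } {
                          set mode [keylget args MODE]
                          set dispList {}

                          switch -exact -- $mode {
                              start { }
                              run {
                                  set mh [keylget args MSGID]
                                  set msg [msgget $mh]
                                  # ... split $msg and inspect the segment/field you care about ...
                                  # Replace this placeholder test with your real keep/kill logic
                                  if {1} {
                                      lappend dispList "CONTINUE $mh"
                                  } else {
                                      lappend dispList "KILL $mh"
                                  }
                              }
                              time { }
                              shutdown { }
                          }
                          return $dispList
                      }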

                      Steve Robertson

                      TeamHealth, Inc.

                      in reply to: odbc question #61400
                      Steve Robertson
                      Participant

                        Ryan,

                        I don’t see right off what the problem is.

                        However, may I suggest using SQLExecDirect instead of SQLPrepare/SQLExecute? It's a whole bunch easier. We have a large daily volume and have no performance issues with ODBC.

                        Here is some sample code where I insert into a table some info about unknown HL7 message types:

                        set sqltext "insert into hl7.ib_unknown_msg_types (record_created_date, facility, filter_table_file, message_type, raw_message_type, message_timestamp, message_control_id, message_disposition) values (SYSDATE, '$msg_facility', '$filter_table', '$msg_type', '$msgtype', '$msg_datetime', '$msg_controlid', '$unk_disp')"

                        set sqlreturn [odbc SQLExecDirect $hstmt $sqltext SQL_NTS]

                        Then I can check $sqlreturn for success or failure.
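                        The check itself is just a string compare on the return code; something like this (the exact set of success codes may vary by release, so verify against yours):

                        if {$sqlreturn != "SQL_SUCCESS" && $sqlreturn != "SQL_SUCCESS_WITH_INFO"} {
                            # Log it, then decide whether to retry, hold the message, or error it out.
                            echo "insert failed: $sqlreturn"
                        }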

                        Also, to go along with Ryan’s thought, be very sure that the actual values for all fields of data you’re attempting to insert are no larger than defined in the database.

                        Steve Robertson

                        TeamHealth, Inc.

                        in reply to: Killing in Write TPS UPoC #61125
                        Steve Robertson
                        Participant

                          Daniel,

                          I just thought of something else: If you set up your thread like I suggest (PROTOCOL: file), the tcl proc that writes to ODBC will have to kill ALL messages, even the ones that go to the database successfully. If you don’t kill them all, they will fill up the Cloverleaf recovery database.

                          Steve

                          in reply to: Killing in Write TPS UPoC #61124
                          Steve Robertson
                          Participant

                            Charlie,

                            I see where I'm in error: in the original post, I missed that the thread was set up as PROTOCOL: upoc. My threads that do something similar are set up as PROTOCOL: file (even though I'm not writing to a file) and the tcl proc is invoked on the Outbound tab. The KILL/CONTINUE will act differently in that case.

                            Steve

                            in reply to: Killing in Write TPS UPoC #61122
                            Steve Robertson
                            Participant

                              Daniel,

                              So maybe Cloverleaf doesn’t support killing a message on a PROTOCOL: upoc? Do you have to use PROTOCOL: upoc?

                              Maybe the problem is the way you have the thread set up. I went back to check my threads that use ODBC to write to the database and then kill the message. Mine are set up as PROTOCOL: file, even though I'm not writing to a file. I specify my tcl proc (the one that does the ODBC stuff and kills the message) in the TPS Outbound Data field on the Outbound tab.

                              Give that a try.

                              Best regards,

                              Steve Robertson

                              Team Health, Inc.

                              in reply to: Killing in Write TPS UPoC #61120
                              Steve Robertson
                              Participant

                                I have to disagree with Bill. The CONTINUE disposition will leave the message in Cloverleaf’s recovery database, where it will reside for eternity. Or until you delete it.

                                Here is how I kill a message in a write UPoC:

                                lappend dispList "KILL $mh"

                                $mh is the message handle. The KILL disposition needs to be in double quotes. The dispList is returned to Cloverleaf and the proc is exited using this statement:

                                return $dispList

                                in reply to: budgeting development hours & ODBC module #60619
                                Steve Robertson
                                Participant

                                  Kathy,

                                  FWIW, it took me about 80 hours or more to get the ODBC interface working the way we needed it to. I wrote our code to hold a persistent connection to the database, which made it harder. I had to find out the hard way which error codes from the database (Oracle here) needed to be trapped and handled in tcl.

                                  Email me if you want more details – I don’t check Clovertech regularly.
