sqlite insert/update slowness with Epic ADT

  • #113277
    Jeff Dawson
    Participant

    Hello,

    Since not all of our ancillary applications send back a full provider list, we use the ADT feed from Epic to capture those provider roles, along with a few other ADT data elements, per encounter (Epic CSN), saved in a SQLite database file.  This lets us query who the PCP was on a lab result, since our lab application doesn't send this information on the result.  Currently we have one SQLite database file with 15 tables (one per facility), which has grown to around 500 MB.  From the SQLite documentation, this size is well below the maximum database file size.

    As more of our facilities are added to Epic, our ADT feed volume has increased.  We have weeded out a lot of the noise from duplicate A08 and A31 updates, but over the past few weeks we've noticed recovery database queues in the site that houses this process.  The site in question has only 7 processes with 20 threads.  I'm curious whether anyone has experienced similar issues with high-volume feeds doing a similar process with SQLite.  In Test we are looking at splitting these facilities into their own separate database files, each with an individual table, to see if that improves performance.  This process does a SELECT and then an INSERT or UPDATE based on the result; the other interfaces that use this data only query against it.

     

    # begin process of writing vars to the sqlite table
    set dbFile "$HciRoot/sqlLiteDbs/ADT/EpicADT.db"

    # sqlite table name based on hospital and environment
    set tableName "${hosp}_${env}"

    # open the database
    sqlite SqDbHandle $dbFile

    # set write-ahead logging
    SqDbHandle eval {PRAGMA journal_mode=WAL; PRAGMA synchronous=NORMAL;}

    # create the table for the data if it does not already exist
    SqDbHandle eval "CREATE TABLE IF NOT EXISTS $tableName (msgDT TEXT, msgID TEXT, ptMRN TEXT, acctNum TEXT, har TEXT, pcp TEXT, atten TEXT, refer TEXT, consul TEXT, admit TEXT, serviceArea TEXT, room TEXT, bed TEXT, admitDT TEXT, dischDT TEXT, clinicID TEXT, clinicName TEXT, finClass TEXT, ptClass TEXT, optOut TEXT, FirstName TEXT, MI TEXT, LastName TEXT, DOB TEXT, Gender TEXT)"

    # query the table by account number to see if a record already exists;
    # \$acctNum is left unexpanded so the sqlite interface binds it as a
    # parameter, which is safe for values containing quotes (e.g. O'Brien)
    set dbMsgID [SqDbHandle eval "SELECT msgID FROM $tableName WHERE acctNum = \$acctNum"]

    if {$dbMsgID eq ""} {
        # record not found, so insert it into the table
        echo "tps_SaveEpicADMinfo.tcl ==> msgID $msgID inserting new record into table for acctNum $acctNum"
        SqDbHandle eval "INSERT INTO $tableName (msgDT, msgID, ptMRN, acctNum, har, pcp, atten, refer, consul, admit, serviceArea, room, bed, admitDT, dischDT, clinicID, clinicName, finClass, ptClass, optOut, FirstName, MI, LastName, DOB, Gender) VALUES (\$msgDT, \$msgID, \$ptMRN, \$acctNum, \$har, \$pcp, \$atten, \$refer, \$consul, \$admit, \$serviceArea, \$room, \$bed, \$admitDT, \$dischDT, \$clinicID, \$clinicName, \$finClass, \$ptClass, \$optOut, \$FirstName, \$MI, \$LastName, \$DOB, \$Gender)"
    } else {
        # update the existing record only when the incoming msgID is not older
        if {$msgID < $dbMsgID} {
            echo "tps_SaveEpicADMinfo.tcl ==> message ID $msgID is older than current messageID $dbMsgID for acctNum $acctNum so doing nothing"
        } else {
            echo "tps_SaveEpicADMinfo.tcl ==> msgID $msgID updating existing record in table for acctNum $acctNum"
            SqDbHandle eval "UPDATE $tableName SET msgDT = \$msgDT, msgID = \$msgID, ptMRN = \$ptMRN, har = \$har, pcp = \$pcp, atten = \$atten, refer = \$refer, consul = \$consul, admit = \$admit, serviceArea = \$serviceArea, room = \$room, bed = \$bed, admitDT = \$admitDT, dischDT = \$dischDT, clinicID = \$clinicID, clinicName = \$clinicName, finClass = \$finClass, ptClass = \$ptClass, optOut = \$optOut, FirstName = \$FirstName, MI = \$MI, LastName = \$LastName, DOB = \$DOB, Gender = \$Gender WHERE acctNum = \$acctNum"
        }
    }

    # close the database
    SqDbHandle close

    lappend dispList "KILL $mh"
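    The SELECT-then-INSERT/UPDATE round trip above can be collapsed into a single UPSERT statement (SQLite 3.24+), with acctNum as the unique key. A minimal sketch, using Python's sqlite3 module for illustration with a trimmed-down set of columns (names are hypothetical):

```python
import sqlite3

# Illustrative in-memory database; the real table has many more columns.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE IF NOT EXISTS adt (
    acctNum TEXT PRIMARY KEY,  -- ON CONFLICT needs a unique key
    msgID   TEXT,
    pcp     TEXT)""")

def save_adt(acct, msg_id, pcp):
    # One statement: insert, or update in place, but only when the
    # incoming msgID is not older than the stored one.
    db.execute("""
        INSERT INTO adt (acctNum, msgID, pcp) VALUES (?, ?, ?)
        ON CONFLICT(acctNum) DO UPDATE SET
            msgID = excluded.msgID,
            pcp   = excluded.pcp
        WHERE excluded.msgID >= adt.msgID""", (acct, msg_id, pcp))
    db.commit()

save_adt("A1", "100", "Dr. Jones")
save_adt("A1", "099", "Dr. Old")    # older msgID, row left unchanged
save_adt("A1", "101", "Dr. Smith")  # newer msgID, row updated
```

    This removes one query per message and leaves the "keep the newest msgID" rule inside the statement itself.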

Viewing 6 reply threads
    • #113278

      It's possible the slowness is caused by the locking and unlocking of the SQLite database. We have traditionally recommended that all threads that access the SQLite database be in the same process, to limit the need for locking and unlocking, because the process will only allow one thread access at a time.

      Just a thought.

      -- Max Drown (Infor)
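      Two settings worth knowing about here (a sketch in Python's sqlite3 module, not Cloverleaf-specific): WAL mode lets readers proceed while a writer works, and a busy timeout makes a blocked writer wait for the lock instead of failing immediately.

```python
import sqlite3, os, tempfile

# Two connections stand in for two threads/processes sharing one database file.
path = os.path.join(tempfile.mkdtemp(), "adt.db")
conn1 = sqlite3.connect(path, timeout=5.0)  # timeout installs a busy handler:
conn2 = sqlite3.connect(path, timeout=5.0)  # wait up to 5 s for a lock, don't error
for c in (conn1, conn2):
    c.execute("PRAGMA journal_mode=WAL")    # WAL: a writer does not block readers
    c.execute("PRAGMA busy_timeout=5000")   # same setting at the SQLite level

conn1.execute("CREATE TABLE IF NOT EXISTS t (k TEXT PRIMARY KEY, v TEXT)")
conn1.execute("INSERT INTO t VALUES ('a', '1')")
conn1.commit()

# The second connection reads committed data without waiting on the writer.
print(conn2.execute("SELECT v FROM t WHERE k = 'a'").fetchone()[0])
```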

    • #113279
      Jeff Dawson
      Participant

      Thanks Max,

      Originally, when we built this out, the Tcl proc that performed these SQLite functions for these facilities was all in one process.  After more facilities were added we had quite a backlog of messages in the recovery database, which is when we tried splitting this out into regions per facility/process; that got rid of the queue for a while.  The IL process was recently split out as a stop-gap fix and has helped with the queue once again, but this has me thinking the insert/update process we are doing could be more efficient.  Attached is a screenshot showing how it's currently broken into processes, which has helped.

       

       

      Attachments: screenshot of the current process layout
    • #113281

      Would it be possible to use separate sqlite databases for each process?

      -- Max Drown (Infor)

    • #113282
      Jeff Dawson
      Participant

      That's what we are currently going to try out in Test.  It still makes me wonder, though: if a large facility has a burst of ADT, is there a more efficient route than the per-message insert/update we are doing with SQLite and Tcl?  Most of what I've read on the web falls under mobile phone development, which uses different SQLite APIs.  On the Tcl SQLite side I haven't found much about what has worked for other high-volume SQLite databases doing a similar process.
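      For bursts specifically, one generic SQLite technique (sketched here in Python's sqlite3 module; table and column names are illustrative) is to apply a batch of messages in a single transaction, so the journal-sync cost is paid once per batch instead of once per message:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE adt (acctNum TEXT PRIMARY KEY, msgID TEXT)")

# A burst of 5000 ADT updates applied as one transaction: with the module's
# default transaction handling, everything before commit() is a single
# transaction, so the journal is synced once rather than 5000 times.
rows = [("A%d" % i, str(i)) for i in range(5000)]
db.executemany(
    "INSERT INTO adt (acctNum, msgID) VALUES (?, ?) "
    "ON CONFLICT(acctNum) DO UPDATE SET msgID = excluded.msgID",
    rows)
db.commit()
print(db.execute("SELECT COUNT(*) FROM adt").fetchone()[0])  # 5000
```

      Whether batching fits a store-and-forward engine depends on how much latency per message is acceptable, since rows are only visible to readers after the commit.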

    • #113283
      John Mercogliano
      Participant

      One thing I noticed is that you do not appear to have any indexes.  As the tables get larger, your SELECTs and UPDATEs will start taking longer.  It looks like you only need one index, on acctNum.

      John Mercogliano
      Sentara Healthcare
      Hampton Roads, VA
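      The effect of that single index can be seen with EXPLAIN QUERY PLAN. A quick sketch in Python's sqlite3 module (table and index names are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE adt (acctNum TEXT, msgID TEXT)")  # no index, as in the original

def plan(sql):
    # EXPLAIN QUERY PLAN's last column describes how SQLite will run the query
    return db.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

print(plan("SELECT msgID FROM adt WHERE acctNum = 'A1'"))  # e.g. "SCAN adt": full table scan

db.execute("CREATE INDEX IF NOT EXISTS idx_adt_acct ON adt (acctNum)")
print(plan("SELECT msgID FROM adt WHERE acctNum = 'A1'"))  # e.g. "SEARCH adt USING INDEX idx_adt_acct"
```

      Without the index, every lookup and every UPDATE's WHERE clause scans the whole table, which is why the slowdown grows with table size.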

    • #113284
      Jeff Dawson
      Participant

      Thanks John,

      We are going to try creating an index for acctNum in Test and pump some volume through to see if this helps.

    • #113286
      Jeff Dawson
      Participant

      We started by redefining acctNum as INTEGER PRIMARY KEY and noticed a pretty drastic increase in performance.  I know that SQLite auto-creates a rowid primary key, but since we've run the VACUUM command on this database over a period of time, that could have affected search performance, and rowid alone is probably not optimal since acctNum is the column mainly being queried on.  Appreciate all the suggestions on this issue.
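      That result matches how SQLite stores rowid tables: a column declared INTEGER PRIMARY KEY becomes an alias for the rowid, so a lookup on it is a direct B-tree key search with no separate index at all. A quick sketch in Python's sqlite3 module (illustrative names):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# acctNum declared INTEGER PRIMARY KEY is an alias for the table's rowid
db.execute("CREATE TABLE adt (acctNum INTEGER PRIMARY KEY, msgID TEXT)")
db.execute("INSERT INTO adt VALUES (12345, 'm1')")

# rowid and acctNum are one and the same B-tree key
print(db.execute(
    "SELECT rowid, acctNum FROM adt WHERE acctNum = 12345").fetchone())  # (12345, 12345)

# the query planner confirms the lookup goes straight to the primary key
print(db.execute(
    "EXPLAIN QUERY PLAN SELECT msgID FROM adt WHERE acctNum = 12345").fetchone()[3])
```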
