glen goldsmith

Forum Replies Created

Viewing 15 replies – 1 through 15 (of 20 total)
  • in reply to: CL 6.1 SMAT file extensions? #81527
    glen goldsmith
    Participant

      Actually, you don’t have to do this.

      When you issue a save_cycle command, either from the GUI or via hcicmd, it takes care of these tmp files.

      These are not actual databases.

When you invoke the sqlite command on a DB that has these tmp files, they empty themselves and go away automatically and gracefully (i.e. no lost data; the temp files populate the smatdb as they should).

What this means: with the old .msg/.idx files we had a live SMAT viewer, so you could watch transactions being saved in real time, which was useful in troubleshooting. That won't work on the new smatdbs.
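
For reference, cycling from the command line looks roughly like this; the process and thread names are placeholders, and you should double-check the exact hcicmd arguments against your release:

  hcicmd -p your_process -c "your_thread save_cycle in"
  hcicmd -p your_process -c "your_thread save_cycle out"

Once the cycle runs, the tmp files are folded back into the .smatdb and disappear.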

      in reply to: CL 6.1 SMAT file extensions? #81525
      glen goldsmith
      Participant

The SQLite website discusses the -wal (write-ahead log) and -shm (shared memory) files:

        https://www.sqlite.org/tempfiles.html
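
If you ever want to force a checkpoint by hand, the stock sqlite3 CLI will do it (the filename here is just an example):

  sqlite3 your_thread.smatdb 'PRAGMA wal_checkpoint(TRUNCATE);'

When the last connection closes cleanly, SQLite checkpoints the WAL and removes the -wal/-shm files on its own.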

        in reply to: KSH and Cloversleaf 6.1 #82780
        glen goldsmith
        Participant

Yes, it does. Nothing different from 6.0, and from what I've seen, 5.8.5.

          Although, personally I use Bash (and Perl) for most things.

          in reply to: Script for Cycling SMAT database #82316
          glen goldsmith
          Participant

When you compress a smatdb and it doesn't get smaller, it's encrypted.

            You can’t compress encrypted files.

Currently, we have a Perl script that cycles all the smatdbs daily.

            We have SMAT History turned on.

We have another script that archives any files in the SmatHistory that are more than 7 days old.

These are compressed with 7za (7-Zip), which compresses much better than gzip (3 or 4 times better) but is a tad slower.
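
The archive step boils down to something like this; a sketch with made-up paths, not our exact script:

  #!/bin/bash
  # compress anything in SmatHistory older than 7 days, then drop the original
  find /hci/data/SmatHistory -type f -mtime +7 ! -name '*.7z' \
      -exec 7za a {}.7z {} \; -exec rm -f {} \;

The second -exec only fires if the 7za step succeeded, so a failed compression doesn't delete anything.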

The encryption option makes me wrinkle my nose:

            1.

            in reply to: Sharing Directory across VMs (redhat) #82416
            glen goldsmith
            Participant

              Hey Rob,

We have multiple VMs, and basically what we do is copy any changes over to the "master" VM (one that we arbitrarily chose) via rsync.

              We’re on 5.8.5 but upgrading to 6.1.  So no Boxes currently.

When new code is deployed, and after we verify it's good, we "sync.sh" the files.

Any changed files are uploaded to the master, and the script calls "sync.sh" on the other VMs, which pulls down any changes from the master. All via rsync.
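
sync.sh is nothing fancy; roughly this shape (hostnames and paths are made up for illustration, not our real layout):

  #!/bin/bash
  MASTER=cl-master
  if [ "$1" = "pull" ]; then
      # on the other VMs: pull whatever changed on the master
      rsync -av ${MASTER}:/hci/sites/ /hci/sites/
  else
      # on the VM where the change was made: push to the master,
      # then have each of the other VMs pull it down
      rsync -av /hci/sites/ ${MASTER}:/hci/sites/
      for vm in cl-vm2 cl-vm3; do
          ssh $vm "/hci/scripts/sync.sh pull"
      done
  fi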

              in reply to: SMAT regular expression compound search #82480
              glen goldsmith
              Participant

We're on 5.8.5, and I do compound searches occasionally.

Frankly, if I had to do several compound searches, I just converted the SMAT file at the command line and used grep or awk.
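
Once the SMAT file is converted to plain text, a compound search is just chained greps; filenames and patterns here are examples:

  grep 'ADT\^A01' adt_in_converted.txt | grep 'DOE\^JANE' | grep '201402'

Each pipe stage narrows the result, which gives you an AND across patterns without any regex gymnastics.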

                in reply to: Active Directory with Global Monitor #79378
                glen goldsmith
                Participant

We fully integrated Cloverleaf 6.1 with AD.

                  It does not require advanced security.

It's nice: no more certificate mess.

Works *very* well with the Cloverleaf IDE on Citrix.

                  I’m curious what modifications were made to allow Internet Explorer to work with Global Monitor.

                  in reply to: Error when running hcidbdump -r #77824
                  glen goldsmith
                  Participant

What I've done, when I've had a DB error of -902 and keybuild and dchain didn't save the bacon, is wipe out the recovery database.

After initializing the database and getting everything running again, we go back to the point when the error occurred and resend the previous 15 minutes of data (it was ADTs, so this was deemed safer than losing data).

After the data is resent, we allow queued messages from the source system into Cloverleaf again.
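
At a high level the sequence looks like this; the command flags are from memory, so treat them as placeholders and verify against your version's docs before touching a recovery database:

  hcienginestop -p your_process    # stop the process(es) using the corrupt recovery db
  hcidbinit                        # re-initialize the recovery database (destructive)
  hcienginerun -p your_process     # bring the process back up
  # then resend the last ~15 minutes from SMAT and re-open the inbound feed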

                    in reply to: MERGE Mirth #80199
                    glen goldsmith
                    Participant

A lot of vendors are starting to use it. It's free/cheap, and it's decent for small applications. MModal, MERGE, and Optum CAC are a few of the vendors.

We went through this with Carefusion on our Pyxis interfaces a while back, although it wasn't Mirth; they put in their own engine, etc. And they did require it.

                      GE uses cloverleaf — which is nice, we (developers and the support staff) know Cloverleaf very well.  We’ve even had to correct GE on the syntax and what commands to use 😉

                      in reply to: MERGE Mirth #80197
                      glen goldsmith
                      Participant

                        There are a couple of things…….

– Sure, it reduces the number of interfaces.

                        – For MERGE, all interfaces will use this — the current/old way will be legacy, and they may require it in the future.

                        Caveats of having 1 interface:

– You are correct: you will lose the ability to customize the interface to the specific application, plus whatever customizations you already have.

– You will probably pay for each customization on upgrades.

Mirth is open source; MERGE has their own version of it. They probably fixed up alerting, as it was pretty bad. Mirth just released version 3; I haven't used it extensively since the 1.7 days.

When it comes down to it, it would NOT be hard for them to just route your interface data, raw, to their application with Mirth. There *is* a middle ground!

Instead of having 1 ADT interface going to Mirth, you have 5: 1 for Hemo, 1 for PACS, etc. Same with orders, reports, and images coming back.

                        in reply to: CLoverleaf and I/O to disk #78294
                        glen goldsmith
                        Participant

Since we're on VMware, we've had to set SYNCFILES=1 in the rdm.ini, which dramatically slowed Cloverleaf (and increased IO) compared to when it wasn't on. However, even with this handicap, Cloverleaf is orders of magnitude faster than the hardware we came from.
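
For reference, it's a single setting in the site's rdm.ini; where it sits in the file varies by release, so treat this as a sketch:

  SYNCFILES=1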

We do have dedicated LUNs, and our SAN has SSDs; however, the IO demands of Cloverleaf aren't nearly as great as some of the other apps we have, so Cloverleaf hardly gets to flash.

                          in reply to: CLoverleaf and I/O to disk #78292
                          glen goldsmith
                          Participant

As a follow-up, the things we've done have dramatically dropped IO wait %.

What we have is a /sites and /data hierarchy. Before, we had the entire $HCISITEDIR on /sites and just stored archived SMAT files on /data.

                            What we’ve done now:

/sites still has $HCISITEDIR, except the $HCISITEDIR/exec/processes tree.

That's on /data now.

So your primary IO (log files, SMAT, etc.) is on one disk, and the Raima recovery/error/ICL databases are on /sites.

                            So now our IO is split between two disks.

Each disk now has its own controller. In VMware, there is a virtual controller for /sites, another for /data, and a third for everything else.

                            This seems to have cut our IO wait by 2/3rds.  We were in the 20-40% range on IO Wait %.  Now, most of the time, we’re less than 10%.

VMware has the same "bottlenecks" that a real controller has, so this would work on actual hardware too.
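
If you want to try the same split, one way to carve the processes tree out onto /data looks roughly like this (a generic sketch with example paths, not necessarily exactly how we did it; stop the site first):

  SITE=yoursite
  mkdir -p /data/$SITE
  mv $HCISITEDIR/exec/processes /data/$SITE/processes
  ln -s /data/$SITE/processes $HCISITEDIR/exec/processes

That keeps the Raima recovery/error/ICL databases on /sites while the high-churn process logs land on /data.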

                            in reply to: CLoverleaf and I/O to disk #78289
                            glen goldsmith
                            Participant

Open files are not a limitation, but the latency on our SAN is.

When you run 'top', CPU utilization comes across as a line like this:

                              Cpu(s): 36.4%us, 42.3%sy,  0.0%ni,  1.7%id, 16.1%wa,  0.0%hi,  3.6%si,  0.0%st

The 16.1%wa is IO wait: how much time the processor is waiting on disk. That, at this point, is our bottleneck: the SAN.
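
If you want to watch it over time instead of eyeballing top, vmstat and iostat (the latter from the sysstat package) are handy:

  vmstat 5        # the 'wa' column is the same IO wait percentage
  iostat -x 5     # per-device %util and await, which points at the busy LUN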

                              in reply to: Useful HL7 Scripts #65834
                              glen goldsmith
                              Participant

A table of HL7 2.2/2.3 names is already in the $HCIROOT/formats/hl7/2.3/etc directory, so getting them is easy as pie.

For years, I have used some sed functions to do the same things as your scripts; I've had to update them to Perl for 5.8:

# make a SMAT .msg file readable: break it out so each message starts on its own line
readhl7msg () {
  cat $1 | perl -pe 's/\{CONNID.*?\}MSH/\x0d\x0aMSH/g; s/\x0d/\x0d\x0a/g'
}

# strip the {CONNID ...} headers entirely so the output is plain HL7
converthl7msg () {
  cat $1 | perl -pe 's/\{CONNID.*?\}MSH/\x0d\x0aMSH/g; s/\{CONNID.*\}//g'
}
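
Usage is just (the filename and pattern are examples):

  readhl7msg your_thread.msg | less
  converthl7msg your_thread.msg | grep 'ADT\^A08'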

                                in reply to: CLoverleaf and I/O to disk #78287
                                glen goldsmith
                                Participant

We have Cloverleaf 5.8.5 on all VMs.

                                  All the storage is on SAN.

                                  Our IO is pretty intensive.

We do about 1/3 of the volume y'all do per day.
