Jason Russell

Forum Replies Created

Viewing 15 replies – 1 through 15 (of 37 total)
  • in reply to: Log control #121752
    Jason Russell
    Participant

      The script works great; now I have to figure out how to get it to pull the environment variables in cron so it will actually run automatically.
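
      For anyone doing the same, a minimal sketch of the direction I'm heading (the schedule and script path are just examples, and sourcing the hci user's .profile to pick up HCIROOT and the rest of the Cloverleaf environment is an assumption about where your environment gets set):

      # Hypothetical crontab entry for the hci user: pull in the Cloverleaf
      # environment before running the cleanup script, which does its own logging.
      0 2 * * * . /home/hci/.profile && /sitedata1/scripts/clLogClean.ksh > /dev/null 2>&1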

      in reply to: EPIC integration with Cloverleaf #121750
      Jason Russell
      Participant

        Epic is pretty flexible on how it gets and displays data. I'm not sure if they can process textual data into discrete fields, but it wouldn't surprise me if they did. That will be handled by your Beaker analyst and their TS. Typically, if you want it to fill discrete elements you'll have to mark it as such. I can't say much in the way of CCDAs; we don't touch those at all. Those are handled by different groups (Grand Central or HIM, I believe). One of the decisions that was made is that our Bridges group would not handle any FHIR or "API hooks," as some refer to them. Most data imports are going to be handled by non-Bridges folks (usually the group the data is going to). Things sent to or picked up via SFTP are not handled by the interface group (aside from facilitating the transmission of said SFTPs in many cases).

        You may find that some of the reach and some of the things you're doing in your current setup are no longer your responsibility in Epic.

         

        in reply to: Log control #121744
        Jason Russell
        Participant

          So I think the primary difference between our scripts is that yours is meant to be run manually, while mine is automatic. It writes to a log file, so I get a log each day of what happened. I may update it so that those log files are also kept for 14 days for posterity, but that's neither here nor there. I've removed a lot of the comments since this code doesn't really need many; it's pretty straightforward.

          log="/sitedata1/tmp/clLogClean.log"
          logDays=14

          SCRIPT=$(basename $0)
          script_start_time=$(date +"%B %e, %Y %T %Z %p")

          echo "======================= $SCRIPT begin time: $script_start_time =======================" > ${log}

          # Pull site list from server.ini. Script assumes no spaces in site names.
          siteList=$(grep "environs" ${HCIROOT}/server/server.ini | awk -F "=" '{print $2}' | sed 's/;/ /g')

          for siteDir in $siteList
          do
              site=${siteDir##*/}

              setsite $site

              echo "SITE: $site <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<" >> ${log}

              process_list=$(grep "process " $HCISITEDIR/NetConfig | awk '{print $2}' | sort -d -f)

              for process in $process_list
              do
                  loghistory_folder=$HCISITEDIR/exec/processes/$process/LogHistory
                  echo "PROCESS: $process" >> ${log}
                  echo "Cycling logs for ${process}..." >> ${log}
                  hcicmd -p $process -c ". output_cycle" >> ${log}

                  if [ -d "${loghistory_folder}" ]
                  then
                      echo "Removing files older than ${logDays} days:" >> ${log}
                      find ${loghistory_folder} -type f -mtime +${logDays} -print -delete >> ${log}
                  else
                      echo "PROCESS: $process does not have a LogHistory folder." >> ${log}
                  fi
                  echo >> ${log}
              done
              echo >> ${log}
          done

          echo >> ${log}

          I’ll get a proper file uploaded momentarily.

          in reply to: Log control #121742
          Jason Russell
          Participant

            Jay, thanks for the script. I'm probably going to co-opt it and make some changes (I work with absolute paths and don't cd around, so nothing gets changed from the wrong directory; I got bitten by that once). The only thing I noticed that seemed odd was a mix of tabs and spaces in the code (multiple people working on it, potentially?). Not sure if that was in the source or something weird happened when you uploaded it.

            Looks like grepping the processes out of NetConfig is the most convenient way. I'm still debating whether I want to use the server.ini to get a list of the active sites, or just do all of them. Otherwise, that is a straightforward script, thank you. I'll post my update to it if you'd like when I'm done.
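
            Roughly, the two options I'm weighing look like this (the second pattern assumes all sites live directly under the integrator root, which depends on your layout and naming convention):

            # Option 1: only the sites registered as active in server.ini
            grep "environs" ${HCIROOT}/server/server.ini | awk -F "=" '{print $2}' | tr ';' '\n'

            # Option 2: every directory under the integrator root that contains a NetConfig
            # (layout assumption -- adjust the pattern to your site naming convention)
            ls -d ${HCIROOT}/*/NetConfig 2>/dev/null | xargs -n1 dirname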

            in reply to: Log control #121741
            Jason Russell
            Participant

              I think the point of using TCL vs. BASH was to have an easy way to get the actively running processes. It'd be easy enough in *sh to go to the integrator root and walk our site names (we have a naming convention that has to be followed), but the intention of this script is to look at actively running processes and grab those. I'm copying that script into vim to look over it. I'm fairly proficient in KSH/BASH (KSH is our go-to; we're looking to move to something more modern, but it definitely works). At first blush it looks pretty straightforward.

              in reply to: Log control #121736
              Jason Russell
              Participant

                No. If you have log history turned on, it creates a folder in the process's directory. When the engine cycles (whether you restart the process or force it via command), it takes the .log and .err files, datetime-stamps them, and moves them into that subfolder.

                It won't let me post a screenshot (too lazy to save and upload), but the options for keeping log history are to have them removed after x number of logs or x number of KB. You can also automatically cycle by size, meaning once the log gets so large it automatically cycles out. None of these options are really what we want; we want time-based cycling. We could probably do this via the scheduler, but again, this is something we don't want to set up multiple times, so we're looking for a more global approach.
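
                For reference, forcing a cycle by hand looks something like this (the process name is just an example, and the exact timestamp format on the archived files may vary by version):

                # Cycle one process's .log/.err into its LogHistory folder
                hcicmd -p adt_to_lab -c ". output_cycle"
                ls $HCISITEDIR/exec/processes/adt_to_lab/LogHistory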

                in reply to: Log control #121734
                Jason Russell
                Participant

                  #TCL script to force all processes to cycle logs, then delete logs older than a set number of days.

                  set clearDays 10
                  set hciRoot ${::env(HCIROOT)}
                  set servIniPort [open ${hciRoot}/server/server.ini]
                  set servIniList [split [read $servIniPort] \n]
                  close $servIniPort
                  set environs [lsearch -regexp -inline $servIniList "environs="]
                  #puts $environs
                  set siteList [split [lindex [split $environs =] 1] ";"]
                  #puts $siteList
                  foreach sitePath $siteList {
                      set site [lindex [split $sitePath /] end]
                      puts "Site: $site"
                      set setsiteRes [catch {eval [exec ${hciRoot}/sbin/hcisetenv -site tcl ${site}]} err]
                      if { $setsiteRes } {
                          puts "Error in setting site:"
                          puts $err
                      }
                      set showroot [exec showroot]
                      puts $showroot
                      netconfig load ${sitePath}/NetConfig
                      set procList [netconfig get process list]
                      if { $procList == "" } { continue }
                      #puts "Site: $site"
                      #puts "Process List: $procList"
                      #foreach process $procList {
                      #    puts "Will run hcicmd -p $process -c \".output_cycle\""
                      #    set cycleRes [catch {eval [exec hcicmd -p $process -c ". output_cycle"]} err]
                      #    if { $cycleRes } {
                      #        puts $err
                      #        continue
                      #    }
                      #}
                      #exec [find ${sitePath}/exec/processes -maxdepth 3 -mtime $clearDays -type f -name *.log]
                      #exec [find ${sitePath}/exec/processes -maxdepth 3 -mtime $clearDays -type f -name *.err]
                      #
                  }

                   

                  So interestingly enough, it all calls correctly, but it doesn't seem to actually change the site. Seems like it's heading toward just manually moving and deleting the files.

                  in reply to: EPIC integration with Cloverleaf #121731
                  Jason Russell
                  Participant

                    TS is an Epic employee: basically your direct contact who is an "expert" in your area (Bridges, in this case). You will do the day-to-day work (port changes, starting/stopping, basic troubleshooting, setting up and getting details from vendors, etc.), and they will do the more complex stuff that requires deeper dives into other parts of Epic. They have access to what other facilities do (through their ticketing system, Sherlock, they can see non-PHI items for other hospitals). If you can't figure out the issue, they normally can, though sometimes it takes time. They also reach out to the actual Epic devs on really complex issues (we've hit that a few times).

                    You'll basically continue doing the same thing you're doing now, except you have a bit more control in Epic, setting PVs to determine how your specific interface in Epic works. As part of your implementation, you will be sent to take the Bridges class (do it in person; I will admit their campus is amazing) and get certified. You're not supposed to work in Epic (on the back end) unless you're certified. When I did the class, it was two days, and some of those certifications (Cogito, Beaker, Willow, etc.) can take 1-2 weeks or more.

                    Honestly, your state query (the one we're fighting with now is the vaccine query) should really be FHIR or SOAP/HTTPS, which really simplifies things, but it depends on the state itself. Like I said, NC is behind the curve with this technology, so your mileage may vary, but it sounds like you already have some experience in that.

                    I’ll be more than happy to answer any question I can based on my experience.

                    in reply to: EPIC integration with Cloverleaf #121727
                    Jason Russell
                    Participant

                      We’ve been with Epic since 2017 (live, started the project in 2016). There are a few key points about Epic and LOTS of decision making that will happen (both at your level and much higher than your level).

                      • No Epic integration is the same. There will be different teams that do different things.
                      • You will have some modules that others don’t and vice versa, just depends on what your organization decides to spend on.
                      • Epic has a lot of solutions (that cost money) that make many integrations much easier.
                      • From what I can tell, neither Epic Connect nor Epic Direct is tied into "integrations", meaning Cloverleaf will have little to nothing to do with those.
                      • Direct Messaging is handled through Care Everywhere.
                      • Epic Connect is also a MyChart feature and, if I'm thinking of the correct item, it's for sending data from facilities through Epic. No "integrations" done.

                      If I’m thinking of the wrong things, feel free to correct me. Epic has MANY different modules with MANY different names.

                      So, a super basic high-level item with Epic: ALL of their master records have a 3-letter ID that tells you what they do. AIPs are your interfaces. AIK is the Interface Kind, which determines how your interface functions. EPT is the patient master record, and so on. There are hundreds of these; most you'll never touch.

                      From an "integrations" standpoint (HL7, FHIR, etc.) you will be working with Bridges, which is their primary module for HL7 and X12 connections. These are the interfaces that will connect to Cloverleaf and back. You will do no real coding in Epic, just setting profile variables (PVs) on your interfaces (AIPs) to make the interface "do things". There is custom code they can do (programming points), but it is incredibly basic and will normally be handled by your TS (technical service) person.

                      Some "Epic funny business" I've seen from our implementation specifically: Epic has a hard time differentiating between A31 (person-level) and A08 (visit-level) messages. While this is fine most of the time, there are some systems that don't want visit information in their A31s and need visit information in their A08s, and Epic just doesn't do that sometimes. Again, most of the time it doesn't matter, but for some systems it may, so you may have to check for those.

                      Develop everything at a minimum of HL7 2.5+. Epic doesn’t really have a “Version ID” and they use a lot of bits and pieces from various HL7 standards. To save yourself a lot of effort, skip 2.3.1 completely and just jump to 2.5, if that is even a consideration. It will make your structures a lot easier to manage.

                      Something newer: typically most of your interfaces will be outgoing data (ADT, orders, scheduling, etc.) and sometimes a response (results, documents, etc.). However, they've been moving to a weird hybrid of bidirectional interfaces where they will send a message and expect a response on that outgoing interface, except they actually jump those replies over to an actual inbound interface to route back into Epic. Brush up on your ACK levels (commit vs. application) and "extended acknowledgement" using MSH-15/16 to determine IF and HOW they want replies. You'll run into this when you start doing device integrations and potentially with state-level integrations IF you go the HL7 route and not SOAP over HTTPS. We're in NC and … NC is behind in that, so it may not be an issue.
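
                      As a quick illustration (the field values here are made up, not Epic-specific), MSH-15/16 in an enhanced-acknowledgement setup look like this:

                      MSH|^~\&|SENDAPP|SENDFAC|RCVAPP|RCVFAC|20240101120000||ORU^R01|00012345|P|2.5.1|||AL|ER

                      MSH-15 (accept acknowledgement type) = AL asks for a commit-level ACK on every message; MSH-16 (application acknowledgement type) = ER asks for an application-level ACK only on error. NE in either field means that type of reply isn't expected at all.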

                      In the 7 years we’ve been on Epic, I’ve noticed a trend where Epic is trying to do EVERYTHING. They do a LOT of FHIR integrations, but they give a few for free then start charging for it. They are pretty convenient, but it’s going to be a $$$ issue. They’re now starting to implement their “not an engine” Space Bridge where they can do basic filtering and whatnot, so I fully expect there to be a lot of expansion on that in the future. Not sure how far they’ll go with it, but considering how they’ve been so far, it wouldn’t surprise me if they tried to replace conventional interface engines.

                      Epic is a pretty big deal and a pretty big company. There are a LOT of moving parts in Epic, and a lot of times things get lost in the aether. There are a lot of other minor things that we've come across that may or may not be applicable in your situation (physician IDs (use NPI!), having a HAR ADT and a CSN ADT depending on your needs, etc.). It's a big project, but Mama Epic will be with you every step of the way. They will have loads of people on their side helping you implement this and guide you, but a lot of their "guidance" will be keeping you in the Epic fold as much as possible.

                      Good luck!

                      Jason Russell
                      Participant

                        Glad I stumbled across this; we are definitely going to be running into this very soon (we use eCare Next/Passport for RTE, address verification, and NOA through eGate currently). I don't think we'll have a throughput that will require multiple servers, but that may also be why we have some really wonky issues with eGate and connectivity that we were simply unaware of. I remember having to fight with them to get their encapsulating characters correct, and we'll have to bring those over.

                        in reply to: Permanently stop monitorD #121713
                        Jason Russell
                        Participant

                          Thank you all. Kind of silly that you can't temporarily disable that, or disable it for specific sites/processes.

                          in reply to: Permanently stop monitorD #121700
                          Jason Russell
                          Participant

                            I do have GM; that specific site was removed. What process did you follow to make sure GM wasn't affecting it?

                            in reply to: Get host xlate name in included xlate #121647
                            Jason Russell
                            Participant

                              We haven't had the opportunity to use INCLUDE in our xlates yet, but when you INCLUDE, does it pass variables down to the included xlate file? I.e., define a variable before including it (call it @xlatehost or whatever), then call that variable in the included xlate if necessary? I'm not sure there is a global variable that would have that directly (again, I haven't researched the topic much yet).

                              Jason Russell
                              Participant

                                I’m fairly certain that when inserting a node into the inbound, it splits the node into separate elements in the xlatInList. So if your name was LAST^FIRST^MIDDLE, your xlate in vals would look like this:

                                {@xlt_put_userdata_debug} {@xlt_put_userdata_key} {LAST} {FIRST} {MIDDLE} {@xlt_put_userdata_key_override} {@xltecho_debug}

                                and you are expecting:

                                {@xlt_put_userdata_debug} {@xlt_put_userdata_key} {LAST^FIRST^MIDDLE} {@xlt_put_userdata_key_override} {@xltecho_debug}

                                If that's the case, you will have to define a variable (@name or whatever), copy PID-5 into that variable, then pass the variable into the translation line you're attempting.

                                in reply to: Upgrade 6.2.2 to 2022.09.03 on RHEL 8.9 #121645
                                Jason Russell
                                Participant

                                  We had a similar issue; as Jim noted, the umask in your .profile (/home/hci/.profile) was what was causing it. We keep all of our Cloverleaf directories (and subordinate files) as 700 (drwx------.), so our umask is the inverse of that: 077. When we initially started, it was something off like 022.
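
                                  In practice it's just a one-line change in the hci user's .profile (077 is our value; match it to whatever permissions your site standard calls for):

                                  # /home/hci/.profile
                                  umask 077   # new directories come out 700 and new files 600, owner-only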

                                  Once the umask was set correctly, we had no issues.

                                  For reference, we noticed the issue when we moved from RHEL 8.3 to 8.7 (We’re up to 8.10 now).
