Jason Russell

Forum Replies Created

in reply to: Multiple Cloverleaf GUI Windows #121833
    Jason Russell
    Participant

Have you opened Task Manager to see what the GUIs are trying to run?

I have also noticed something odd with our setup. We have virtual desktops specifically for Cloverleaf development, on the same virtual cluster as the servers. However, we use Imprivata OneSign SSO to log into the desktops, and it auto-logs us off after 10 minutes. When the box is logged off and one of the GUIs is still up, it causes a lot of CPU alarms through the vSphere client. Once you log back in, it dies back down. We've never found a root cause (it's hard to see what's hogging the CPU when you're logged off the box), and this may be somewhat related: it seems that when one of the GUIs is "idle" it causes problems. Again, it's really hard to find the root cause when you can't log in, even using remote PowerShell scripts to see what is running. If the GUI is completely closed out it never happens. It also happens a LOT less with 2022.09 than with 20.1.

As for the network speed, I'm not sure how Cloverleaf caches data, but it does seem to pull data every time you open things and move around. Like I said, with our setup we have a pretty open connection to the server itself, and it can still take time. 20.1 was AWFUL with this, especially in the testing tool; it would take forever to load. Once we moved to 2022.09, much of it became substantially faster.

      in reply to: FTP logging #121814
      Jason Russell
      Participant

I figured I'd resurrect my old thread. We are setting up a new SFTP connection. We created a key pair and gave the end system the public key. I can successfully log in with the private key on the command line with sftp -i <key> <username>@<host>. However, when attempting to set up the connection in Cloverleaf, I get an authentication error:

        * SSH public key authentication failed: Unable to extract public key from private key file: Wrong passphrase or invalid/unrecognized private key file format

The only thing I can think of is that we do have spaces in the passphrase, as well as an @ and an !. Would those conflict with Cloverleaf's ability to use the keys? I didn't see anything in the help that would indicate that.
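In case it helps, here is what I plan to try from the shell first; the key path is a placeholder, and I'd back the key up before rewriting anything:

# Confirm the passphrase and key format parse outside of Cloverleaf
ssh-keygen -y -f /path/to/private_key > /dev/null

# If the key is in the newer OpenSSH format, rewriting it as PEM (prompts for
# the passphrase, rewrites the file in place) may help clients that only
# understand the older format
ssh-keygen -p -f /path/to/private_key -m PEM

# To rule out the spaces/@/! theory, temporarily set a plain passphrase
ssh-keygen -p -f /path/to/private_key -N 'plainpassphrase' -m PEM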

        in reply to: Wizards question #121807
        Jason Russell
        Participant

We don't use the wizards (yet), but if you pull up your help file (through Infor's website, or locally if you aren't on a newer version), it's under Infor Cloverleaf Integration Services User > Cloverleaf wizard. Searching 'wizard' should bring it up.

          Online help link  (2022): https://docs.infor.com/clis/2022.x/en-us/useradminlib_on-premise/default.html?helpcontent=clfisolh_on-premise/ksj1491837470180.html&hl=wizard

From a high-level standpoint, it's another browser-based application that allows restricted access to engine components. I believe they are pushing to move a lot of functionality to the Wizard as they continue to develop it. It may make things easier on the developer in the future as well, but again, we're not in that deep yet.

          Jason Russell
          Participant

            Is this file overwritten daily, or incremented in some fashion (by datetime stamp or counter)?

             

edit: Also, is the file location mounted/attached to the computer as a drive, or will it need to be FTP'd in some fashion?

            Jason Russell
            Participant

This can be done, as Charlie stated, with fileset-local. However, you really want to either have the files renamed or removed so you don't pick up duplicates. Probably the easiest way is to have something outside of the Cloverleaf process (shell scripting, a timed event within Cloverleaf, etc.) rename the files to something else that Cloverleaf can pick up; a rough sketch follows the list below.

• An outside process places a file named pickmeup.fre.
• A script somewhere checks for files older than x date/time, or keeps a record of the last file picked up.
• It renames all files after that point to pickmeup.fre.clover (or whatever), and Cloverleaf will pick up the file(s) and delete them.
• Cloverleaf processes the files normally.
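A rough sketch of the rename step (directory, file name, and age threshold are all placeholders):

# Rename dropped files so the fileset-local thread can pick them up safely
DROPDIR=/path/to/dropdir
for f in "$DROPDIR"/pickmeup.fre; do
    # skip a file modified in the last 2 minutes, in case it's still being written
    if [ -f "$f" ] && [ -z "$(find "$f" -mmin -2)" ]; then
        mv "$f" "$f.$(date +%Y%m%d%H%M%S).clover"
    fi
done

The fileset-local thread would then match pickmeup.fre.*.clover and delete the files after reading them, so the original name never piles up.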

There are a lot of unknowns here, which makes it difficult to answer the question.

• Are the files sequential, or do they have date/time stamps?
• How many files are placed, and how often?
• Why is it a requirement to keep the files named the same, in the same location?
• Can they not be moved to a different directory after processing (as suggested)?

No matter what method is used, as the files build up in one location they will take longer and longer to parse, search, and process.

              in reply to: BASIC CLEANUP TASKS #121782
              Jason Russell
              Participant

Out of curiosity, what do you mean when you say "basic cleanup tasks"? We're on a Linux environment, so I'm wondering if this is a Windows-specific thing, or just general 'cleanup' meaning reducing databases if necessary, cleaning up old files, etc.?

                in reply to: Alerting off of Error DB #121781
                Jason Russell
                Participant

Finally decided to contact support about this, and after a lot of testing, and getting close to going full debug mode, we found the issue. It seems that Cloverleaf writes out a temporary mail file in $HCISITE/exec/hcimontord/. However, something happened with the files we wrote out, and Cloverleaf wasn't able to overwrite or change them. Once the files were removed, the alert worked fine. Something very odd and not at all common was at play.
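If anyone hits the same thing, a quick sanity check I'd run in the site directory (I'm using $HCISITEDIR for the site path here; ! -writable assumes GNU find, which RHEL has):

# List the temporary mail files and flag any the engine can't overwrite
ls -l "$HCISITEDIR/exec/hcimontord/"
find "$HCISITEDIR/exec/hcimontord/" -type f ! -writable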

                  in reply to: Alerting off of Error DB #121772
                  Jason Russell
                  Participant

                    Amusingly enough, now it just crashes the monitorD if I get an error:

{ALERT
    { NAME {Error DB Depth – st_sched1 Test} }
    { VALUE errdb }
    { SOURCE t_eGSCH2aria_o }
    { WITH -2 }
    { COMP {> 0} }
    { FOR once }
    { REPEATING {
        { MAX 2 }
        { TIME {nsec 1} }
    } }
    { WINDOW */5:30-17:30/*/* }
    { CUSTOMMSG {{ITEM {}} {DELIMITER keyvalue}} }
    { ACTION {
        { email {
            { FROM cloverleaf@infor.com }
            { TO jmcrussell@firsthealth.org }
            { SUBJ {Error DB Test} }
            { MSG {Test to trigger. } }
        } }
    } }
}

                     

                    Everything runs fine until an error hits, and then everything goes down.  The monitor immediately crashes after I attempt to start it. Once I clear the errors, it works fine. Any ideas?

                    in reply to: Faulting – hcitcl.exe – nightly scripts #121768
                    Jason Russell
                    Participant

Another question (we've seen this in other areas): if you're pulling the log file data itself into memory, how BIG are your files? If you're simply moving the files without pulling them into memory it shouldn't cause a problem, but if you're reading the files into memory and they're large, that could be the issue (I know they can get big quickly if you have echoes or various levels of verbose turned on).

Could you share your script? There may be something in the script itself that someone could point out as causing issues.

Just thinking at a high level, the other possible problem is that the script is running too quickly, especially if the files are exceptionally large. If you're moving a file while something is still trying to access it (e.g., the process is still writing to the log), that could potentially cause issues as well.
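Something like this guard (paths are placeholders) would help rule that out: cycle the log from the engine first, then only move files nothing still has open:

# Only move a log file once no process has it open
f=/path/to/process.log
if fuser -s "$f" 2>/dev/null; then
    echo "still open, skipping: $f"
else
    mv "$f" /path/to/archive/
fi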

                      in reply to: Alerting off of Error DB #121762
                      Jason Russell
                      Participant

                        Yeah, we had to mess with the alerts and emails a bit too, but we did get emails to pass through (one is set to interface down and it works fine).

                        What is your source count set to?

Also, looking at your time window: we were setting it for a single time, and I see now that yours runs from 5:30-17:30 in the hour slot. That may be part of our issue.

                        in reply to: Hardware for Red Hat Linux #121759
                        Jason Russell
                        Participant

RHEL 8.10, CIS2022.09.03, 8 CPUs, 32GB of RAM. We're still migrating to the servers. We're debating skipping 9 and going straight to 10, but it will probably depend on what the next version supports.

                          in reply to: Log control #121752
                          Jason Russell
                          Participant

The script works great; now I have to figure out how to get it to pull in the environment variables under cron so it will actually run automatically.
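What I'm planning to try is a small wrapper that sources whatever sets up the Cloverleaf environment for the hci user before calling the script; both paths below are placeholders:

#!/bin/sh
# Hypothetical cron wrapper: load the hci user's environment (HCIROOT, PATH, etc.)
# and then run the cleanup script
. /home/hci/.profile
/sitedata1/scripts/clLogClean.sh

# crontab entry, e.g. nightly at 23:30:
# 30 23 * * * /sitedata1/scripts/clLogClean_wrapper.sh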

                            in reply to: EPIC integration with Cloverleaf #121750
                            Jason Russell
                            Participant

Epic is pretty flexible in how it gets and displays data. I'm not sure if it can process textual data into discrete fields, but it wouldn't surprise me if it could. That will be handled by your Beaker analyst and their TS. Typically, if you want it to fill discrete elements, you'll have to mark it as such. I can't say much in the way of CCDAs; we don't touch those at all. Those are handled by different groups (Grand Central or HIM, I believe). One of the decisions that was made is that our Bridges group would not handle any FHIR or "API hooks," as some refer to them. Most data imports are going to be handled by non-Bridges folks (usually the group where the data is going to go). Things sent to and picked up by SFTP are not handled by the interface group (except for facilitating the transmission of said SFTPs in many cases).

You may find that the reach, and some of the things you were doing in your current setup, are no longer your responsibility in Epic.

                               

                              in reply to: Log control #121744
                              Jason Russell
                              Participant

So I think the primary difference between our scripts is that yours is meant to be run manually, while mine is automatic. It writes to a log file, so I get a log each day of what happened. I may update it so the log files are also kept for a period of 14 days for posterity, but that's neither here nor there. I've removed a lot of the comments, as this code doesn't really need many; it's pretty straightforward.

log="/sitedata1/tmp/clLogClean.log"
logDays=14

SCRIPT=$(basename $0)
script_start_time=$(date +"%B %e, %Y %T %Z %p")

echo "======================= $SCRIPT begin time: $script_start_time =======================" > ${log}

# Pull site list from server.ini. Script assumes no spaces in site names.
siteList=$(grep "environs" ${HCIROOT}/server/server.ini | awk -F "=" '{print $2}' | sed 's/;/ /g')

for siteDir in $siteList; do
    site=${siteDir##*/}

    setsite $site

    echo "SITE: $site <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<" >> ${log}

    process_list=$(grep "process " $HCISITEDIR/NetConfig | awk '{print $2}' | sort -d -f)

    for process in $process_list; do
        loghistory_folder=$HCISITEDIR/exec/processes/$process/LogHistory

        echo "PROCESS: $process" >> ${log}
        echo "Cycling logs for ${process}..." >> ${log}
        hcicmd -p $process -c ". output_cycle" >> ${log}

        if [ -d "${loghistory_folder}" ]; then
            echo "Removing files older than ${logDays} days:" >> ${log}
            find ${loghistory_folder} -type f -mtime +${logDays} -print -delete >> ${log}
        else
            echo "PROCESS: $process does not have a LogHistory folder." >> ${log}
        fi

        echo >> ${log}
    done

    echo >> ${log}
done

echo >> ${log}

                                I’ll get a proper file uploaded momentarily.

                                in reply to: Log control #121742
                                Jason Russell
                                Participant

Jay, thanks for the script. I'm probably going to co-opt it and make some changes (I work with absolute paths and don't cd around, so things don't get changed and I don't end up in the wrong directory; I got bitten by that once). The only thing I noticed that seemed odd was a mix of tabs and spaces in the code (multiple people working on it, potentially?). Not sure if that was in the source or if something weird happened when you uploaded it.

Grepping the processes out of NetConfig looks like the most convenient way. I'm still debating whether I want to use server.ini to get a list of the active sites, or just do them all. Otherwise, it's a straightforward script, thank you. I'll post my update to it, if you'd like, when I'm done.
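For the active-vs-all question, this is the kind of comparison I have in mind; the assumption that every site sits directly under $HCIROOT with its own NetConfig is worth checking on your install:

# Active sites, per server.ini (same grep as in the script above)
grep environs "$HCIROOT/server/server.ini" | awk -F= '{print $2}' | tr ';' '\n' | xargs -n1 basename | sort

# All site directories that contain a NetConfig
find "$HCIROOT" -maxdepth 2 -name NetConfig | xargs -n1 dirname | xargs -n1 basename | sort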
