Forum Replies Created
** This assumes you're working on Linux, not Windows. If you're on Windows you'll have to adjust the slash direction (/ vs \) and the file path.
My script renames the file in place. $msg_list contains the full path, and I'm simply appending '.processed' so the file doesn't get picked up by the file-path glob/regex again. I pick up *.txt normally, so a file ending in *.txt.processed won't be matched a second time.
file rename -force /opt/cloverleaf/cis2022.09/integrator/sd_lab1/indata/file_to_run.txt /opt/cloverleaf/cis2022.09/integrator/sd_lab1/indata/file_to_run.txt.processed
If you want to change the folder completely and put the file in an archive folder, you would have to get the file name by stripping off everything up to the last '/' (set fileName [lindex [split $msg_list /] end], or more simply [file tail $msg_list]), and set the full path in the destination. Paul got his 'archive' path by passing it in; you can hard-code it if you want.
Also, I haven't had much time to play with the script since I put it in place, but you may want to change file rename to file copy; I think Cloverleaf will automatically clean up the file regardless, and I get an error saying it can't find the original file.
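For illustration, a minimal sketch of the archive-folder variant, assuming $msg_list holds the full path as above and a hypothetical archive directory (swap file rename for file copy if you hit the cleanup issue I mentioned):

    set archiveDir /opt/cloverleaf/cis2022.09/integrator/sd_lab1/archive  ;# hypothetical archive path
    set fileName [file tail $msg_list]                                    ;# strip the directory portion
    file rename -force $msg_list [file join $archiveDir $fileName]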
Your best bet is probably to have a test server, install there, and run some testing. You'll find there are probably not a lot of people who are "bleeding edge" in terms of OSes.
I don't know anything about your environment in general, but we use VMs for everything. If you have a test server (you should), I would install on 2025, migrate, and see how it pans out. If it fails, you can move back to the old server. That is generally the process we follow when an in-place upgrade isn't possible.
Jeff, I simplified Paul's script a bit, and it does what you're looking for. You still have to rename the file for it to work properly. The files I use simply get renamed to '<filename>.processed' in the same directory where they were stored. I'm going to do something more akin to what Paul did for the FTPs, but I'm using this for local files.
The main issue I found with his script is an errant return that didn't return the disposition list, which I believe is why you're getting repeated files. Make sure the "return {}" is removed so the proc can finish processing the file properly.
# Get message handle
keylget args MSGID mh

# Check for correct context
keylget args CONTEXT ctx
if {$ctx != "fileset_ibdel"} {
    echo "\nERROR proc used in wrong context"
    echo "Context should be fileset_ibdel"
    echo "Proc called in: $ctx\n"
    return "{CONTINUE $mh}"
}

# In this context the message handed to this proc
# by the driver is not a data message. Rather,
# it is a list of file names from the directory
# the fileset driver is configured to read.
#
# The list of files to process is accessed
# in the same way data messages in other contexts
# are accessed.
#
# The list is manipulated and returned.
#
# The fileset driver then processes the files in
# the order they appear in the returned message list.
set msg_list [msgget $mh]
set ofileid [open file_process.log a]
echo "Inbound File: $msg_list"
#set destination_file $arg_dest_dir
set destination_file $msg_list.processed
#echo "ofile: $msg_list.processed"
file rename -force $msg_list $destination_file

# Put the modified list in the message handle
msgset $mh $msg_list
lappend dispList "CONTINUE $mh"
return $dispList
May 27, 2025 at 5:10 pm in reply to: SMAT DB file will not open on Win11 via hardwired connection #122060
Interesting. I can't help and test much; I'm remote, using a MacBook Pro to VPN into our network, logging into a Linux workstation for all of my CLI needs and a Windows 10 workstation onsite for GUI needs. I'm assuming you're on a VPN of some sort, and that shouldn't make a difference.
If you're running the GUI locally on your laptop, I can only think it's some Windows shenanigans switching from wireless to wired. The only thing I can suggest as a troubleshooting step (not a workaround) is to start the laptop on the dock itself and see if it acts differently.
My inner tech is coming out and I really wanna see this firsthand, lol.
May 27, 2025 at 8:18 am in reply to: SMAT DB file will not open on Win11 via hardwired connection #122058
There's a lot that could be going on here, but here's my first question: are you at work or remote? Our facility has a different IP range and grouping arrangement for wireless vs. wired connections. You may be hitting a strange case where you're losing authentication between sessions because your IP is different.
Are you logging out before changing from docked to undocked?
Are you completely exiting the GUI?
When you dock, does it disable the wireless connection completely (not just set your wired to primary)?
Dear Cloverleaf, please let me edit my message over and over to cover for my mistakes:
I think my edit is answered by the comments, but it is still unclear to me why you would have return "{CONTINUE $mh}" in the error branch but a blank return at the end of the command. That return near the end would stop the proc before the dispList return is processed.
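In other words, a minimal sketch of the pattern I'd expect (names as in the script above):

    if {$ctx != "fileset_ibdel"} {
        return "{CONTINUE $mh}"   ;# error path: pass the message through untouched
    }
    # ... normal file processing ...
    lappend dispList "CONTINUE $mh"
    return $dispList              ;# final return must hand back the disposition list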
I love it when I come in and someone asked the same question I was about to.
First off, thank you for the answers above. I'm probably going to grab Paul's code and modify it a bit.
However, is there a 'default' TPS that is used, or is file deletion simply written into the engine itself?
edit: Also, I'm not seeing the return of a dispList in some form; is that not necessary? I see it in Paul's code for the error case (i.e., nothing happens), but not after everything gets moved.
I would iterate and key off your assigning authority or assigning facility (PID-3.4 or PID-3.5), then run a simple if statement to grab the correct ID. That way it doesn't matter if there's a change in their system; you will always get the ID you want. Essentially:
Iterate on the basis of field PID-3
  if PID-3.5 eq =PI
    pathcopy PID-3 (iteration %f) to outbound PID-3
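If you'd rather do it in Tcl than in the Xlate, a minimal sketch of the same idea (the sample field value is hypothetical):

    # Split the PID-3 repetitions and keep the one whose identifier
    # type code (component 5) is "PI".
    set pid3 "123456^^^HOSP^MR~E998877^^^HOSP^PI"  ;# hypothetical PID-3 value
    foreach rep [split $pid3 ~] {
        set comps [split $rep ^]
        if {[lindex $comps 4] eq "PI"} {
            set patientId [lindex $comps 0]
            break
        }
    }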
We do this frequently, and I can pull the code for you in our system if you need a direct view.
Cloverleaf prioritizes inbound traffic, and if that traffic never stops, it can slow the outbounds down too much. Throttling also gives your outbound routes time to process everything so they don't become clogged or unbalanced. We would never put through more than around a hundred thousand messages (ESPECIALLY PDFs!) at a time. Time it, see how long it takes to process them out completely, and set your pickup timer accordingly.
You should be able to control that with a fileset-local thread, and throttle so you only pick up x number of messages every y seconds (depending on whether they're in a single file or one message per file).
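If the protocol settings alone don't get you there, a directory-parse TPS can trim the file list per poll; a rough sketch, assuming the fileset_ibdirparse context and a hypothetical cap:

    set maxFiles 100                       ;# hypothetical per-poll cap
    set msg_list [msgget $mh]              ;# list of candidate file names
    msgset $mh [lrange $msg_list 0 [expr {$maxFiles - 1}]]
    lappend dispList "CONTINUE $mh"
    return $dispList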
That’s a big load, and it’s going to take a few days to process it all through.
If it's running on all sites and processes, it's probably a very global proc. You may want to check your $HCIROOT/tclprocs to see if anything was modified. That's a hard find unless you know what's calling it.
I made a mistake: the msgcopy should be inside the loop. msgcopy will give you a new message handle for each message while maintaining the original metadata. To make it better, if the original message was large, after getting the data from the original message handle you could do something like msgset $mh "" so you are not copying the data with each msgcopy, only the metadata. Note the original message handle is killed, since it contains nothing needed after the loop. I hope this makes sense.
A lot of this nuance is what I'm trying to figure out. I'm assuming my earlier statement about different message handles being passed in is correct. msgset puts data into the current message handle, so I don't think I want that; I think msgcopy will be fine (since the original message is a small ADT), and then I can replace the message data itself. The script would be more like:
set fileList [lsort [glob /folder/ORU_R01_*.HOLD]]
foreach fileName $fileList {
    set newmh [msgcopy $mh]            ;# new handle, original metadata preserved
    set fh [open $fileName r]
    set newmsg [read $fh]
    close $fh
    msgset $newmh $newmsg              ;# put the file contents into the new handle
    lappend dispList "CONTINUE $newmh"
}
I would do msgcreate, but I do want to keep the initial metadata (where it came from, etc.). I believe this is ultimately what I'm looking for.
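Folding in Charlie's suggestions above (empty the original before copying, and kill it at the end), the whole pattern might look like this; the glob path is the same hypothetical one:

    set fileList [lsort [glob /folder/ORU_R01_*.HOLD]]
    msgset $mh ""                        ;# empty the original so msgcopy copies only metadata
    foreach fileName $fileList {
        set newmh [msgcopy $mh]
        set fh [open $fileName r]
        msgset $newmh [read $fh]
        close $fh
        lappend dispList "CONTINUE $newmh"
    }
    lappend dispList "KILL $mh"          ;# original handle holds nothing we need now
    return $dispList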
April 2, 2025 at 9:31 am in reply to: How to convert data in xml into array or list using tcl script #121980
Would you want to keep them linked? You may want a dictionary where the key is probably the order number, with the account and code as the dataset. You'd have something similar to:
reportInfo {
    ORD0029 {C628 PROC1}
    ORD0030 {C629 PROC142}
    ...     {...}
}
I'd have to look up the syntax specifically. You can pull the keys via dict keys and process them in a foreach later if you need to. I think an array would work as well, but a dictionary will let you link an order number to the account/code, where single variables/arrays would require you to keep everything in a specific order, and maintain that order when removing/adding elements later, assuming you need that link.
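A minimal sketch of that dict approach, with hypothetical order numbers and values:

    set reportInfo [dict create \
        ORD0029 {C628 PROC1} \
        ORD0030 {C629 PROC142}]

    foreach ordNum [dict keys $reportInfo] {
        lassign [dict get $reportInfo $ordNum] account code
        puts "$ordNum -> account $account, code $code"
    }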
Charlie, this may be what I'm looking for, with some potential modification, but I have some questions:
What is the point of the first msgcopy? The original ADT will be in the first $mh, and any additional messages will come from the files themselves, so wouldn't it be better to do:
foreach fileName $fileList {
    set fh [open $fileName r]
    set newmsg [read $fh]
    close $fh
    msgset $nmh $newmsg
    <do some minor work on putting MRN/CSN into the message>
    lappend dispList "CONTINUE $nmh"
}
This will produce a list that is {{CONTINUE <first message raw>} {CONTINUE <second message raw>} {...}}
That will push each message individually into the next step (in this case, next step being translation)?
In essence, the disposition list can be a list of lists (not just a list with two items), each containing a disposition and a message. That would functionally work the same as an individual message, so if I were to pull in a file with multiple messages (batch-style), we could process each message individually and append them all to the dispList to be broken apart after the TPS is done, applying other dispositions (KILL, ERROR, etc.) to each message.
The ultimate issue is we get the documents long before we get the ADT (on the scale of two weeks). It's an overly complicated process. They're EMS runsheets, so the transport happens and the EMT documents it in software from Company A. Company A sends the document to us immediately, then nightly batches the runsheets to Company B. Company B holds the runsheets for a minimum of 5 working days to ensure there are no additional notes. The runsheet is then coded, and a nightly batch sends the ADT and DFT. We use the ADT to create the encounter in Epic, then use the ensuing ADT out of Epic with the keyword to pick up both the DFT and the initial runsheet (ORU) and file them to the encounter. There are other components, but that's the basic workflow. When the ADT comes out, there may be 2-4 runsheets (modifications, etc.) tied to the encounter. The short answer is you can't query Epic for the encounter, since it hasn't been created yet.
Thankfully there is pretty much zero chance. The second process is usually two weeks out from the first (essentially, Company A sends the file to us and to Company B; Company B waits 5 days MINIMUM to make sure there are no more updates, then processes it and sends us the ADT).
I was definitely hoping for something a bit more elegant, but writing to a place to be picked up is not uncommon in our old engine.