Forum Replies Created
Hey Stephen – curious what your approach is to repos and branches. We are a software company (not a hospital) that uses GitHub extensively, and I'm trying to figure out the best way to adapt it to Cloverleaf's architecture. Automated deployment is one of the goals (at least for things like procs, xlates, formats, and tables – I don't think something like NetConfig would work), as well as just versioning.
My 2 ideas right now are either:
1. Separate repos for our TST and PRD boxes. This makes promotion tricky, as we'd have to figure out how/when to pull from the TST repo, install into PRD, then push the result to the PRD repo.
2. One repo with a master branch for PRD, a develop branch for TST, and maybe feature or bugfix branches. This would better match a modern-day development process, but I'm not sure it would work with Cloverleaf's architecture; a possible promotion flow is sketched below.
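For option 2, a minimal promotion flow could look like the following sketch. The branch and file names are hypothetical, and it assumes the Cloverleaf site directory is the git working tree:

    # work on a fix in a branch off develop (the TST branch)
    git checkout -b bugfix/adt-mapping develop
    # ...edit procs/xlates/formats/tables, test on the TST box...
    git checkout develop
    git merge --no-ff bugfix/adt-mapping
    # once validated in TST, promote to PRD
    git checkout master
    git merge --no-ff develop
    git push origin master   # a deploy hook/job then installs into the PRD site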
thanks for any input
Steve Pringle wrote: It would also help to know what hardware platform/OS you're running on and how it's configured.
We process ~12 million messages a day on a dedicated AIX server with 4 CPUs and 32 GB of memory.
Jim Kosloskey wrote: Joshua,
Also, if you are using cross-process routing with high volumes, that could be an issue. Have you tried comparing that to having everything in one process?
Or are your volumes spread out throughout the day?
When exactly is your peak arrival period, how large is that peak volume-wise, and how long does it last?
I have tried putting everything in one process; it improves the transfer time between threads, but then the xlate time slows down drastically.
Our current setup is file-based, and each file can have 50K or more messages to process. The maximum load will not be daily; it comes from backloads when a new customer comes on board. We can smooth this out, but short of chopping files into 10K batches, we will still see times when 50K or more messages are read into the interface.
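For what it's worth, chopping a file into 10K-message batches could be scripted outside the engine; here is a rough standalone Tcl sketch (file names and batch size are made up):

    # split big.csv into sequential files of at most 10,000 lines each
    set batchSize 10000
    set in [open big.csv r]
    set n 0
    set count 0
    set out [open batch-$n.csv w]
    while {[gets $in line] >= 0} {
        puts $out $line
        if {[incr count] % $batchSize == 0} {
            close $out
            set out [open batch-[incr n].csv w]
        }
    }
    close $out
    close $in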
I figured out a way to make this work, but any suggestions for doing it better would be welcome!
1. A disk_in thread reads the file (for the current customer, a .csv) line by line
2. VRL format -> xlate -> JSON format
3. A disk_out thread writes each JSON record to a file on local disk
4. A second disk_in reads the entire file back using EOF termination instead of newline
5. An inbound TPS proc on the second disk_in wraps the entire file in [] and uses regsub to insert the comma between JSON records (see the sketch after this list):
regsub -all {\}\n\{} $msgText "\},\n\{" msgText
6. An sftp_out thread sends the entire file to the remote server
I haven't tested it yet, but I think this may end up processing faster overall, since the records are written locally and the sftp thread only needs to send one large file (vs. sending each record one at a time over sftp).
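In case it helps anyone later, here is a minimal sketch of what that step-5 TPS proc could look like, assuming the standard Cloverleaf mode-dispatch convention (the proc name is made up, and error handling is omitted):

    proc wrap_json_batch { args } {
        keylget args MODE mode
        switch -exact -- $mode {
            start { return "" }
            run {
                keylget args MSGID mh
                set msgText [msgget $mh]
                # insert a comma between adjacent JSON objects split by newlines
                regsub -all {\}\n\{} $msgText "\},\n\{" msgText
                # wrap the whole batch in square brackets to form one JSON array
                msgset $mh "\[$msgText\]"
                return "{CONTINUE $mh}"
            }
            shutdown { return "" }
            default { return "" }
        }
    }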
Yeah, post-xlate I need the output file to start with an opening square bracket, terminate with a closing square bracket, and have commas separating each JSON record.
[Rec1, Rec2, Rec3]
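If the records were collected into a Tcl list along the way instead of post-processed as one blob, the brackets and commas fall out of a single join; a hypothetical fragment, assuming records holds the JSON object strings:

    lappend records $jsonRec                  ;# accumulate one record per message
    set payload "\[[join $records ",\n"]\]"   ;# -> [Rec1,\nRec2,\nRec3]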
I may need to do something similar, but I'm still figuring out Tcl (or at least the right data structures to use within Cloverleaf).
I know this is an older post, but does anyone have a sample proc they can share?