We had a similar issue when changing from 5.6 to 5.8, so I'm guessing the change was made in 5.7.
In our case it went from around 5 minutes up to 30+ minutes to process a large XML file.
The issue appears to be that Cloverleaf iterates through the entire XML structure up to the node you're working on whenever something is read or written, so processing time grows quadratically as the structure gets larger.
I'm guessing that it is reading/writing from disk as well, rather than in memory, which would compound the slowness.
We ended up using a basic Tcl XML parser to read in the XML, which brought the handling of large XML documents back to a reasonable timeframe.
Our vendor raised a ticket with Infor at the start of June to have this looked at, but I'm not sure of the ticket's status.
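For anyone wanting to try the same approach, here is a minimal sketch of that kind of standalone parse. It assumes the tdom package is available in your Tcl installation; the file name and the //Order/ID node path are made up for illustration:

```tcl
# Minimal sketch: parse the whole XML once with tdom instead of
# addressing nodes through the GRM for every read.
# Assumes the tdom package is available; file name and path are made up.
package require tdom

set fh  [open "large_message.xml" r]
set xml [read $fh]
close $fh

set doc  [dom parse $xml]
set root [$doc documentElement]

# One pass over the repeating nodes, no re-walk per index.
foreach node [$root selectNodes {//Order/ID}] {
    puts [$node text]
}

$doc delete   ;# free the DOM when done
```

The point is that the document is parsed once into a DOM and each node is visited once, instead of re-walking the structure on every read.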
I'm just curious: are you using an XSD to define your (outbound) XML? And if so, are there any 'maxOccurs = 99' or 'maxOccurs = 9999' entries in the XSD? Try replacing them with 'maxOccurs = unbounded'.
I have a feeling that when you use 'maxOccurs = 99', Cloverleaf always walks through all 99 repetitions, whether they are filled or not. With 'maxOccurs = unbounded', Cloverleaf checks whether another repetition actually exists. Just a gut feeling, though.
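As an illustration, the change is just the maxOccurs attribute on the repeating element declaration (the element name here is made up):

```xml
<!-- Before: Cloverleaf may walk all 99 slots on every access -->
<xs:element name="OrderLine" minOccurs="0" maxOccurs="99"/>

<!-- After: only repetitions that actually exist are visited -->
<xs:element name="OrderLine" minOccurs="0" maxOccurs="unbounded"/>
```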
Zuyderland Medisch Centrum; Heerlen/Sittard; The Netherlands
This causes it to walk the entire structure up to that point for each index, i.e. if you fetch index 1000 it walks through 1000 indexes to get there.
So in my case, with around 30000 indexes, that's 1 + 2 + ... + 30000 = 30000 × 30001 / 2 ≈ 450 million iterations.
It appears that something changed in the GRM code between releases that increased the processing time by 5-6x.
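To put a number on that pattern, here is a toy Tcl sketch of the arithmetic (not Cloverleaf code, just a model of the cost described above):

```tcl
# Toy model of the cost described above: reaching index i means
# walking i nodes from the top, so touching all n indexes costs
# 1 + 2 + ... + n = n(n+1)/2 steps -- quadratic in n.
proc walkCost {n} {
    set total 0
    for {set i 1} {$i <= $n} {incr i} {
        incr total $i   ;# linear walk to reach index i
    }
    return $total
}

puts [walkCost 30000]   ;# => 450015000, about 450 million
```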