Message size chokes the connection between Cloverleaf and Epic InterConnect
Tagged: large message, memory, stack
- This topic has 5 replies, 4 voices, and was last updated 3 years, 4 months ago by mike brown.
May 22, 2021 at 2:56 pm · #118878 · Ron Swain, Participant
Every now and then we get a very large message from an application, and it seems to choke the connection between Cloverleaf and our Epic InterConnect thread. These messages are usually larger than 10 MB. We had one yesterday that was 17.1 MB and had over 4,500 embedded PDFs in the OBX segments. There is a translation on this feed. I found an old thread discussing increasing the size of memory, stack, and processes. Running ulimit -a returns the following:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1030675
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65536
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

For the size and content of the message mentioned, should any of these be increased, and what would the recommendation be? As always, thanks in advance.
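For reference, of the limits listed above, stack size (8192 KB) and max locked memory (64 KB) are the ones most often implicated when a single large message blows up an engine process. A minimal way to inspect them on Linux (a sketch; the 65536 KB figure below is illustrative, not a vendor recommendation, and "hci" is a hypothetical engine account):

```shell
# Inspect the limits the engine process actually inherits.
ulimit -s      # soft stack limit, KB (8192 in the output above)
ulimit -H -s   # hard ceiling the soft limit may be raised to

# To raise the stack limit persistently for a hypothetical "hci"
# engine account, add to /etc/security/limits.conf and re-login:
#   hci  soft  stack  65536
#   hci  hard  stack  65536
```

Note that limits set in limits.conf only take effect for processes started from a fresh login session, so the Cloverleaf daemons would need to be restarted afterward.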
May 24, 2021 at 4:31 am · #118879 · Charlie Bursell, Participant
You did not say which OS. AIX or Linux?
Assuming AIX, check out:
https://www.ibm.com/docs/en/aix/7.2?topic=memory-interprocess-communication-limits

How do you know whether it is a problem with the Cloverleaf box or Epic InterConnect?
May 24, 2021 at 7:19 am · #118880 · Ron Swain, Participant
I do not know for sure whether it is Cloverleaf or Epic InterConnect. That is why I am troubleshooting this and would like to do everything I can to rule out an issue on the Cloverleaf side. My OS information is below:
NAME="Red Hat Enterprise Linux Server"
VERSION="7.9 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.9"
PRETTY_NAME="Red Hat Enterprise Linux"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.9:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.9
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.9"

Thanks
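Since this is RHEL rather than AIX, the rough Linux counterparts of the IPC limits in Charlie's link can be read from /proc. This is a sketch; whether any of these is actually the bottleneck depends on how the engine moves the message between processes on your host:

```shell
cat /proc/sys/kernel/msgmax   # largest single SysV message, bytes
cat /proc/sys/kernel/msgmnb   # max bytes queued on one message queue
cat /proc/sys/kernel/shmmax   # largest shared-memory segment, bytes

# Summary of all current SysV IPC limits, if util-linux is installed:
command -v ipcs >/dev/null && ipcs -l || true
```

Any of these can be raised at runtime with `sysctl -w`, or persistently via /etc/sysctl.conf.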
May 24, 2021 at 2:40 pm · #118886 · mike brown, Participant
I asked the client to make the adjustments. The second issue is that the client is not ACKing our messages after the adjustments were made; they did receive a message without the PDF.
thanks Mike
May 24, 2021 at 9:07 am · #118881 · Ron Swain, Participant
After researching more, I do not believe it is the message size itself but rather the number of embedded PDFs in the OBX segments... but still interested in any thoughts.
Thanks
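One quick way to test that theory offline is to count the OBX segments in a saved copy of an offending message. A sketch, assuming CR-delimited HL7 v2 segments with embedded documents flagged as ED in OBX-2; the inline sample below stands in for the real saved file:

```shell
# Inline sample standing in for a saved copy of the real message;
# JVBERi0x is just "%PDF-1" base64-encoded.
printf 'MSH|^~\\&|APP\rOBX|1|ED|PDF^Report||JVBERi0x\rOBX|2|TX|Note||plain text\r' > message.hl7

# HL7 v2 segments are CR-delimited; normalize to newlines, then count.
tr '\r' '\n' < message.hl7 | grep -c '^OBX|'            # all OBX segments
tr '\r' '\n' < message.hl7 | grep -c '^OBX|[^|]*|ED|'   # ED (embedded PDF) OBXs
```

Comparing counts like these between messages that pass and messages that choke would help separate a size problem from a segment-count problem.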
May 24, 2021 at 9:20 am · #118882 · Jim Kosloskey, Participant
Do you observe this choking behavior in Test if you send to a File Protocol?
Do you observe this choking behavior in Test sending to Epic?
What makes you think it is the number of PDFs (OBX Groups)?
email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.