Message size chokes the connection between Cloverleaf and Epic InterConnect


  • #118878
    Ron Swain
    Participant

      Every now and then we get a very large message from an application and it seems to choke the connection between Cloverleaf and our Epic InterConnect thread. These messages are usually larger than 10 MB. We had one yesterday that was 17.1 MB and had over 4,500 embedded PDFs in the OBX segments. There is a translation on this feed. I found an old thread discussing increasing the size of memory, stack and processes. Running ulimit -a returns the following:

      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      scheduling priority (-e) 0
      file size (blocks, -f) unlimited
      pending signals (-i) 1030675
      max locked memory (kbytes, -l) 64
      max memory size (kbytes, -m) unlimited
      open files (-n) 65536
      pipe size (512 bytes, -p) 8
      POSIX message queues (bytes, -q) 819200
      real-time priority (-r) 0
      stack size (kbytes, -s) 8192
      cpu time (seconds, -t) unlimited
      max user processes (-u) unlimited
      virtual memory (kbytes, -v) unlimited
      file locks (-x) unlimited

      For the size and content of the message mentioned, should any of these be increased, and if so, what would the recommended values be? As always, thanks in advance.
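
      If the advice from that old thread (memory, stack, processes) applies here, the stack size is the only one of those still capped: data segment, max memory, file size and user processes are already unlimited, so the 8192 KB stack would be the first candidate to raise. A minimal sketch of what that might look like, assuming the engine is started from a login shell by a dedicated Cloverleaf user (the user name "hci" below is an assumption, not something stated in this thread):

      # check the limits seen by the user that actually starts the site
      su - hci -c 'ulimit -a'

      # raise the per-process stack size in that shell before starting the processes,
      # e.g. to 64 MB (the value is in KB), or remove the cap entirely
      ulimit -s 65536
      # ulimit -s unlimited

      The change only affects processes started from that shell afterwards, so the affected process (or the whole site) would need to be restarted under the new limit.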

       

      • #118879
        Charlie Bursell
        Participant

          You did not say which OS.  AIX or Linux?

          Assuming AIX, check out:
          https://www.ibm.com/docs/en/aix/7.2?topic=memory-interprocess-communication-limits

          How do you know if it is a problem with the Cloverleaf box or Epic Interconnect?
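
          One rough way to narrow that down is to watch the TCP socket on the Cloverleaf side while one of the large messages is going out. This sketch is not Cloverleaf-specific, and the port 6001 below is only a stand-in for whatever port the Epic InterConnect thread actually connects to:

          # RHEL 7: refresh the view of the outbound connection once per second
          watch -n 1 "ss -tnoi '( dport = :6001 )'"

          A Send-Q that fills up and stays large suggests the data has left Cloverleaf but the receiving side is not reading it; a Send-Q that never fills while the thread sits on the message points back at the sending process.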

           

          • #118880
            Ron Swain
            Participant

              I do not know for sure whether it is Cloverleaf or Epic InterConnect. That is why I am troubleshooting this and would like to do everything I can to eliminate the probability of it being an issue on Cloverleaf. My OS information is below:

              NAME="Red Hat Enterprise Linux Server"
              VERSION="7.9 (Maipo)"
              ID="rhel"
              ID_LIKE="fedora"
              VARIANT="Server"
              VARIANT_ID="server"
              VERSION_ID="7.9"
              PRETTY_NAME="Red Hat Enterprise Linux"
              ANSI_COLOR="0;31"
              CPE_NAME="cpe:/o:redhat:enterprise_linux:7.9:GA:server"
              HOME_URL="https://www.redhat.com/"
              BUG_REPORT_URL="https://bugzilla.redhat.com/"

              REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
              REDHAT_BUGZILLA_PRODUCT_VERSION=7.9
              REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
              REDHAT_SUPPORT_PRODUCT_VERSION="7.9"

              Thanks
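
              A hedged side note for RHEL 7: if the raised limits are meant to be permanent, an entry in /etc/security/limits.conf (for example "hci soft stack 65536", using the same assumed user name) only takes effect for PAM login sessions. If the Cloverleaf host server happens to be started from a systemd unit instead of a login shell (an assumption; the thread does not say how it is started), the limit has to be set on the unit itself, for example with a drop-in. The unit name below is hypothetical:

              # /etc/systemd/system/cloverleaf.service.d/limits.conf
              [Service]
              LimitSTACK=infinity
              LimitNOFILE=65536

              # then reload and restart:
              systemctl daemon-reload
              systemctl restart cloverleaf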

            • #118886
              mike brown
              Participant

                I asked the client to make the adjustments, and the second issue is that the client is not ACKing our messages after the adjustments were made. They did receive a message without the PDF.

                Thanks, Mike

            • #118881
              Ron Swain
              Participant

                After researching more, I do not believe it is the message size itself but rather the number of embedded PDFs in the OBX segments… but I am still interested in any thoughts.

                 

                Thanks

              • #118882
                Jim Kosloskey
                Participant

                  Do you observe this choking behavior in Test if you send to a File Protocol?

                  Do you observe this choking behavior in Test sending to EPIC?

                  What makes you think it is the number of PDFs (OBX Groups)?

                  email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.
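
                  If it helps with the File Protocol test suggested above, here is a rough generator for an oversized ORU with an arbitrary number of base64-encoded PDF OBX segments. The segment content is illustrative only, sample.pdf is a stand-in for a real report, and the 4500 count matches the message described earlier; varying the count against a fixed payload size would also help show whether it is the number of PDFs or the total size that hurts.

                  # build a test message with 4500 embedded-PDF OBX segments
                  blob=$(base64 -w0 sample.pdf)
                  {
                    printf 'MSH|^~\\&|TEST|TEST|EPIC|EPIC|20240101120000||ORU^R01|MSG00001|P|2.3\r'
                    printf 'PID|1||123456||TEST^PATIENT\r'
                    for i in $(seq 1 4500); do
                      printf 'OBX|%d|ED|PDF^Report||%s||||||F\r' "$i" "$blob"
                    done
                  } > large_test.hl7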
