Joe Sellers

Forum Replies Created

  in reply to: 5.8 Java bug? (Client/Hostserver slow, Disconnections) #80401
    Joe Sellers
    Participant

      Russ, could you elaborate on the problem you had when using interface ports in the ephemeral range on AIX?  We’re running CL 5.8.4 on AIX 6100-06-01-1043 and occasionally have problems with slow GUI performance and no results returned from the GUI testing tools or the Database Administrator tool.

      in reply to: Setting Up and "OVER" Thread #80293
      Joe Sellers
      Participant

        Thanks Jim.  Your sample, along with what I pieced together from a couple of other posts, did the trick.  I now have this setup in test and am auditing the output.

        As with anything, there are all kinds of “tweaks” possible, but the basic concept was accomplished with a TPS Outbound Data proc on the “middle” thread that creates a new message from its outbound message and “OVER’s” it back to the inbound data for further translation/routing to one or more other threads.

        I found a couple of examples where msgcopy was used, and one where the original message was simply reused after updating the SOURCECONN.  I played around with these options, but both throw the thread/intersite stats off, and reusing the original message does not allow for delivery of an “intermediate” message from the “middle” thread.
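
        For reference, here’s roughly what those two alternatives looked like in my testing.  Treat it as a sketch from memory rather than working code; the destination thread name is made up, and I’m only showing the disposition lines that differ.

        Code:

        # Alternative 1: duplicate the outbound message and OVER the copy
        set copy_mh [msgcopy $mh]
        lappend disp_list "OVER $copy_mh"
        lappend disp_list "CONTINUE $mh"

        # Alternative 2: retag and reuse the original message itself
        # (no separate copy, so no "intermediate" message can be delivered)
        msgmetaset $mh SOURCECONN some_inbound_thread
        lappend disp_list "OVER $mh"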

        Here’s the code from my proc that actually does the work.  I added an argument to control the disposition of the original message.  Jim had an argument in his code to control the use of the recovery database that I’ll likely add to my proc as well.

        Code:

        set over_mh [msgcreate -meta {USERECOVERDB true} [msgget $mh]]
        lappend disp_list "OVER $over_mh"
        lappend disp_list "$orig_msg_disp $mh"
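
        For anyone wiring this up from scratch, here’s a rough sketch of how the snippet above can sit inside a standard TPS proc skeleton.  The proc name, the ORIG_DISP user argument, and its CONTINUE default are just for illustration, so don’t treat it as a drop-in replacement for your own proc.

        Code:

        proc tps_over_to_inbound { args } {
            keylget args MODE mode              ;# standard TPS dispatch mode
            set disp_list {}

            switch -exact -- $mode {
                start - time - shutdown { }
                run {
                    keylget args MSGID mh       ;# handle of the outbound message
                    keylget args ARGS uargs     ;# user arguments from the NetConfig entry

                    # User argument controlling the disposition of the original
                    # message, e.g. {ORIG_DISP CONTINUE}; name and default are illustrative.
                    set orig_msg_disp CONTINUE
                    catch { keylget uargs ORIG_DISP orig_msg_disp }

                    # Copy the outbound content into a new message, flag it to use
                    # the recovery database, and send it back OVER to inbound routing.
                    set over_mh [msgcreate -meta {USERECOVERDB true} [msgget $mh]]
                    lappend disp_list "OVER $over_mh"
                    lappend disp_list "$orig_msg_disp $mh"
                }
            }
            return $disp_list
        }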

        Thanks everyone.

        in reply to: Setting Up and "OVER" Thread #80291
        Joe Sellers
        Participant

          We are on 5.8.6.  In my testing, “chaining” appears to let you link multiple xlates together on one route, while “branching” appears to let you link multiple xlates together on the “pre-route” and then create separate “branch” routes to separate destinations, each using the same or a different xlate.

          My goal is to combine four inbound threads into a single feed and then have a secondary xlate applied to the combined feed.  This is necessary due to a shared xlt proc that is writing out to a file.  If the proc is applied on each of the four inbounds, they’re constantly fighting for write access to the file.

          Thank you.

          in reply to: Inter-Siter Routing with Cloverleaf 5.8 #75822
          Joe Sellers
          Participant

            Thanks everyone for your help and suggestions.  For simplicity’s sake, we’ve decided to go with the tried-and-true method of inter-site routing over TCP/IP via localhost:port combinations.  Since we want to do additional routing and translation in the destination site, it seems the new ISR feature would introduce additional points of failure into the message flow.  Thanks again!

            in reply to: Inter-Siter Routing with Cloverleaf 5.8 #75819
            Joe Sellers
            Participant

              Yep, that makes perfect sense.  That’s what I’m seeing in my testing too.  Unfortunately, we were hoping ISR could be used to create the second scenario and reduce some of the load on the processor and NIC.  Perhaps a future enhancement…

              Thanks!
