Forum Replies Created
Here is how I ended up doing it. I removed most of the error handling for brevity.
Code:
# This Tcl script should be invoked as
#
#   tclsh <script> <infile> <format>
#
# where <infile> is an input file and <format> is one of
# "a" (for ASCII), "e" (for EBCDIC), or "u" (for UTF-8).
#
# ... parameter assigning and error checking removed for brevity
#
# The emt_date should be 6/26/2017 in Julian format,
# which is yyyyddd where ddd is the day of the year.
#
set emt_date_ascii "2017177"
set emt_date_ebcdic [encoding convertto ebcdic $emt_date_ascii]
puts stderr "Setting PIX emt_date to $emt_date_ascii"

set infd [open $infile "rb"]
set ubin [read -nonewline $infd]
set ebin [encoding convertfrom utf-8 $ubin]
set abin [encoding convertfrom ebcdic $ebin]

set ebout [string replace $ebin 48 54 $emt_date_ebcdic]
set about [string replace $abin 48 54 $emt_date_ascii]
set ubout [encoding convertto utf-8 $ebout]

switch $format {
    "utf8"   { puts -nonewline $ubout }
    "ebcdic" { puts -nonewline $ebout }
    "ascii"  { puts -nonewline $about }
    default  { puts stderr "Error: unknown format." }
}

To replay the messages, I invoked this with -u. If I just wanted to see the contents, I invoked it with -a.
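As a sanity check, the yyyyddd value hard-coded above can be derived from the calendar date. A quick check (in Python rather than Tcl, just for illustration):

```python
from datetime import date

# %j is the zero-padded day of the year, so %Y%j yields yyyyddd.
emt_date = date(2017, 6, 26).strftime("%Y%j")
assert emt_date == "2017177"
```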
Just an update on my findings. I’m more confident about my suspicions outlined above after I figured out how to decode the error DB content. I was mistaken near the end about the failure to decode the recovery DB content as UTF-8 (though it seems like a miracle that it somehow succeeded). Nevertheless, if I dump the contents of the message and
1. manually utf8-decode the contents,
2. treat the result as a hex representation of an EBCDIC encoding and decode that,
I get the expected contents. The mysterious "C" character in between most of the other characters is the result of UTF-8 starting each two-byte sequence with the bits 1100. In EBCDIC this is the letter C. So the UTF-8 encoding of the EBCDIC encoding of the capital letter P converts it to \xC3\x97.
EBCDIC("P") = \xD7
utf8(EBCDIC("P")) = utf8(\xD7) = \xC3\x97 = EBCDIC("Cp")
What a coincidence that UTF-8-encoding an EBCDIC capital letter converts it to a "C" followed by the corresponding lowercase letter. The coincidence is enabled by the fact that upper-to-lower conversion amounts to changing the second bit from 1 to 0. (In ASCII, this is changing the 3rd bit from 0 to 1.)
"P" = \xD7 = 1101 0111 -> 1001 0111 = \x97 = "p"
That really threw me off.
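The round trip is easy to reproduce. Here's a quick sketch in Python (using its cp037 EBCDIC codec; Cloverleaf's Tcl ebcdic table may differ in spots):

```python
# EBCDIC "P" is 0xD7 (per Python's cp037 codec).
assert "P".encode("cp037") == b"\xd7"

# Treat the byte 0xD7 as the Unicode code point U+00D7 and UTF-8 encode it:
# a two-byte sequence results.
utf8_bytes = chr(0xD7).encode("utf-8")
assert utf8_bytes == b"\xc3\x97"

# Reading those two bytes back as EBCDIC yields "Cp".
assert utf8_bytes.decode("cp037") == "Cp"

# The upper/lower "coincidence": EBCDIC case differs in one bit (0x40),
# ASCII case differs in one bit (0x20).
assert 0xD7 ^ 0x40 == 0x97           # EBCDIC "P" vs "p"
assert ord("P") ^ 0x20 == ord("p")   # ASCII  "P" vs "p"
```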
I’m teaching myself how to wield the “binary scan” and “binary format” Tcl commands in order to write scripts to extract, edit, and resend these messages.
The SOAP Fault reason text seems to be saying that your UsernameToken element must have a value. According to their sample at the link you furnished, the UsernameToken element must have two child elements: Username and Password (I’ve left the namespaces off). Are you providing these in your SOAP header?
Network interfaces support multiple IP addresses. Often an IP assigned after the first IP is considered a “secondary IP”. There can be multiple secondaries. The interface will respond to ARP requests for either IP.
I suspect the bigger question is how do you eventually remove the old IP (otherwise why bother moving in the first place). If you have just a few clients, it’s easy to determine which client is using which IP. Remove the old secondary IP when all your clients have migrated to the new IP.
Another factor is which IP your Cloverleaf servers bind to. You could have them all bind to 0.0.0.0 so that all inbound IP traffic is considered, but some people prefer that each server bind to a single IP. In the interim you'll have some servers bound to the primary and some to the secondary. After all Cloverleaf servers are binding to the new IP, you can remove the secondary IP from the network interface.
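For what it's worth, the bind-address distinction is generic socket behavior, not Cloverleaf-specific. A minimal Python sketch (the specific IP address is made up):

```python
import socket

# Wildcard bind: the listener accepts connections addressed to ANY of the
# interface's IPs, primary or secondary.
wild = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wild.bind(("0.0.0.0", 0))      # port 0 = let the OS pick a free port
bound_ip, bound_port = wild.getsockname()
wild.close()

# Specific bind (10.0.0.5 is a made-up example address): only traffic
# addressed to that one IP reaches this listener.
# one = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# one.bind(("10.0.0.5", 0))
```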
Disclaimer: I have no Cloverleaf-specific experience with this. But I’ve done this kind of thing with other network devices used as data center application proxies.
Thanks, Jim and Rob, for chiming in. Based on Jim's initial response, I searched the docs for "xpmerror" and found it contained an option for how I wanted the Xlate engine to handle the error. In my case, I chose "curdetail" for this option. The docs claim it will send the copy of the message associated with the current route detail to the error DB while allowing the other route details to continue. The other two options were "alldetail", which fails all details (as Rob suspected it might), and "status", which subjects the error status to the setting in the Xlate action panel.
I verified that my message was sent to the error DB when my Tcl proc invoked the xpmerror function. I didn’t find my “reason string” in the metadata of the message that was sent to the error DB. The API docs say the reason string is “attached to the record as its error context.” I’m not sure how one accesses the “error context” of a message.
But finding my “reason string” was a nice-to-have that I didn’t expect when I initially posted this inquiry. Being able to send a message for a single route detail to the error DB without affecting other route details was what I was initially after.
I’ll continue testing and update this thread if I observe behavior inconsistent with the docs.
After some experimenting, reading, and contemplation, I think I've been able to answer my original post. My desire to have the connections ultimately end up in the recovery DB instead of the error DB was a result of my misunderstanding of how things generally work in Cloverleaf. I thought I knew of other cases (with raw TCP) where connection failures resulted in the message always being re-queued into the recovery DB. But upon closer inspection of those configurations, the retry count was set to -1. Reaching the retry count is what sends the message to the error DB. With no retry limit, there is no error DB entry (at least not for connection timeouts). I'm not sure unlimited retries is the best practice. But at least I understand why the TCP cases behaved that way at my location and that their behavior is consistent with the CAA-WS case.
I still wonder about other ways a web service can fail. For example, the request could reach the server, but the server returns a fault or a 500 return code for some other reason. In the reply TPS, I could send the reply payload to the error DB. But I wouldn’t be able to correlate it with the original request (ostensibly still sitting in my SMAT DB).
Or would I?
Would the inbound reply message metadata contain any clues by which I could correlate it to the original request message sitting in the SMAT DB?
Yes, so far, I only have one CAA-WS thread in my process. I intend to keep it that way due to the warnings in the CAA-WS documentation.
I suspect my problem is elementary. I’m still new to Cloverleaf development and this is my first time playing with the CAA-WS API. In particular, I’m not confident that I’m handling the reply messages properly. I’m coding an outbound node calling a web service in order to notify the service of an event. It’s simple in that I only have to check the response for a successful status code. I don’t need to further process or forward the response payload to another destination.
One suspicion I have about my configuration is that I’m not assigning a disposition to the message I receive in the reply. I’m still a bit fuzzy on how message dispositions work for inbound replies. Should I include a disposition of CONTINUE for the reply message? Or should I KILL it since a successful reply isn’t going anywhere?
Erroneous Information Alert – In my previous post I said the error and recovery DBs showed no entries related to my problem thread. That's because I had my environment variable set to the wrong site. After I corrected this, I found 15 messages in the error DB and several dozen in the recovery DB. I deleted all of them and I no longer receive the PANIC messages.

Thanks, guys, for your replies. That helped me zero in on the problem.
As I related above, there was a suspicious 2-byte length field on which it seemed dangerous to attempt an EBCDIC to ASCII to Unicode to ASCII to EBCDIC conversion. I'm new here, and I was told that this has all been running fine in PROD for a long time (the grief I related above is only occurring in TEST, and I refuse to promote any changes until I get them working in TEST). Well, you can probably see where this is going. After checking PROD, I found an extra TPS inbound data script that plugs those two bytes to zeros. This happens before any of the conversion procs. It had never been back-ported to TEST.
Once I added the extra TPS inbound data invocation, TEST has not experienced the conversion issue. This will at least allow me to promote recent changes from TEST to PROD while I contemplate a longer term CP037 solution.
Thanks, Rob. I’m hesitant (but not unwilling) to flip the CP037 switch since my predecessor related to me that he spent a good deal of effort getting the a2e and e2a working after the upgrade to 6.1. (Maybe all he really had to do was use CP037). But I also know he wrote a great deal of custom code doing things like packed decimal conversions, etc.
One angle I’m investigating is an inbound 2-byte binary field from the mainframe. It’s a binary length and thus has no EBCDIC/ASCII representation. Of course, that doesn’t stop the converter from converting it. In theory, it should be harmless since that particular length isn’t used for downstream processing.
But I’ve noticed certain values seem to cause problems. I’m trying to investigate more deeply by dumping error messages for failures and SMAT messages for successes in order to compare them. I view the contents in a hex viewer and I’m trying to understand what I’m looking at.
I've read that Cloverleaf stores the messages internally as Unicode, and I've been assuming that when SMAT or DB messages are written to files, they are UTF-8 encoded. But in one case, the error DB exported the following sequence (from outbound to mainframe, just before ASCII to EBCDIC conversion):
Code:
32 31 20 9b 31 35
In ASCII, 3x maps to the single digit x. 20 is a space. But 9b is suspicious. It’s not a valid utf-8 mapping. If it was really 9b in unicode, it would map to c2 9b in utf-8.
Earlier in the message, the 2-character length field I mentioned shows as
Code:
7f c3 9b
If I assume these are UTF-8 encoded, I expect them to represent the Unicode characters 0x7f and 0xdb.
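These byte-level claims can be verified directly. A quick Python check, using the byte values from the dumps above:

```python
# The exported sequence containing the suspicious 0x9b byte.
raw = b"\x32\x31\x20\x9b\x31\x35"

# A lone 0x9b is a continuation byte, so the sequence is not valid UTF-8.
try:
    raw.decode("utf-8")
    valid_utf8 = True
except UnicodeDecodeError:
    valid_utf8 = False
assert not valid_utf8

# Code point U+009B really does encode as c2 9b in UTF-8.
assert chr(0x9B).encode("utf-8") == b"\xc2\x9b"

# And 7f c3 9b decoded as UTF-8 yields the code points 0x7f and 0xdb.
chars = b"\x7f\xc3\x9b".decode("utf-8")
assert [hex(ord(c)) for c in chars] == ["0x7f", "0xdb"]
```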
I doubt it's a coincidence that the 9b on the end of my 3-byte example is the same value that causes trouble in the first sequence I listed. The translation rule that handles this inserts an 8-character field right after the two-character (2-byte) length. The result seems to be that the 0x9b appears in both places: the original place, which is 2 bytes in length but expands to 3 bytes in UTF-8, and again 8 characters later. The FRL subfield definition lists the 2-byte length type as a character field of length 2.

I'm contemplating a band-aid by which I simply convert those two field positions to 0x40 (EBCDIC spaces) in the inbound TPS stage before any translation rules run. But I want to be sure I understand what I'm seeing when I view SMAT and DB output.
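The band-aid amounts to a two-byte overwrite applied before any charset conversion. A minimal sketch in Python (in Cloverleaf this would live in a Tcl inbound TPS proc; the offset 48 is a hypothetical placeholder, since I haven't pinned down the field position):

```python
EBCDIC_SPACE = 0x40
LEN_OFFSET = 48   # hypothetical offset of the 2-byte binary length field

def blank_length_field(msg: bytes) -> bytes:
    """Overwrite the raw 2-byte binary length with EBCDIC spaces so the
    downstream EBCDIC/ASCII/UTF-8 conversions never see a byte like 0x9b."""
    buf = bytearray(msg)
    buf[LEN_OFFSET:LEN_OFFSET + 2] = bytes([EBCDIC_SPACE, EBCDIC_SPACE])
    return bytes(buf)

sample = bytes(range(56))                  # stand-in for a raw message
fixed = blank_length_field(sample)
assert fixed[48:50] == b"\x40\x40"         # field blanked
assert fixed[:48] == sample[:48]           # everything before is untouched
assert fixed[50:] == sample[50:]           # everything after is untouched
```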
Hi Folks,
I've been working to determine a sensible logging practice using the logger package that was suggested to me earlier. I've posted some of the practices I've developed in this preliminary stage to my GitHub account at https://github.com/pglezen/tpslogger. It will evolve as I obtain more troubleshooting experience.
For those of you not familiar with GitHub, you can register for free in order to edit source code repositories and add issues. You don’t need an account to view public repositories. For more information on Git, see
https://git-scm.com/.

Ah, OK. So the log level configuration would be programmatic (an administrator would need to change and reload a Tcl script to change the log level). It's not something they could configure using Engine Output Configuration like they do with their Cloverleaf modules. I really appreciate the guidance on this.
Would Cloverleaf development entertain a feature request to better integrate UPOC/TPS log level configuration with their other Cloverleaf administrative features? I’m thinking of something akin to certain Java application server models where user-written code is just another module (i.e. package) controlled by the administration console. The programmatic change technique works fine in TEST. But in PROD (where developers might not have direct access), this is important for requesting log information from a Cloverleaf administrator who may not possess the confidence to modify Tcl code (even if it’s just to change a log level).
While pursuing Levy’s tips, I stumbled across the logger package that allows us to assign a namespace-scoped module-name to the logging instance we use to issue log statements. If this library was leveraged, then conceivably this log module name could be used in the Engine Output Configuration strings as a basis for filtering UPOC/TPS log statements. These filters can be configured in Network Monitor and/or Global Monitor if it’s available.
Thanks for the logging tips. I just got around to trying this out. It seems that only error messages appear. When I add the following code during the start mode of a TPS Inbound proc:
Code:
package require log
....
switch -exact -- $mode {
    start {
        log::log info "info message"
        log::log notice "notice message"
        log::log warning "warning message"
        log::log error "error message"
        ....
    }
    ....
}

I only see the error message, even though I add enable_all_info to my thread's Engine Log Configuration.
Code:
[tcl :err :ERR /0: tccoal:05/03/2016 14:47:52] error error message
I can see my thread name, “tccoal”. The first “error” must be coming from the log package because I see the same thing when I try this in a standalone Tcl command line.
Can we leverage the Cloverleaf EO configuration to determine which log statements get logged? Or must the log level be set programmatically?
I was also wondering about a related package in Tcllib called logger. It seems similar to log except that one can customize a "service" name. It would really be great if this could somehow be related to the module or submodule of the EO configuration.

I'm in a similar situation. I had Cloverleaf tossed in my lap after our guru retired. Since the guru pretty much did everything for everybody, he didn't bother much with VC. I learned that the Cloverleaf product offers VC and I expect to leverage it in another few months or so, once I figure it out. But I wanted to start tracking changes immediately, so I started using what I know, and that's Git.
My AIX admins declined my request to install Git on our TEST or PROD servers. So I created three repositories on my local workstation: one for external programs that run outside (but interact with) Cloverleaf; one for tracking the configuration scripts (mostly shell scripts); and one for the Cloverleaf source. I found that PROD and TEST were hopelessly out of sync. So I only officially track PROD. For my Cloverleaf source, I created a site directory with subdirectories for Tables, Xlate, formats, java_uccs, and tclprocs. I also track the NetConfig files for each production site, naming them NetConfig.1, NetConfig.2, etc, to keep them apart.
This is only production. When someone wants to start an enhancement, I copy the relevant Cloverleaf source from my Git working copy to the Cloverleaf TEST so they know they are starting from what is currently in PROD (and not some version on TEST that someone changed and then abandoned). After development and testing has completed in TEST, and we’re ready for a PROD promotion, I copy the changed files back onto my workstation and commit them into Git. I promote them from my workstation, not from TEST. All NetConfig changes are done manually on PROD (not copied from TEST). If, after a day or so, no disaster occurs in PROD, I tag the PROD promotion in Git. Rinse and repeat.
Note that I am the only one using Git in all this. Everyone else is oblivious. This is all a temporary stop-gap until I get my head around the Cloverleaf development lifecycle which includes many tools such as version control and BOX for packaging/promotion.
I initially shared the concern about the Cloverleaf checkout locks. However, as I learn more about the Cloverleaf development model, I feel that in this context the locks make more sense. A Cloverleaf developer never has a private working copy of the code. Rather, the client is working with code real-time fetched from a Cloverleaf TEST server. If lack of communication leads two developers to change the same file at the same time, somebody is going to get hurt. In this case, the lock mechanism compensates for the negligence of the technical team lead that should be communicating properly as to who should be changing what.
As Rob pointed out on a previous post, an admin can release abandoned locks.
I love Git. I use it all the time for my open source projects. The biggest problems I have with Git at work are that (1) none of my admins are willing to install/support it and (2) none of my colleagues are willing to effectively learn it. The worst are my otherwise-bright colleagues with previous VC experience (baggage) who simply try to map the commands from CVS/SVN/ClearCase/whatever to Git and just make a mess that I have to fix. Most Cloverleaf developers I've met that are not Tcl developers don't possess the sophistication to make a distributed version control system (or even CVS/SVN) a regular part of their routine. They're terrified of command line VC. That's why I don't see Git as a long term solution for Cloverleaf VC. For most Cloverleaf developers at my work location, a GUI checkout, lock, change, check-in process makes more sense. So once I get a grip on the Cloverleaf VC, I expect to drop Git and embrace the Cloverleaf model (at least for Cloverleaf source). I expect to also reap the benefits of the BOX feature.

You specifically called out the challenge of tracking configuration scripts. Unlike source code, configuration scripts often vary between TEST and PROD. You can minimize this by externalizing these variations to property files and trying to make the TEST and PROD file systems as consistent as possible. But fundamentally, the configuration of TEST and PROD will often be different due to different infrastructure topology if nothing else. (IT shops have been addressing this for decades; but some in the industry have "discovered" this phenomenon under the guise of "DevOps".)

In this vein, container-based cloud technology has garnered much attention (Docker containers, et al). It would be interesting to see if Infor could offer cloud solutions with semi-private Docker repositories offering Cloverleaf Site images on which customers layer their own customizations. Then the Docker image (or whatever container you prefer) becomes the unit of controlled deployment.
The netstat output seems to indicate that port 17202 is a TCP CLIENT port (by virtue of only having the ESTABLISHED state).
Ah, so it seems hcitbllookup is simply an alias for tbllookup. I did indeed find tbllookup in the docs. Thanks for the heads-up.