We use Cloverleaf tables, which are database tables, from within shell scripts. That said, we still need Cloverleaf: the tables have to live somewhere. Maybe someone has figured out how to use a Cloverleaf table whose definition is not stored in Cloverleaf, but we have not.
Sometimes we use our Master site to hold Cloverleaf database tables, because the tables are not related to any particular interface or business process associated with a Cloverleaf site. I believe the Network Monitor has to be bounced once after creating the table in the site. But we also store such Cloverleaf DB tables in an "application" site; the example below uses site "chargeprd".
Here is an example. We have a shell script that functions as the job (parent-level processing); it calls a utility script (Generic_Charges_Report_New.tcl) which invokes the DB table:
dbl_prGenericCharges_Report
which is an Advanced Database Lookup stored procedure:
{?=CALL prGenericCharges_Report( <@StartDateTimeString>, <@EndDateTimeString>, <@Source>, @Count1V OUT, @Count2V OUT, @RCode OUT, @RDesc OUT )};
IN: @StartDateTimeString,@EndDateTimeString,@Source
OUT: _CLRC_,RS_tstamp,RS_mrn,RS_ecd,RS_c_svccode,RS_c_svcdate,RS_prlocation,OUT_@Count1V,OUT_@Count2V,OUT_@RCode,OUT_@RDesc
Notice that the output has both record-set (RS) data and stored-procedure output (OUT) variables. You may not have had both RS and OUT data on the same stored procedure before: the result data has multiple rows, and each row has multiple data items (the RS items). Tacked onto the end of each row are the output items (OUT), and those values are the same on every row, so they are effectively duplicate data.
with this code:
set result [dblookup -maxrow 999999 dbl_prGenericCharges_Report $startdatetime $enddatetime $datasource]
if {$debug} {echo "[gts] $result RESULTS"}
if {$debug} {debugw "[gts] $result RESULTS"}
set rows [split "$result" "\n\r"]
set numrows [llength $rows]
if {$debug} {echo "[gts] $module results had $numrows rows"}
if {$debug} {debugw "[gts] $module results had $numrows rows"}
FYI: 'debugw' writes the information to a file, while 'echo' writes to the parent shell script. Don't forget to set -maxrow to a high number, to get all your output rows.
The utility script goes on to process each row in a loop. FYI: the utility script is Tcl; the parent script is a UNIX ksh shell script.
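The per-row loop might look something like the sketch below. This is a hypothetical illustration, not our actual script: it assumes each row in $rows parses as a Tcl list of fields, and the variable names (count1, rcode, rsfields, etc.) are made up for the example. Adjust the parsing to match what your ODBC driver actually returns.

```tcl
# Hypothetical per-row loop. The RS_* fields come first in each row;
# the four OUT_* values (OUT_@Count1V, OUT_@Count2V, OUT_@RCode,
# OUT_@RDesc) are repeated at the end of every row, so grab them once
# from the first row and drop them from each row's field list.
set firstrow 1
foreach row $rows {
    if {$row eq ""} { continue }          ;# skip blanks left over from the split
    set fields $row                        ;# assuming the row parses as a Tcl list
    if {$firstrow} {
        lassign [lrange $fields end-3 end] count1 count2 rcode rdesc
        set firstrow 0
    }
    set rsfields [lrange $fields 0 end-4]  ;# just the record-set columns
    # ... process $rsfields (RS_tstamp, RS_mrn, RS_ecd, and so on) ...
}
```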
This is from the parent 'job' shell script. The variable clsfx is the environment suffix, tst or prd.
prc=0
omtstart
echo "$(date +%Y%m%d%H%M%S)"
# set site charge – to get to table dbl_prGenericCharges_Report
setsite charge${clsfx}; clCheckSite "$prc" "setsite" "charge${clsfx}" ; prc=$?
# run query SQL – this produces output to a file
Generic_Charges_Report_New.tcl "$startdatetime" "$enddatetime" "$email_fr" "$email_to" "$email_su" "$runtime" "$datasource" ; prc=$?
# set audit return code for the entire job and echo a timestamp to the log
omtend
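Our clCheckSite is a site-specific helper, so as an aside, a generic stand-in for that kind of return-code bookkeeping might look like the following. Everything here (check_rc, the messages, the "keep the worst rc" policy) is a made-up sketch, not the real helper:

```shell
# Hypothetical return-code helper: keep the worst exit status seen so
# far and log any step that fails, so the job's final rc reflects the
# whole run.
check_rc() {
    # $1 = running return code, $2 = step name, $3 = step's exit status
    rc=$1
    if [ "$3" -ne 0 ]; then
        echo "$(date +%Y%m%d%H%M%S) step '$2' failed with status $3" >&2
        [ "$3" -gt "$rc" ] && rc=$3
    fi
    echo "$rc"
}

prc=0
true                                      # a step that succeeds
prc=$(check_rc "$prc" "good-step" $?)
false                                     # a step that fails
prc=$(check_rc "$prc" "bad-step" $?)
echo "final rc=$prc"                      # prints: final rc=1
```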
When you run Tcl in a batch script you need something like this at the top (the shell executes the exec line, while Tcl skips it because the backslash continues the comment):
#! /usr/bin/ksh
# The following line is seen as a continued comment by Tcl\
exec $QUOVADX_INSTALL_DIR/integrator/bin/hcitcl "$0" ${1+"$@"}
At the bottom of the Tcl script, after all the subroutines, is the main Tcl code; here is a snippet:
# main routine
set startdatetime [lindex $argv 0]
set enddatetime [lindex $argv 1]
set email_fr [lindex $argv 2]
set email_to [lindex $argv 3]
set email_su [lindex $argv 4]
set runtime [lindex $argv 5]
set datasource [lindex $argv 6]
When you are building and testing, it is very helpful to use the hcitcl utility on the command line: it gives you another shell, where you can set variables and then invoke your dblookup stored-procedure call interactively. You get the results back on your screen (or into a variable). This is really helpful for making sure the database part of your solution works before running it out of a bigger script. Don't forget to do a setsite first.
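An interactive session might look roughly like this. The prompt shown and the sample argument values (dates, "SAMPLE" as a data source) are illustrative only; substitute your own:

```
$ setsite chargetst
$ hcitcl
% set result [dblookup -maxrow 999999 dbl_prGenericCharges_Report "20240101000000" "20240131235959" "SAMPLE"]
% echo $result
```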
hope this helps
Peter
Peter Heggie
PeterHeggie@crouse.org