I suspect the default value of the global variable tcl_precision changed between the Tcl implementation in Cloverleaf 5.8 and the ones in 6.0 (the problem exists there too) and 6.1. I got this excerpt from the Princeton University Tcl page:
The global variable tcl_precision determines the number of significant digits that are retained when floating values are converted to strings (except that trailing zeroes are omitted). If tcl_precision is unset then 6 digits of precision are used. To retain all of the significant bits of an IEEE floating-point number set tcl_precision to 17; if a value is converted to string with 17 digits of precision and then converted back to binary for some later calculation, the resulting binary value is guaranteed to be identical to the original one.
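To make the round-trip claim concrete, here is a small check you can run in any Tcl shell (a sketch, assuming a Tcl where tcl_precision is writable, as it is in hcitcl below):

set tcl_precision 17
set x [expr {1.0/3}]
set s "$x"                ;# convert the double to a string at 17 digits
set y [expr {$s + 0.0}]   ;# convert the string back to binary
puts [expr {$x == $y}]    ;# prints 1: the round trip is exact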
So then I tried it out using hcitcl. Note the base value is 0 when the shell is launched, and I suspect the same holds from within a Cloverleaf TPS proc. Then I set it to 16, one less than the IEEE figure above, and that appears to fix the issue. Then I set it to the IEEE figure of 17, and look what happens to the 1/3 example!
$ hcitcl
hcitcl>echo $tcl_precision
0
hcitcl>expr 2.8/7
0.39999999999999997
hcitcl>expr 1.0/3
0.3333333333333333
hcitcl>set tcl_precision 16
16
hcitcl>expr 2.8/7
0.4
hcitcl>expr 1.0/3
0.3333333333333333
hcitcl>set tcl_precision 17
17
hcitcl>expr 2.8/7
0.39999999999999997
hcitcl>expr 1.0/3
0.33333333333333331
hcitcl>
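Since I suspect the same default applies inside the engine, here is a hypothetical sketch of where you could pin it down in a TPS proc. The proc name is made up and I have not run this inside an engine, but it follows the usual TPS skeleton:

proc tps_set_precision { args } {
    global tcl_precision
    keylget args MODE mode
    set dispList {}
    switch -exact -- $mode {
        start {
            echo "tcl_precision starts at: $tcl_precision"  ;# verify my suspicion above
        }
        run {
            keylget args MSGID mh
            set tcl_precision 16        ;# 16 digits hides the 2.8/7 artifact
            lappend dispList "CONTINUE $mh"
        }
        time - shutdown { }
    }
    return $dispList
}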
There is another interesting article here: http://wiki.tcl.tk/1650. Check out the 3*1.4 example in the article!
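For what it is worth, working through the IEEE arithmetic by hand, I would expect the 3*1.4 example to behave like this in the same shell (worth double-checking on your end):

hcitcl>set tcl_precision 16
16
hcitcl>expr 3*1.4
4.199999999999999
hcitcl>set tcl_precision 17
17
hcitcl>expr 3*1.4
4.1999999999999993

So 16 digits cleans up 2.8/7 but not 3*1.4; the artifact just moves to a different expression.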
I don’t think there is an answer to your question, but it appears tcl_precision is your culprit!
Hope this helps,
Robert Milfajt
Northwestern Medicine
Chicago, IL