Looking for a site using CL 6.0 to chat


  • Creator
    Topic
  • #53693
    Jennifer Hardesty
    Participant

We’re going to be upgrading from CL 5.7 to CL 6.0 later this year. We have questions that are more about the real nitty-gritty and less about the sales demo. We’d love to talk to someone (or a bunch of someones) who already has it installed, even if just in a test environment.

    • Author
      Replies
      • #78567
        Russ Ross
        Participant

I would suggest including the operating system your Cloverleaf servers are running under, because my knowledge applies to Cloverleaf 6.0 running under AIX 6.1 in a two-node active/passive HACMP cluster.

I have done a scratch install of AIX 6.1 and Cloverleaf 6.0 in a two-node HACMP cluster for both TEST and PROD.

Then we did a read-only NFS mount of our AIX 5.3 Cloverleaf 5.6rev2 HA cluster onto our new Cloverleaf 6.0 HA cluster and ran hcirootcopy, which has upgraded a handful of test sites from Cloverleaf 5.6rev2 to Cloverleaf 6.0 so far.
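In outline, that mount-and-copy step looks like this (the host name and paths are illustrative, and hcirootcopy’s exact arguments vary by release, so it is shown only as a comment):

    # read-only NFS mount of the old 5.6rev2 root onto the new 6.0 cluster
    mount -o ro oldnode:/hci/cis5.6 /mnt/cis5.6

    # then, from the new 6.0 root's environment, run hcirootcopy against the
    # mounted old root to pull each test site across, e.g.:
    #   hcirootcopy <old-root> <site>   (see the 6.0 reference for exact syntax)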

I’m enhancing the HA script this week, but I expect to start upgrading sites fast and furious as early as next week, then proceed to data validation to gain confidence that Cloverleaf 6.0 is generating the same messages as Cloverleaf 5.6rev2.

We have not gone live with any sites, but I think we have gotten past most of the nitty-gritty.

I haven’t done an in-place upgrade in the last 10 years because it increases the risk of mishap and raises my stress level considerably.

With this scratch-install, read-only-NFS-mount, and upgrade approach, the benefits are so much greater that I feel like I died and went to heaven.

The nitty-gritty is still there, but without any of the risk or stress, because live production is completely insulated from harm, and the OS upgrade is clean since it is done from scratch, as is Cloverleaf 6.0.

          Russ Ross
          RussRoss318@gmail.com

        • #78568
          Peter Heggie
          Participant

            We upgraded from 5.8.4 to 6.0 in Production last month. We also run on AIX 6.1 TL7 with HACMP.

            Our local software configuration is:

            /hci/cis5.8/integrator…

            /hci/cis6.0/integrator…

            Our SAN (HACMP-managed) drive configuration is:

            /prod/cis5.8/site1/…

            /prod/cis5.8/site2/…

            /prod/cis6.0/site1/…

            /prod/cis6.0/site2/…

With softlinks such as /hci/cis6.0/integrator/site1 -> /prod/cis6.0/site1, and so on for each site.

            We have another SAN drive mounted to each server that contains the installation files, and installed from there to the above local directory.

We also performed the hcirootcopy to copy a 5.8 site to the 6.0 (local) location; once that completed, we moved the entire new site folder to the SAN drive (/prod/cis6.0/site1) and then created the link in the local folder pointing to the SAN location.
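In outline, that copy-then-relocate sequence was roughly this (site name and paths illustrative):

    # hcirootcopy has already landed the upgraded site in the local 6.0 root;
    # relocate it to the HACMP-managed SAN volume
    mv /hci/cis6.0/integrator/site1 /prod/cis6.0/site1

    # then point the local root back at the SAN copy
    ln -s /prod/cis6.0/site1 /hci/cis6.0/integrator/site1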

            We were cautioned to go into NetConfig using the new GUI and save it, which applied internal reconfiguration (upgrade).

We had to update server.ini, in the firewall section, to add the rmi_exported_server_port entry for the HACMP service address. We also had to remember to copy over some customizations for encoding and EBCDIC-to-ASCII conversions.
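For reference, the entry looks something like this (the section and key are as described above; the port value is a placeholder, so use whatever port is assigned to your HACMP service address):

    [firewall]
    rmi_exported_server_port=58000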

The hcirootcopy changed all the SMAT files’ timestamps to the current date, so we had to work out each file’s time range from the Julian date and counter number.

We had to update some hardcoded directory names in some HACMP scripts, and we had to fix a directory name in the /etc/environment file.
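A generic way to flush those out before a failover surprises you (the pattern and the script directory are illustrative; your script locations will differ):

    # old-root references in /etc/environment
    grep -n 'cis5\.8' /etc/environment

    # and in any custom HACMP scripts (directory varies by shop)
    find /usr/es/sbin/cluster/local -type f | xargs grep -l 'cis5\.8'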

We got a little tight on space on the SAN drive and had to manually delete selected files.

            We had to manually create additional links in the 6.0/integrator directory that pointed to the smat archive folders (for very old archives).

            Using a master site with HACMP caused problems so we backed that out (of Test) and did not use it at Production.

            Inter-site routing still has a problem with (old) ports in use, so we dropped that project in Test.

We have had an intermittent issue (or maybe this is normal) with resends from a SMAT file. When we use a string/regex filter to create a view (like ‘ADT’) and then selectively remove other records from the result (like a date range), the resubmission seems to work, but the messages never actually get put on a queue. I’ve seen this three times, but only on a single SMAT file (this was two weeks after the upgrade).

Otherwise it’s been fine. We will be creating another site by splitting an existing one; the icons on the new NetConfig and NetMonitor are too cluttered now.

            We have used the database protocols in Test and will be using them in Production soon. As I related in another thread, the ODBC ini file had a slight change between 5.8 and 6.0 for the SQLServer driver stanza name.
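The shape of the change was just a renamed stanza in the ODBC ini file (both stanza names below are placeholders, not the real ones; see that other thread for the specifics):

    # 5.8-era entry (placeholder name)
    [SQLServer-58-stanza]

    # 6.0-era entry (placeholder name)
    [SQLServer-60-stanza]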

We have four production and five test sites for two facilities.

            Hope this helps.

            Peter Heggie

          • #78569
            Russ Ross
            Participant

              Peter:

              When you say,

              Using a master site with HACMP caused problems so we backed that out (of Test) and did not use it at Production.

              I’m curious what problems you observed.

We are using a master site with HACMP, and at this early stage we are not aware of any show-stopper issue with the master site in Cloverleaf 6.0 under HACMP, but time will tell once we are live.

We did notice that hcixltconvert was ignorant of the existence of the master site.

              Russ Ross
              RussRoss318@gmail.com

            • #78570
              Peter Heggie
              Participant

Actually, I don’t think it was a problem with 6.0 so much as a problem with HACMP.

                I’m still fuzzy about what happened, but after a failover of our prod instance onto our test server, the default root was (still) set to the test master site instead of the prod site (which should have been set by the HA logon scripts). We did not have a master site set in prod.
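For what it’s worth, the kind of guard the logon script could apply looks something like this (the mount test, paths, and variable here are illustrative assumptions, not our actual script):

    # hypothetical: pick the default root based on which SAN volume is online here
    if mount | grep -q '/prod/cis6.0'; then
        export HCIROOT=/hci/cis6.0/integrator          # prod is active on this node
    else
        export HCIROOT=/hci/cistest6.0/integrator      # test root (path hypothetical)
    fi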

                Peter Heggie

              • #78571
                Russ Ross
                Participant

Our PROD environment is on its own active/passive HA cluster, while TEST is on a separate active/passive HA cluster, so the two are totally independent of each other.

It sounds like you have an active/active HA cluster, with PROD running on one node and TEST on the other; when PROD fails over to the TEST node, I assume TEST is shut down and unavailable until PROD fails back to its primary node.

                  If that is the case, I can see how that could be more challenging.

                  Russ Ross
                  RussRoss318@gmail.com

                • #78572
                  Peter Heggie
                  Participant

We like to do things the hard way. And we’re cheap.

                    Peter Heggie
