symbolic links in site


  • Creator
    Topic
  • #48086
    William Grow
    Participant

    Hi,

    During some discussions concerning site architecture, we decided that we may want to try splitting a new environment into multiple sites. To increase the manageability of these separate sites, we wanted to use symbolic links in the second site to refer to directories in the first site. Specifically, we wanted to perform the following linkages (see the sketch after the list):

    site2/tclprocs -> site1/tclprocs

    site2/Xlate -> site1/Xlate

    site2/formats -> site1/formats

    site2/Tables -> site1/Tables
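
    A rough sketch of creating those links from hcitcl (or any Tcl shell). The site paths are assumptions (adjust them to the real HCIROOT and site names), and any existing directory is moved aside first so it can be restored later:

        set site1 /hci/root3.8.1P/site1    ;# assumed path to the first site
        set site2 /hci/root3.8.1P/site2    ;# assumed path to the second site

        foreach dir {tclprocs Xlate formats Tables} {
            set link [file join $site2 $dir]
            # Move a real directory aside; leave an existing symlink alone.
            if {[file isdirectory $link] && [catch {file readlink $link}]} {
                file rename $link $link.orig
            }
            if {[catch {file type $link}]} {
                exec ln -s [file join $site1 $dir] $link
            }
        }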

    Does anyone have comments, suggestions, or positive or negative feedback on trying this? Please note Jim’s mention below of how the engine toolset can break symbolic links.

    I found this article in the archives:

    Another approach is to use a ‘global’ site. That is, a Cloverleaf site which will ONLY be a repository for the objects (Tables, Variants, Tcl procs, etc.) which are needed in multiple sites. One would never expect to see any integrations actually defined here. Utilize the same symbolic linking technique.

    An advantage of the site technique is that when upgrading, the entire site can be upgraded taking advantage of the vendor-supplied scripts and tools. Also, the existing engine toolsets can be used for managing/maintaining the objects.

    A serious caveat when utilizing symbolic links: the engine configuration toolsets create a .bak file of the objects being edited. Because of the method being used to create the .bak file (probably a mv instead of a cp), the symbolic link actually stays with the .bak file rather than the source file. This effectively breaks the symbolic link. It is therefore imperative to use the configuration tools in the ‘global’ repository and to develop audit tools to assure links have not been broken. Otherwise, despite your best intentions, you could end up with many ‘local’ copies and the ‘global’ copy may become not only obsolete, but worthless.
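
    As a rough illustration of the kind of audit tool described here, a Tcl snippet along these lines could flag links that have quietly become local copies. The 'global' and local site paths are examples only, not anything from this thread:

        set globalSite /hci/root3.8.1P/global   ;# example 'global' repository site
        set localSite  /hci/root3.8.1P/site2    ;# example site that should only hold links

        foreach dir {tclprocs Xlate formats Tables} {
            set link [file join $localSite $dir]
            if {[catch {file type $link} type] || $type != "link"} {
                puts "BROKEN: $link is not a symbolic link"
            } elseif {[file readlink $link] != [file join $globalSite $dir]} {
                puts "SUSPECT: $link points to [file readlink $link]"
            }
        }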

    I have formally asked Quovadx to develop an understanding within the engine of the concept of ‘global’ objects and to support their existence natively within the engine environment.

    Additionally, the symbolic links in the HCIROOT tclprocs directory will disappear with a new release (the install rebuilds the directory without regard for what you might have placed there). I have written a Tcl proc which saves off the link information and restores the links under argument control. The proc is run prior to upgrade to save off the link info, then after to restore the links.
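
    The proc itself is not posted here, but the idea might look something like this sketch (the proc name and save-file format are made up for illustration):

        # Hypothetical helper: "save" records every symlink in a directory,
        # "restore" recreates any that an install has wiped out.
        proc hciroot_links {mode dir savefile} {
            switch -- $mode {
                save {
                    set fh [open $savefile w]
                    foreach f [glob -nocomplain [file join $dir *]] {
                        if {![catch {file type $f} t] && $t == "link"} {
                            puts $fh [list $f [file readlink $f]]
                        }
                    }
                    close $fh
                }
                restore {
                    set fh [open $savefile r]
                    while {[gets $fh line] >= 0} {
                        set link   [lindex $line 0]
                        set target [lindex $line 1]
                        if {[catch {file type $link}]} {
                            exec ln -s $target $link
                        }
                    }
                    close $fh
                }
            }
        }

    Run it once with save before the upgrade and once with restore afterwards, e.g. hciroot_links save $env(HCIROOT)/tclprocs /tmp/hciroot_links.sav.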

    There are many ways to architect your multi-site environment. Rarely have I seen such re-architecting introduce any performance issues (normally quite the opposite happens, as most people have eventually gotten themselves into a performance issue trying to stay with one site). It is worth the effort to have an intensive brainstorming session to decide what your goals for restructuring are and how best to achieve those goals.

    My personal belief is that a new environment should consider the use of a multi-site architecture from the start and lay the groundwork up front, thereby reducing the necessity to split a site.

    Please understand, architecting into multiple sites does NOT guarantee you will never have to do a site split. As time goes by and your integration architecture becomes more complex, you may find a need to revisit the issue. A good idea is to have a periodic review (every six months?) of your architecture and evaluate its effectiveness.

    One of the really good benefits of a multi-site environment is the ability to control the integration operational environment with greater granularity. Thus the need to take down a site (which happens from time to time) only affects the threads which reside in that site. Other administration activities (such as reporting) can also be improved. But there is no free lunch: some activities such as HACMP may become more complicated or complex.

    I have also asked Quovadx (through group conversations aimed at requesting enhancements) to make the engine toolset multi-site aware, such that defining and connecting together multiple sites is a non-event. When deciding what thread(s) to route to as destinations, for example, the GUI should present a hierarchy of defined sites and their threads. The Engineer then selects the appropriate destination (no matter what site it is in) and all of the necessary configuration is constructed ‘under the covers’ to make that happen (such as defining localhost ports to connect the various sites). Tools should be developed to assist in the management of multiple-site environments. As is obvious from the response to this question now and in the past, multiple-site production environments are here to stay and are growing. The engine should do a better job of facilitating that architecture.

    When I was at Oakwood Hospital, we were among the first to deploy a multiple-site architecture in a production environment (nearly eight years ago, can that be correct?). At that time I noticed the underpinnings in the engine to support what has been referred to as virtual hubs (domains).

    I have been predicting for a while now that the need for such cross-platform (as well as multi-site) support is coming fast. In my opinion, Quovadx would be well served by getting ahead of that particular curve.

    Whew!

    Jim Kosloskey


    Original Message


    From: Montoya, Francisco [SMTP:Francisco.Montoya@bannerhealth.com]

    Sent: Friday, October 10, 2003 4:21 PM

    To: Technical Issues

    Subject: [clovertech] RE: Looking for information about splitting up sites

    AIX 4.3.3

    QDXi 3.8.1

    When we split our environment into three sites, we had several Tcl procs, tables, variants and Xlates that were needed in all three new sites. Instead of having three versions of these files (one in each site), we created a shared directory under the hci root directory and then created a link to this directory.

    Our directory structure:

    /hci/root3.8.1P/prod_formats

    /hci/root3.8.1P/prod_xlate

    /hci/root3.8.1P/prod_tables

    /hci/root3.8.1P/prod_tclprocs

    Then within each site there is a link pointing back to the directory created above the site level for each of these directories (a sketch of setting this up follows the list):

    formats -> /hci/root3.8.1P/prod_formats

    Xlate -> /hci/root3.8.1P/prod_xlate

    Tables -> /hci/root3.8.1P/prod_tables

    tclprocs -> /hci/root3.8.1P/prod_tclprocs
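
    A rough sketch of wiring that up for several sites at once. The site names are examples only; the shared directories match the layout above, and any real directory already present in a site is left untouched (move it aside first if it should become a link):

        set root  /hci/root3.8.1P
        set sites {site_adt site_lab site_rad}   ;# example site names

        foreach {local shared} {
            formats  prod_formats
            Xlate    prod_xlate
            Tables   prod_tables
            tclprocs prod_tclprocs
        } {
            file mkdir [file join $root $shared]
            foreach site $sites {
                set link [file join $root $site $local]
                if {[catch {file type $link}]} {
                    exec ln -s [file join $root $shared] $link
                }
            }
        }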

    This has been a great time saver for us, and we have avoided a management nightmare by not having to manage three versions of the same file.

    Francisco Montoya

    Senior Programmer Analyst

    Banner Health

    1441 N 12th Street

    Phoenix AZ 85006

    Phone: 602-495-4971

    francisco.montoya@bannerhealth.com


    Original Message


    From: Jason Alexander [mailto:jalex@u.washington.edu]

    Sent: Thursday, October 09, 2003 3:38 PM

    To: Technical Issues

    Subject: [clovertech] Looking for information about splitting up sites

    We currently run one site on our production machine and we’re seeing some performance problems/degradation. (We’ve got lots of development sites on a different machine, but the prod machine only runs the prod site.)

    Our primary is a cluster of 2 IBM RS6000 M80s with 4 CPUs each, 6 Gig of real RAM and 4 Gig of virtual RAM. Our production site is 20 processes with roughly 100 (user-defined) threads. When we run HCI commands today (hciconnstatus, hciprocstatus, etc.) it is not unusual to have the command time out. We’ve also seen general performance degradation as the number of threads has grown.

    I’m looking for folks who have split one functional site into two or more sites, for any information you might have on:

    1) How to do so with minimal disruption to the user community

    2) Information about system performance that might affect the choice of configuration (how many threads can a site handle? Aside from disk usage, is there any other penalty to running many sites that might dissuade me from splitting into 10 sites instead of 2?)

    3) Experience regarding sending messages from one site to another (we already use localhost threads to minimize interprocess communication, which has in the past caused us severe state 7 latency, so I suspect that this will be a similar process)

    I’d be just as happy to take this discussion to private e-mails as well, so that we don’t subject the whole list to follow-up questions.

    Jason Alexander

    Systems Programmer

    UW Medicine ITS

    (206)685-8129


  • Author
    Replies
    • #57597
      Jim Kosloskey
      Participant

      Bill,

      I will be at the User Conference next week.

      I am already going to discuss this topic with someone else. If you are going to be there we can discuss this in more detail there.

      For now, I recommend setting up a ‘dummy’ site to house your ‘global’ engine objects (Tables, etc.) and having every site that needs them link to that set of directories.

      The ‘dummy’ site would not have any integrations actually defined but simply be a repository for global objects.

      Jim Kosloskey

      email: jim.kosloskey@jim-kosloskey.com 29+ years Cloverleaf, 59 years IT - old fart.

    • #57598
      Richard Hart
      Participant

      William.

      We have been using links for many years. We have roughly (there are go-lives in progress) 15 types of application used in 13 hospital sites, and we have 70 active production sites.

      We use links for tclprocs at a ‘global’ level, so that the same tclprocs (we use Tcl for almost all our translations) are available to all roots, and have the others at a ‘local’ level.

      i.e.

      We are ‘InfoHEALTH’ and have /hci/InfoHEALTH/src/tclprocs/app1|app2|app3…

      and /hci/rootx.x.xP/InfoHEALTH/tclprocs|Tables|Xlate etc

      We believe that this provides significant savings in maintenance, etc.

      An example is an update to a new Tcl translation script. We create a second directory containing the new code, and the update is simply to change the link and bounce the threads. If we need to back out the change, the link is reverted and the threads are bounced again.

      The ‘real’ benefit comes because we can test the actual production code by using a ‘dummy’ production site.

      We use RCS IDs in all Tcl scripts and output them to a log file on thread startup, so we use a shell script to change sites, move links, bounce threads and prove that the correct revision is installed.
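
      As a rough illustration of that switch-and-bounce step, a Tcl helper along these lines could repoint the link and keep the old target for a back-out. The directory and link names are invented for illustration, and bouncing the threads and checking the logged RCS IDs is left to whatever site tooling is normally used:

          # Hypothetical helper: repoint a link at a new code directory and
          # return the old target so the change can be backed out.
          proc repoint {linkPath newTarget} {
              set old ""
              catch {set old [file readlink $linkPath]}
              file delete -- $linkPath          ;# removes only the link, not its target
              exec ln -s $newTarget $linkPath
              return $old
          }

          # Cut over to the directory holding the new revision (names illustrative) ...
          set previous [repoint /hci/InfoHEALTH/src/tclprocs/app1/current \
                                /hci/InfoHEALTH/src/tclprocs/app1/rev_new]
          # ... bounce the threads, verify the RCS IDs in the log, and if a
          # back-out is needed simply revert the link:
          #   repoint /hci/InfoHEALTH/src/tclprocs/app1/current $previous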

      I hope this helps

    • #57599
      William Grow
      Participant

      Jim,

       Two of my company’s senior integrators, Marcelo Trujilo and Kevan Riley, will be present. I couldn’t make it this time. Is it a bad idea to link directories from one site to another active site?

    • #57600
      Anonymous
      Participant

      As Jim mentioned, there can be problems with symbolic links and caution would need to be exercised. Symlinking directories will give you a visual indication, when you do ls -l, that there are symlinks. You could use hard links, but these don’t work across filesystems (if you split your sites across filesystems) and are not as easy to see. tar (AIX) does not follow symlinks without the -h option, so caution must be used here also.

      A revision control system like CVS would allow for common code and has the advantage of allowing one site not to be at the same revision as another site. A problem may arise from Quovadx’s lack of support for # comments in some files, which would be needed for revision control.

      With revision control you could inventory all sites for current revisions, see who is at a different revision (comments in the revision would tell you why), etc.

      I opt to replace the standard tclproc header with one that has accountability and revision information (also used in local libs). The usefulness is, for instance: a local lib proc is at rev 1.2a on one site and 1.3 on another (kept in an accessible variable as well as in comments). A tclproc is made and inserted into the sites; in its start switch the tclproc checks for the minimum revision of the lib needed, and if the revision is less it sets a flag that will hold the messages in an unprocessed state and echoes the problem to the process log.
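
      A hedged sketch of that start-mode check follows. The proc name, variable and revision values are made up; the ‘hold’ behaviour in run mode (returning no disposition) is shown only as a placeholder and should follow whatever convention the site actually uses for held messages:

          proc lib_rev_guard { args } {
              global lib_rev_ok
              keylget args MODE mode
              switch -exact -- $mode {
                  start {
                      set needed 1.3
                      # myLibRevision is a hypothetical variable set by the shared lib.
                      set have [expr {[info exists ::myLibRevision] ? $::myLibRevision : 0}]
                      # Plain numeric compare; revisions like 1.2a or 1.10 would need
                      # a proper version comparison instead.
                      set lib_rev_ok [expr {$have >= $needed}]
                      if {!$lib_rev_ok} {
                          echo "lib_rev_guard: library rev $have is below required $needed"
                      }
                      return ""
                  }
                  run {
                      keylget args MSGID mh
                      if {!$lib_rev_ok} {
                          echo "lib_rev_guard: holding message until the lib is updated"
                          return ""   ;# leave the message unprocessed (site-specific handling)
                      }
                      return "{CONTINUE $mh}"
                  }
                  default {
                      return ""
                  }
              }
          }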
