Deployment Strategies: Do You Compile ObjectScript on a Production Site?
Hi, Community!
Please share your experience with code deployment to a production site. Do you compile ObjectScript on production? Is that OK?
Or do you only compile on a test site and copy CACHE.DAT to production?
If DATA and CODE are separated, then taking over the CACHE.DAT from a final test environment could be an option.
But as the default for a namespace is DATA+CODE, and this setup is widespread even in large applications in real environments, recompiling is the only possibility. Many years back a special change was even implemented in Caché to support compiling during runtime of the code.
I personally dislike both and have fought for a clear separation of CODE from DATA. With very limited success.
I totally agree with Robert. I much prefer the separation of routines and data, but I do like the ability to simply replace cache.dat, assuming nothing else is going on.
For me, one downside of replacing cache.dat for the routines is that you have to stop the Caché instance to allow the replacement at file level.
I also dislike the compile options. I had a problem where (my mistake) I compiled the main class assuming dependent classes would also be compiled. That was a mistake: none of the dependent classes knew of my change.
That problem was solved by adding extra letters to the compile options; in particular, I was told to add "bry". I've not had a problem since, but if InterSystems knows this, then why not make that the default? (I also had to add those letters to the default of all users that could issue the compile command - a real pain.)
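As a hedged illustration (the class and package names below are placeholders, not from the thread), those qualifiers can be passed explicitly when compiling; run do $System.OBJ.ShowFlags() to see what each letter means on your version:

    // Compile a single class with the extra "bry" flags added to the usual "cuk"
    do $System.OBJ.Compile("My.Package.MainClass", "cukbry")

    // Or compile the whole package so dependent classes in it are not left behind
    do $System.OBJ.CompilePackage("My.Package", "cukbry")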
Hi Robert and all
You can achieve the separation quite easily with routine and class package mapping.
I have a client with overseas affiliates; they all share the same code/class base, but each has its own namespace with routines and packages mapped to the master namespace.
Works just fine.
The only issue is that developing the code is more complex, as the different affiliates have started to need different functionality starting from the same base screen.
Peter
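A hedged sketch of creating such mappings programmatically (the namespace, database, package, and routine names are placeholders; the same can be configured through the Management Portal's namespace mappings):

    // Run in the %SYS namespace
    new $namespace
    set $namespace = "%SYS"

    // Map the class package App.Core into an affiliate namespace from the master code database
    kill props
    set props("Database") = "MASTERCODE"
    set sc = ##class(Config.MapPackages).Create("AFFILIATE1", "App.Core", .props)

    // Map routines with the APP prefix the same way
    kill props
    set props("Database") = "MASTERCODE"
    set sc = ##class(Config.MapRoutines).Create("AFFILIATE1", "APP*", .props)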
Just realised that copying the Cache.dat is only sensible for a single developer
If you have more than one developer working on different projects all deploying to the same Test server then copying won't work - you would get bits and pieces from different projects
Even for a single developer it's a bit dodgy - you could be working on two or more projects at the same time - waiting on user acceptance testing - if one passes then you want to deploy that to live but not the others.
The more I think about it, the more I believe that my method of working is the only one that works for all possibilities - if anyone has a better method, please tell.
Peter
"...resolving the dependencies in the correct order."
That's probably the weakest spot and often requires manual intervention.
Just had to do this at a client.
And I missed out another thing that I have to do...
This is a Zen app that has custom components subclassed from Zen - these create CSS and JS files.
But only in the master namespace/CSP application.
These files need to be copied to the other namespace/CSP applications.
Peter
We added "bry" to the default compiler flags
do $System.OBJ.ShowFlags()
You can set them for a namespace or system-wide
SetFlags(flags,system)
remember that my "bry" code was in addition to the code that already exists, so my settings now read "cukbry"
kevin
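Putting those calls together, a minimal hedged example (the flag string is the one from this thread; check ShowFlags() for the meanings on your version):

    // Show the available compile flags/qualifiers and their meanings
    do $System.OBJ.ShowFlags()

    // Make "cukbry" the default for the current namespace only
    do $System.OBJ.SetFlags("cukbry", 0)

    // Or make it the system-wide default
    do $System.OBJ.SetFlags("cukbry", 1)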
Hi, Kevin!
This is useful, thank you. My default for the development environment is "cuk", which keeps the INT code for debugging.
Kevin,
We have the following solution when the code and data .DATs are separated: we have a CODE1 .DAT and a CODE2 .DAT. Whichever code .DAT isn't being used by the namespace, we overwrite that file and then switch the namespace to point to the new code database. This keeps our deployment downtime to only a few milliseconds.
The only situation we've run into is that this doesn't seem to be ideal for Ensemble without stopping the Ensemble production.
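A hedged sketch of the switch-over step (not the poster's actual code; "APP", "CODE1", and "CODE2" are placeholder names), repointing the namespace's routine database once the idle .DAT has been overwritten with the new build:

    // Run in %SYS: repoint the APP namespace's code database from CODE1 to CODE2
    new $namespace
    set $namespace = "%SYS"
    kill props
    set sc = ##class(Config.Namespaces).Get("APP", .props)
    if sc {
        set props("Routines") = "CODE2"   // database holding the compiled code
        set sc = ##class(Config.Namespaces).Modify("APP", .props)
    }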
It's indeed incredibly simple. You just have to bend your fingers a little bit.
And the virtual namespace %ALL even allows you to have your own "SYSLIB"-like behavior with common code.
Indeed. IMO, %ALL is an unnecessarily well-hidden secret and I wish new installations created %ALL by default. See this post of mine for more info.
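A hedged sketch, assuming your version accepts "%ALL" as the namespace name in Config.MapPackages (the database and package names are placeholders); a mapping created under %ALL applies to every namespace:

    // Run in %SYS: map a common-code package into all namespaces via %ALL
    new $namespace
    set $namespace = "%SYS"
    kill props
    set props("Database") = "COMMONCODE"
    set sc = ##class(Config.MapPackages).Create("%ALL", "Common.Utils", .props)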
Hi John
Well hidden indeed - you would only find it by reading every line of the docs for each release
Peter
Hi, Peter!
I never mentioned deploying cache.dat on a test server. Only for the production site. Compilation on a test server is OK.
Compilation on production can be unsuccessful - what do you do in that case?
Hi Evgeny
What situation do you have in mind that could cause the compilation to be unsuccessful?
With proper version control and release procedures this has rarely happened in my experience - when it does, it's been due to unresolved dependencies, and in that case a re-compile fixes it.
There is one possibility where it *could* happen - that is, if the VC system allows multiple reservations/branches for the same class - but we don't allow that.
= =
I can't see how deploying/copying the Cache.dat will avoid problems when you have multiple developers or multiple projects on the test server.
= =
I guess the only 100% safe way is to have a staging server where a deployment can be copied and tested before deploying to the Live server - in this case it is tightly controlled, and copying the Cache.dat is possible.
Peter
Hi, Peter!
E.g. compilations using a projection, where the result of the compilation can be totally unpredictable.
Also, compilation can be a time-consuming process compared to replacing the cache.dat file - so it potentially means a longer pause in production operation.
I'm not saying that the copy-cache.dat strategy should be used for a test server. Indeed, we can compile the branch on a build/test server and then transfer cache.dat to production if testing goes well.
Hi Evgeny
Fascinating conversation.....
I am aware of projections but don't use them in my systems.
I think there is some confusion when I use the term "Test" server - in my usage it is used for End User Acceptance Testing, and there is usually more than one project/release undergoing End User Acceptance Testing at any one time - copying the Cache.dat would take over releases that are not ready to go.
= =
I guess it depends on the nature of the operation - I work for individual clients rather than having a monolithic product - and (as above) there will be several projects on the go at any one time for each client - so what I do works for me.
If there is a possible problem with the compile (your projections) then, I think, the solution is a staging server - individual releases are deployed to it, and once proven, that cache.dat is copied to the Live server.
My method works in my situation
I guess there is no single "correct" solution that will work for all cases.
Peter
Hi Peter!
Sure, that's why I raised the topic - to gather the best practices of "what works" in production, preferably "for years".
Thanks for sharing your experience.
BTW, do you want to share your Source Control library on DC someday?
Hi Evgeny
The source control library is not mine - it was a commercial product created by GlobalWare - sadly it did not make it commercially, and the company went defunct a few years ago.
But...
The main owner of the company now works for ISC in the Boston WRC - Jorma Sunamo by name - maybe you should contact him directly (jorma.sunamo@intersystems.com) to discuss.
Peter
Hi All
The way I work is: personal development machines - deploy to a Test server for User Acceptance Testing - deploy to Live.
The version control system that I use is TrakWarePro from Globalware - sadly no longer in existence - but it works for us. Not only to maintain versions but to ship releases between the three environments.
When deploying, the classes need to be compiled (obviously), but I don't trust ISC to compile *all* the required classes, SQL statements, etc. Neither does $System.OBJ.CompileAll() work 100% in resolving the dependencies in the correct order.
Also a release will need to set SQL access on tables for any new data classes.
So I have developed a do ##class(setup.rCompileAll).doit() method that does all that's necessary - compiles in the correct order, sets the SQL access, etc.
Usually a deployment will require changing data/updating indices/adding pages to the access database etc etc - so there is usually a setup class that contains the code to do this.
And all this has worked 99.9% of the time over 10-plus years - I can't actually remember when it went wrong, but nothing in this world is 100%.
The downtime can be as little as 5 minutes for a simple release or up to 1 hour or so if the data changes are complex.
The only downside is that the system is unavailable to the users whilst this process is happening - I know about the technique of using mirroring, updating one mirror and then swapping, but this is overkill for my users.
Peter
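Peter's setup.rCompileAll class isn't shown in the thread (it is asked about below); purely as a hedged illustration, a deployment wrapper of that general shape might look something like this - every package, schema, and role name here is a placeholder:

    /// Illustrative sketch only - NOT Peter's setup.rCompileAll class.
    /// Compiles packages in an explicit order, then grants SQL access on the
    /// application schema (all names here are placeholders).
    Class setup.CompileSketch
    {

    ClassMethod DoIt() As %Status
    {
        set sc = $$$OK
        // Compile packages in an explicit order rather than trusting
        // $System.OBJ.CompileAll() to resolve every dependency
        for pkg="App.Data","App.Logic","App.UI" {
            write "Compiling package ",pkg,!
            set sc = $System.OBJ.CompilePackage(pkg, "cukbry")
            if 'sc quit
        }
        if 'sc quit sc

        // Grant SQL access on the application schema to an application role via dynamic SQL
        set rs = ##class(%SQL.Statement).%ExecDirect(, "GRANT SELECT,INSERT,UPDATE,DELETE ON SCHEMA App_Data TO AppRole")
        if rs.%SQLCODE<0 set sc = $$$ERROR($$$GeneralError, "GRANT failed: "_rs.%Message)
        quit sc
    }

    }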
Hi Peter,
I know it's an old post, but would you mind sharing what your "do ##class(setup.rCompileAll).doit()" method or the entire setup.rCompileAll class looks like?
Regards
Hi, I am new to IRIS and we are planning to set up a CI pipeline on an AWS VM deploying the IRIS data platform container. I am trying to find out which folders need to be inside source control and to which exact folder in the container the updated code needs to be pulled. I would be much obliged if anyone can point me to the CI/CD-related documentation.
Thanks,
Raj