Hi Pavel,

Your solution sounds interesting.

2) when mirror/backup goes up we use ZMIRROR hooks

Do you scan the queue only on backup node startup, as this is the time when ZMIRROR hooks are called? What if the node is not restarted for several months? Can the queue become so long that it delays the completion of startup?

Vitaliy, thanks for the contribution. It seems to bust another myth: that Xecute is always slower than $[class]method. I've slightly reformatted the output of your ClassVsInst() method just to make the results easier to compare. Here are mine (using an i5-4460 @ 3.20 GHz):

 USER>d ##class(Scratch.test).ClassVsInst(1e7)
Cache for Windows (x86-64) 2017.2.2
 
dummyClass10*   total time = 2.669215 avg time = .0000002669215
dummyClass10    total time = 2.375893 avg time = .0000002375893
XdummyClass10   total time = 2.676997 avg time = .0000002676997
 
dummyInst10     total time = 2.221366 avg time = .0000002221366
XdummyInst10    total time = 2.276357 avg time = .0000002276357
 
dummyClass5*    total time = 2.540907 avg time = .0000002540907
dummyClass5     total time = 2.232347 avg time = .0000002232347
XdummyClass5    total time = 2.541123 avg time = .0000002541123
 
dummyInst5      total time = 2.070013 avg time = .0000002070013
XdummyInst5     total time = 2.049437 avg time = .0000002049437
 
dummyClassNull* total time = 2.362451 avg time = .0000002362451
dummyClassNull  total time = 2.097653 avg time = .0000002097653
XdummyClassNull total time = 2.352748 avg time = .0000002352748
 
dummyInstNull   total time = 2.018773 avg time = .0000002018773
XdummyInstNull  total time = 2.056379 avg time = .0000002056379

It seems that Xecute is (not surprisingly) very close to $classmethod and usually slower than $method. But what are we talking about? The difference is only a few dozen nanoseconds per call.
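For reference, the invocation styles under comparison look like this (a minimal sketch; p1..p5 stand for the test argument strings, and Scratch.test is the class shown further below):

 set st=##class(Scratch.test).%New()
 set sc=##class(Scratch.test).dummyClass5(p1,p2,p3,p4,p5)           ; compile-time class call
 set sc=st.dummyInst5(p1,p2,p3,p4,p5)                               ; compile-time instance call
 set sc=$classmethod("Scratch.test","dummyClass5",p1,p2,p3,p4,p5)   ; dynamic class call
 set sc=$method(st,"dummyInst5",p1,p2,p3,p4,p5)                     ; dynamic instance call
 xecute "set sc=##class(Scratch.test).dummyClass5(p1,p2,p3,p4,p5)"  ; Xecute variant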

Dear colleagues,

Thank you for paying so much attention to this tiny question. Maybe I formulated it too briefly: I should have mentioned that the impact of object instantiation is beyond the scope of the question, as all the objects are instantiated once; the corresponding OREFs are stored in global-scope variables for "public" use.

Going deep inside with %SYS.MONLBL is possible, but I'm too lazy to do it, having no real performance problem. So I wrote several dummy methods, paired instance and class ones, with different numbers of formal arguments, from 0 to 10. Here is the code I managed to write.

Class Scratch.test Extends %Library.RegisteredObject [ ProcedureBlock ]
{

ClassMethod dummyClassNull() As %String
{
  q 1
}

Method dummyInstNull() As %String
{
  q 1
}

ClassMethod dummyClass5(a1, a2, a3, a4, a5) As %String
{
  q 1
}

Method dummyInst5(a1, a2, a3, a4, a5) As %String
{
  q 1
}

ClassMethod dummyClass10(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10) As %String
{
  q 1
}

Method dummyInst10(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10) As %String
{
  q 1
}

}

My testing routine was:

ClassVsInst
   s p1="пропоывшыщзшвыщшв"
   s p2="гшщыгвыовлдыовдьыовдлоыдлв"
   s p3="widuiowudoiwudoiwudoiwud"
   s p4="прпроыпворыпворыпворыпв"
   s p5="uywyiusywisywzxbabzjhagjЭ"
   s p6="пропоывшыщзшвыщшв"
   s p7="гшщыгвыовлдыовдьыовдлоыдлв"
   s p8="widuiowudoiwudoiwudoiwud"
   s p9="прпроыпворыпворыпворыпв"
   s p10="uywyiusywisywzxbabzjhagjЭ"
   d run^zmawr("s sc=##class(Scratch.test).dummyClass10(p1,p2,p3,p4,p5,p6,p7,p8,p9,p10)",1000000,"dummyClass10 "_$p($zv,"(Build"))
   s st=##class(Scratch.test).%New() d run^zmawr("s sc=st.dummyInst10(p1,p2,p3,p4,p5,p6,p7,p8,p9,p10)",1000000,"dummyInst10 "_$p($zv,"(Build"))
   s st=""
   d run^zmawr("s sc=##class(Scratch.test).dummyClass5(p1,p2,p3,p4,p5)",1000000,"dummyClass5 "_$p($zv,"(Build"))
   s st=##class(Scratch.test).%New() d run^zmawr("s sc=st.dummyInst5(p1,p2,p3,p4,p5)",1000000,"dummyInst5 "_$p($zv,"(Build"))
   s st=""
   d run^zmawr("s sc=##class(Scratch.test).dummyClassNull()",1000000,"dummyClassNull "_$p($zv,"(Build"))
   s st=##class(Scratch.test).%New() d run^zmawr("s sc=st.dummyInstNull()",1000000,"dummyInstNull "_$p($zv,"(Build"))
   q

run(what, n, comment) ; execute line 'what' 'n' times
   s n=$g(n,1)
   s comment=$g(comment,"********** "_what_" "_n_" run(s) **********")
   w comment,!
   s zzh0=$zh
   f i=1:1:n x what
   s zzdt=$zh-zzh0 w "total time = "_zzdt_" avg time = "_(zzdt/n),!
   q

The results were:

USER>d ClassVsInst^zmawr
dummyClass10 Cache for Windows (x86-64) 2017.2.2
total time = .377751 avg time = .000000377751
dummyInst10 Cache for Windows (x86-64) 2017.2.2
total time = .338336 avg time = .000000338336
dummyClass5 Cache for Windows (x86-64) 2017.2.2
total time = .335734 avg time = .000000335734
dummyInst5 Cache for Windows (x86-64) 2017.2.2
total time = .280145 avg time = .000000280145
dummyClassNull Cache for Windows (x86-64) 2017.2.2
total time = .256858 avg time = .000000256858
dummyInstNull Cache for Windows (x86-64) 2017.2.2
total time = .225813 avg time = .000000225813

So, contrary to my expectations, the oref.Method() call turned out to be quicker than its ##class(myClass).myMethod() analogue. As the effect is less than a microsecond per call, I don't see any reason for refactoring.

Our case differed from yours, as we had full access to the Caché DB and did all this development on the Caché side.

If you need to implement triggers, etc, you need full access to Caché DB.

Otherwise, if a time_stamp_of_modification field is present in the Caché table, the table can be queried on a regular basis (e.g. every 5 seconds) with an SQL SELECT statement filtered by the time of the previous query; see the sketch below. Anyway, I have no idea how to achieve your goal without some development on the Caché side.
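For illustration, the poll could look like this (a hedged sketch in COS; the external system would run the equivalent SELECT via ODBC, and the My_App.Request table, its fields, and the lastPollTime bookkeeping are all hypothetical):

 ; fetch everything modified since the previous poll
 set stmt=##class(%SQL.Statement).%New()
 set sc=stmt.%Prepare("SELECT ID FROM My_App.Request WHERE time_stamp_of_modification > ?")
 if 'sc { write "prepare failed",! quit }
 set rs=stmt.%Execute(lastPollTime)
 while rs.%Next() {
   write rs.%Get("ID"),!                    ; hand each new request over here
 }
 set lastPollTime=$zdatetime($horolog,3)    ; remember the time of this poll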

Hi Mark,

Several years ago we faced a similar problem: an external system needed to pull new requests from our Caché DB and to push the responses back. We maintained a transition table in Caché where we placed new requests; the external system polled the table every N seconds, fetching the requests and placing the responses back. Communication was implemented via ODBC.

You can do something like this, filling the transition table on the remote Caché side using triggers associated with the "main" table; a sketch follows.
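A hedged sketch of such a trigger (the class, its Payload property, and the My_App.Transition table are all hypothetical):

Class My.App.Main Extends %Persistent
{

Property Payload As %String;

/// After each SQL INSERT into the main table, copy the new row's ID
/// into the transition table that the external system polls
Trigger FillTransition [ Event = INSERT, Time = AFTER ]
{
  new id
  set id = {ID}
  &sql(INSERT INTO My_App.Transition (RequestId) VALUES (:id))
}

}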

If you prefer to code the global size calculation yourself rather than amend ^%GSIZE, a feasible option is to call

set bSize=$$AllocatedSize^%GSIZE(global)

which returns the size in bytes of a global mapped to the current namespace. It recognizes which database the global is mapped from, so you don't need to do that yourself. The only thing you need is a list of the namespace's globals, which can be fetched in several ways, e.g. using $Order on the ^$GLOBAL SSVN (see the sketch after the pros and cons below). It can be used on a per-database basis as well. Pros of this approach:
- speed, as it neither runs query nor instantiates %SYS.GlobalQuery objects;
- AFAIR, there was an error in global size calculation with %SYS.GlobalQuery::Size() query in old Caché versions, up to 2015.1;
- starting from 2015.1, it can be used with subglobals.

Cons:
- this $$-function is not documented;
- not sure if it existed in 2010.1.
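If you decide to take this road, a minimal sketch of the summation could look like this (assumption-laden, as the function is undocumented; in particular, whether it expects the global name with or without the leading caret may depend on the version):

sizeAll() ; sum the allocated sizes of all globals in the current namespace
 new g,total
 set total=0,g=""
 for {
   set g=$order(^$GLOBAL(g)) quit:g=""
   ; ^$GLOBAL returns names with the leading ^;
   ; use $e(g,2,*) instead if your version expects a bare name
   set total=total+$$AllocatedSize^%GSIZE(g)
 }
 quit total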

Agree, it's trivial in most cases, except the one where there is a series of commands depending on the previous ones, e.g. (not from production code))):

set rc=$zf(-1,"[ -f /etc/environment ] && . /etc/environment && export TZ")

$zf(-1) allows executing such a series as a whole, while with $zf(-100) one needs to split it into parts, moving the checkup logic into COS code (see the sketch below). That is trivial as well, but it ruins the idea of a (semi-)formal substitution of $zf(-1) calls with $zf(-100) ones.
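To make the point concrete, here is a hedged sketch with a simpler, hypothetical chain. With $zf(-100) each call runs a single program, so the && logic has to move into COS (and something like sourcing /etc/environment, as above, cannot leave the shell at all):

 ; $zf(-1): the shell evaluates the whole chain
 set rc=$zf(-1,"mkdir /tmp/x && cp a.txt /tmp/x")
 ; $zf(-100): one program per call, the dependency checked in COS
 set rc=$zf(-100,"","mkdir","/tmp/x")
 if rc=0 { set rc=$zf(-100,"","cp","a.txt","/tmp/x") }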

It seems that the docs are ambiguous here, as it's not clear when one can use "" as a <null> value: in the comma-separated options list only, or in the options array as well.

As to possible ways to exploit $zf(-1), there is some clue in ISC's announcement: it can be compromised if its arguments come from user input. Similar vulnerabilities are usually associated with dynamic SQL, and not only in Caché. Other (Caché-specific) examples: Xecute, $Xecute, argument indirection. This stuff is well known; is it a secret for anybody?
It seems that if we never use such a coding style, we are safe enough. As to our company's code base, we rarely use $zf(-1), and all of its usage is encapsulated in a couple of class methods.

We'll follow ISC's security recommendations, as we always do, while I don't feel comfortable doing something when I don't understand the reasons for it. "If it works, don't fix it", as some wise man said. Does it need any comment?

Using a comma-delimited list of arguments works fine, even with a null arg

A null arg is not the same as an empty-string arg (""), as usual in COS. Therefore the setting of

set options(1)=""

made your first argument an empty string, and the whole command behaved as

dir "" e:\nbupg\webserver\

I didn't check your *nix version; I just noticed that "NUL" should be spelled "/dev/null" there.

P.S. May I ask you in turn :):

Why did you undertake this task of changing $zf(-1) to $zf(-100) at all? Do you clearly understand the kind of threat you are trying to eliminate?

The solution I've found is simple rather than smart: start the dejournalizer as a JOB with an 'answer file' specified as its principal input device. The code prototype looks like this:

 set fin=$zu(12)_"Temp\fin.txt"   ; answer file; $zu(12) returns the mgr directory
 set fout=$zu(12)_"Temp\fout.txt" ; captures the dialog output
 open fin:("NW"):1 if '$t {write "not opened!",! quit}
 use fin write "N",!,"N",!,"Y",! close fin ; pre-record the answers
 job jrnrest^ztestFF("20180614.006"):(::fin:fout):1 if '$t { w "not started!" q} ; fin/fout become the job's principal input/output devices

Its execution resulted in a "fout.txt" file like this:

20180614.006 to 20180614.006; c:\intersystems\cache\mgr\user\ => c:\intersystems\cache\mgr\test\

Do you want to rename your journal filter? o
Do you want to delete your journal filter? o


c:\intersystems\cache\mgr\journal\20180614.006
   8.88%  14.29%  15.26%  16.79%  18.08%  19.31%  20.35%  21.32%  ....
***Journal file finished at 16:39:36
Do you want to rename your journal filter? es
Journal filter ZJRNFILT renamed to XJRNFILT


[journal operation completed]

"o" and "es" were provided by the auto-completion of "N" and "Y" answers.

Cons of this approach: ISC can change the ^JRNRESTO dialog in some future version.
Pros: no need to reverse-engineer the ^JRNRESTO internals to derive a non-interactive journal restore utility.
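For completeness: jrnrest^ztestFF itself is not shown above. A minimal hypothetical sketch of it might be (the restore parameters, the journal file 'jrn', the database redirection, and the ZJRNFILT filter, would be configured here; those details are elided):

jrnrest(jrn) ; jobbed entry: run the interactive restore "non-interactively"
 ; the JOB command has already attached fin.txt as the principal
 ; input device, so ^JRNRESTO reads the pre-recorded answers
 ; ("N","N","Y" for the journal filter prompts) from it
 do ^JRNRESTO
 quit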

Hope my writing will be useful for somebody besides myself. Happy coding!

WRC answered that:
1) The reason for the error with "CSP.StudioTemplateMgr_Templates" is that the "CSP.StudioTemplateMgr" class is owned (has Owner = "...") by %Developer, so a user with %Developer had no issue executing the stored procedure "Templates". Prior to 2016.2 the class did not have an Owner specified.

and

2) This change was intentionally added for security reasons when Atelier support was added to Caché.

(1) reassured me that I had chosen the right way to solve the issue.
I didn't understand (2), but...

There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.

The developer insists that the attempt to compile the class happened before its first call. Here is a fragment of his internal log. Alas, I have no idea how to reproduce the error, as the so-called Updater is not a new project and runs without any problem at many production sites.

***
[06.05.2018 15:54:11.897] Starting importing classes
[06.05.2018 15:54:11.897] &runUpdateCommon.int,Update.Import
Error while importing classes: ERROR #5123: Could not find an entry point 'zguiUpdateFileAction' in routine 'Update.Import.1'
  > ERROR #5030: Error compiling class Update.Import
ERROR #5123: Could not find an entry point 'zguiUpdateFileAction' in routine 'Update.Import.1'
   > ERROR #5030: Error compiling class Update.Import
[06.05.2018 15:54:12.382] Classes sucessfully imported
[06.05.2018 15:54:12.382] Starting importing globals
[06.05.2018 15:54:12.382] There is no globalList
[06.05.2018 15:54:12.382] Starting processing afterAction
[06.05.2018 15:54:12.382] d ##class(Update.Import).update(0)
Error while processing afterAction: <METHOD DOES NOT EXIST>processAfterAction+30^updaterV201712261