A possible workaround could be the class below. In short, you work with your json property as intended; just before saving the object, you serialize the json property into a stream, and after opening an instance, you restore the json property from the stream - that's all. The drawback: no SQL over the json property...

Class DC.Dyn Extends %Persistent
{
Property json As %DynamicObject [ Transient ];
Property jstr As %GlobalCharacterStream [ Internal, Private ];

ClassMethod MyTest(kill = 0)
{
   if kill do ..%KillExtent(1,1)

   set obj=..%New()
   set obj.json.short="A short test text"
   set obj.json.maxstr=$tr($j("",$$$MaxStringLength)," ","X")
   do obj.json.%Set("hugedata",..stream(obj),"stream")  // attach a stream-valued JSON property

   write "Status : ",obj.%Save(),!
   set id=obj.%Id()
   write "ID : ",id,!
   kill (id)  // kill all local variables except id

   set obj=..%OpenId(id)
   write "short : ",obj.json.short,!
   write "maxstr : ",$e(obj.json.maxstr,1,20),"... Size: ",$length(obj.json.maxstr),!
   set stream=obj.json.%Get("hugedata",,"stream")
   write "hugedata: ",stream.Read(20),"... Size: ",stream.Size,!
}

ClassMethod stream(obj)
{
   // build a test stream well beyond the maximum string size
   set stream=##class(%Stream.TmpCharacter).%New()
   do stream.Write(obj.json.short)
   do stream.Write(obj.json.maxstr)
   do stream.Write(obj.json.maxstr)
   quit stream
}

Method %OnOpen() As %Status [ Private, ServerOnly = 1 ]
{
   // after opening: restore the dynamic object from the saved stream
   if ..jstr {
      do ..jstr.Rewind()
      set ..json=##class(%DynamicAbstractObject).%FromJSON(..jstr)
   }
   Quit $$$OK
}

Method %OnAddToSaveSet(depth As %Integer = 3, insert As %Integer = 0, callcount As %Integer = 0) As %Status [ Private, ServerOnly = 1 ]
{
   // before saving: serialize the dynamic object into the stream
   do ..jstr.Clear(), ..json.%ToJSON(..jstr)
   Quit $$$OK
}
}

Some testing...

IDEV:USER>d ##class(DC.Dyn).MyTest(1)
Status  : 1
ID      : 1
short   : A short test text
maxstr  : XXXXXXXXXXXXXXXXXXXX... Size: 3641144
hugedata: A short test textXXX... Size: 7282305

If your code uses obj.%Reload() then %OnReload() and %OnOpen() should contain the same code.
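
For example, a minimal sketch whose body simply mirrors %OnOpen() above:

Method %OnReload() As %Status [ Private, ServerOnly = 1 ]
{
   // restore the dynamic object from the saved stream, same as %OnOpen()
   if ..jstr {
      do ..jstr.Rewind()
      set ..json=##class(%DynamicAbstractObject).%FromJSON(..jstr)
   }
   Quit $$$OK
}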

Your solution is just perfect. And fast.

But yes, you can avoid string manipulations... This one, for example, uses math only, though it's neither short nor elegant-looking:

 set dt=$h write $zd(dt,8)*100+($p(dt,",",2)\3600)*100+($p(dt,",",2)#3600\60)*100+($p(dt,",",2)#60)

but gives the same result as your short and nice solution...

 set dt=$h write $zd(dt,8)*100+($p(dt,",",2)\3600)*100+($p(dt,",",2)#3600\60)*100+($p(dt,",",2)#60),!,$tr($zdt(dt,8)," :")

On the other hand, you can install new brakes on your car, as suggested by others... ;-))

Just compare those codes with yours:

set h=$h, t=$zh for i=1:1:1E6 { set x=$tr($system.SQL.TOCHAR($h,"YYYY^MM^DD^HH24^MI^SS"),"^") } write $zh-t,!

set h=$h, t=$zh for i=1:1:1E6 { set x=$tr($zdt(h,8)," :") } write $zh-t,!

The choice is yours...

Just to put things in the right perspective: those "one letter commands" and "a lot of them in the same line" were neither tempting nor addictive, they were simply a necessity!

At the time of the birth of MUMPS (the core of Cache/IRIS/etc.), more than 50 years ago in the second half of the 1960s, memory (which was real core memory at the time) was scarce and expensive and was measured in units of kilobytes! By contrast, today's servers have the same numbers of RAM, just in gigabytes: a factor of one million!

As a consequence of the memory shortage, and because the MUMPS of that time was interpreted (i.e. you loaded the source code into memory), one had to use each and every possibility to save memory. One of those possibilities was the ability of the language to shorten each command to one letter and to put as many commands as possible on one line (thereby saving line-ending bytes).

The memory-saving tools of that (ancient) time were argumentless IFs and ELSEs, short variable, global, and routine names, commands with postconditions, and sophisticated programming.

Last but not least, if one aims to "modernize" those old applications, it should be kept in mind that there will be many unexpected pitfalls, especially if one is not so familiar with the old-fashioned style and methods.

Sample1: on old printers, the line with "Total..." will be printed "bold-alike"

 write "last item",?15,$j($fn(val,",",2),10),!
 write ?15,"----------",! do  do  do
 . write $c(13),"Total",?15,$j($fn(sum,",",2),10)
 write !!!,"Due date for payment ....",!

Sample2: converting from:

 ; normal flow
 do
 . ; nested
 . ; commands
 ; normal flow

into:

 ; normal flow
 if 1 {
   ; nested
   ; commands
 }
 ; normal flow

will be in most cases OK, except if the nested part uses the current value of $STACK:
this is now one less than in the case of the argumentless DO!
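
A minimal demo of the difference, using $STACK(-1) (the current context level); the absolute values depend on how deep in the call stack this runs:

 write $stack(-1),!    ; e.g. 1
 do
 . write $stack(-1),!  ; one level deeper: 2
 if 1 {
   write $stack(-1),!  ; unchanged: 1
 }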

I know exactly nothing about HealthShare... so I can just suggest two ways to remove the unwanted characters from a string:

use $zstrip()

set inpData="some wild sequence of characters"
set cleanData = $zstrip(inpData, "*C") // this removes all control characters (0x00-0x1f, 0x7f-0x9f)

the other way is to define a set of valid characters and remove all others

set inpData="some wild sequence of characters"
set validChars = "012...89ABC..Zabc..z..."
set badChars = $translate(inpData, validChars) // remove all valid chars from the input; what's left are the bad chars
set cleanData = $translate(inpData, badChars) // remove all the bad chars

or just the short version

set cleanData = $translate(inpData, $translate(inpData, validChars))
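
A quick demo with made-up data (the valid character set here is just an example):

set inpData = "abc"_$c(7,9)_"123"_$c(0)_"XYZ"
set validChars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
write $translate(inpData, $translate(inpData, validChars)) // prints abc123XYZ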

Usually, I solve such problems by writing my own function/method/expression, depending on the requirement at hand (it's faster than searching for some fancy SQL or other function).

ClassMethod TimeZoneToHorolog(tz)
{
   // local date/time as a $H value (the offset suffix is ignored by $ZDTH, see below)
   set t=$zdth(tz,3,5)
   // convert to seconds, apply the offset, and return in $H format
   set t=t*86400+$p(t,",",2)+($e(tz,20,22)*60+$e(tz,23,24)*60)
   quit t\86400_","_(t#86400)
}

Assuming tz contains a timezone-formatted string like:

2021-11-04T11:10:00+0300
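
For example (DC.Utils is just a placeholder for whatever class holds the method):

IDEV:USER>write ##class(DC.Utils).TimeZoneToHorolog("2021-11-04T11:10:00+0300")
66052,51000

That is the local 40200 seconds (11:10:00) plus the 10800-second offset, i.e. 14:10:00.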

According to the documentation, the offset suffix of the tformat value 5 is ignored:

"Specify time in the form "hh:mm:ss+/-hh:mm" (24-hour clock). The time is specified as local time. The following optional suffix may be supplied, but is ignored: a plus (+) or minus (–) suffix followed by the offset of local time from Coordinated Universal Time (UTC). A minus sign (-hh:mm) indicates that the local time is earlier (westward) of the Greenwich meridian by the returned offset number of hours and minutes. A plus sign (+hh:mm) indicates that the local time is later (eastward) of the Greenwich meridian by the returned offset number of hours and minutes."

The same goes for the tformat values 6, 7 and 8:

write $zdth("2021-11-04T11:10:00+0100",3,5)  --> 66052,40200
write $zdth("2021-11-04T11:10:00+0200",3,5)  --> 66052,40200
write $zdth("2021-11-04T11:10:00-0100",3,5)  --> 66052,40200

OK, I'll start with the second question. I'm not aware of a function that shows whether a specific global is in a buffer or not, but there is a routine which shows which globals are using the most buffers:

znspace "%SYS"
do ^GLOBUFF

For the first question: if a global is used continuously, then it will always be in a buffer. That's the simple answer. The reality depends on many other factors: the size of the global, the size of the buffer pool, how many other globals are in use, how often a global is used, etc.

To keep a few specific globals always in a buffer, there is a simple trick (assuming your Cache/IRIS installation uses the default setup and you have an unused block size):

1) Go to System Administration --> Configuration --> Additional Settings --> Startup and edit the DBSizesAllowed setting by checking the 16K or the 32K checkbox.

2) Create a new database with the newly enabled block size. This database will hold those few (always needed) globals.

3) Go to System Administration --> Configuration --> System Configuration --> Memory and Startup and allocate (plenty of) memory for the newly enabled buffer size. Please note: after this change, you have to RESTART your system!

4) Copy the global(s) in question into the newly created database:

  merge ^|"^^c:\path_to_new_database\"|GlobalName = ^|"^^c:\path_to_old_database\"|GlobalName

5) Create a Global mapping for the globals in question to the new location, for example with the sketch below.
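
This can be done in the Management Portal or scripted; a minimal sketch using the Config.MapGlobals class (run in %SYS), where USER and NEWDB stand for your namespace and the new database:

 znspace "%SYS"
 set props("Database")="NEWDB"
 set sc=##class(Config.MapGlobals).Create("USER","GlobalName",.props)
 if 'sc write $system.Status.GetErrorText(sc),!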

6) Start working... If everything is OK (which it should be) and you are happy, delete the old global data to free up database space:

kill ^|"^^c:\path_to_old_database\"|GlobalName

7) In a standard installation, you have allocated one buffer pool (with the standard 8KB buffer size), so all your processes are fighting to get the globals they need into that one buffer pool.

With the above configuration you have two buffer pools, one for the standard 8KB database blocks and one for the new 16KB (or whatever size you have chosen) database blocks. So you can keep the important globals in a separate buffer pool. If you can manage (this will be application dependent) to give this buffer pool the same size as the database itself, then you will have all the data (of this database) in memory all day long.