SQL is a declarative language (same as HTML, for example). It does not DO anything. All it is is a description of what you want to achieve.

In the case of InterSystems IRIS (and any other DBMS for that matter) we take an SQL statement, parse it into an AST, and generate code in another, imperative language from that AST. It is this generated code that gets executed.

In the case of InterSystems IRIS we generate the code in the ObjectScript language. For a DROP statement it would include this line:

kill ^globalD

If you want to, you can actually see the generated ObjectScript code. To do that:

Open SMP > System Administration > Configuration > SQL and Object Settings > General SQL Settings.
Enable "Cached Query - Save Source" — This specifies whether to save the routine and INT code that Caché generates when you execute any Caché SQL except for embedded SQL.

After that, purge the query from the portal if it already exists, and execute the query again in the SMP.
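Alternatively, cached queries for the current namespace can be purged from a terminal (this uses the %SYSTEM.SQL API):

do $SYSTEM.SQL.Purge()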

You'll see something like this:

Cached query %sqlcq.<NAMESPACE>.cls<NUMBER>

Go to Studio and open the %sqlcq.<NAMESPACE>.cls<NUMBER> class - it contains the generated ObjectScript code that executes for your SQL query.

Great points, Sean.

However, there are, I think, areas where macros are needed:

  • Compile-time execution (if it's possible to resolve stuff at compile time, do it)
  • Constants/Enums
  • Trivial wrappers over system functions (e.g. I use $$$ts instead of $zdt($h, 3, 1, 3) because it's everywhere)

But I definitely agree that an overabundance of custom macros makes code unreadable.
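To illustrate, here is a minimal sketch of all three cases (demo.Macros is a hypothetical include file name):

// demo.Macros.inc - a hypothetical include file
// a constant
#define MAXRETRIES 5
// a trivial wrapper: current timestamp in ODBC format
#define ts $zdt($h, 3, 1, 3)
// compile-time execution: resolved once, when the code is compiled
#define CompiledAt ##Expression(""""_$zdt($h, 3)_"""")

In a class you would then put Include demo.Macros before the class definition and write $$$MAXRETRIES, $$$ts or $$$CompiledAt in method code.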

Let's say you created a class, Point:

Class try.Point Extends %Persistent [ DdlAllowed ]
{

Property X;

Property Y;

}

You can also create it via DDL:

CREATE TABLE try.Point ...

It would create the same class.
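A minimal sketch of such a statement (the column types are an assumption; untyped properties project as VARCHAR by default):

CREATE TABLE try.Point (
    X VARCHAR(50),
    Y VARCHAR(50)
)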

After compilation, our new class has an autogenerated Storage definition, which is a mapping of global data to columns and properties:

Storage Default
{
<Data name="PointDefaultData">
<Value name="1">
<Value>%%CLASSNAME</Value>
</Value>
<Value name="2">
<Value>X</Value>
</Value>
<Value name="3">
<Value>Y</Value>
</Value>
</Data>
<DataLocation>^try.PointD</DataLocation>
<DefaultData>PointDefaultData</DefaultData>
<IdLocation>^try.PointD</IdLocation>
<IndexLocation>^try.PointI</IndexLocation>
<StreamLocation>^try.PointS</StreamLocation>
<Type>%Library.CacheStorage</Type>
}

What is going on here?

From the bottom up (the important elements for us are DefaultData and DataLocation; ignore the rest):

  • Type - type of the generated storage; in our case, the default storage for persistent objects
  • StreamLocation - name of the global where we store streams
  • IndexLocation - name of the global for indices
  • IdLocation - name of the global where we store the auto-increment ID counter
  • DefaultData - name of the storage XML element which maps global values to columns/properties
  • DataLocation - name of the global where the data is stored

Now, our default data is PointDefaultData, so let's look at it. Essentially, it says that a global node has this structure:

  • 1 - %%CLASSNAME
  • 2 - X
  • 3 - Y

So we might expect our global to look like this:

^try.PointD(id) = %%CLASSNAME, X, Y

But if we output our global, it is empty, because we haven't added any data yet:

zw ^try.PointD

Let's add one object:

set p = ##class(try.Point).%New()
set p.X = 1
set p.Y = 2
write p.%Save()

And here's our global:

zw ^try.PointD
^try.PointD=1
^try.PointD(1)=$lb("",1,2)

As you can see, our expected structure (%%CLASSNAME, X, Y) is stored as $lb("",1,2), where 1 and 2 correspond to the X and Y properties of our object (%%CLASSNAME is a system property; ignore it).

We can also add a row via SQL:

INSERT INTO try.Point (X, Y) VALUES (3,4)

Now our global looks like this:

zw ^try.PointD
^try.PointD=2
^try.PointD(1)=$lb("",1,2)
^try.PointD(2)=$lb("",3,4)

So the data we add via objects or SQL is stored in globals according to the storage definition (you can modify the storage definition manually by swapping X and Y in PointDefaultData - check what happens to new data!).

Now, what happens when we want to execute an SQL query?

SELECT * FROM try.Point

It is translated into ObjectScript code that iterates over the ^try.PointD global and populates columns based on the storage definition - the PointDefaultData part of it, to be precise.
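Conceptually, the generated code does something like this (a simplified sketch, not the actual cached query code):

set id = ""
for {
    set id = $order(^try.PointD(id))
    quit:id=""
    set row = ^try.PointD(id)
    write "ID: ", id, ", X: ", $listget(row, 2), ", Y: ", $listget(row, 3), !
}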

Now for modifications. Let's delete all the data from the table:

DELETE FROM try.Point

And let's look at our global after that:

zw ^try.PointD
^try.PointD=2

Note that only the ID counter is left, so a new object/row would get ID=3. Also, our class and table continue to exist.

But what happens on:

DROP TABLE try.Point

It destroys the table and the class, and deletes the global:

zw ^try.PointD

Stores data on disk? Meaning that if we create these persistent classes, the data is stored twice? Once in the global node and once in some other format, as defined (or not defined) by the class?

The data is stored in globals and only in globals. Globals themselves are physically written to disk.

But just to reiterate, how does the DDL interpret a read definition into a write definition? The data stored in the global node and the definition don't line up exactly (in our case, almost not at all).

Please expand your question. Do you mean how a SELECT, DROP, or any other SQL statement against a table interacts with globals?

Globals store data. That's the only way data can be stored inside InterSystems products.

Tables and classes are projections of this data. In the classes you created, there's a Storage element (at the end).

It contains mappings of class properties (and table columns) to global nodes.

So if you delete this representation, it does not delete the underlying global data, because a class is essentially just a description of how to read from and write to the global.
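For example (a sketch using the $SYSTEM.OBJ API), deleting the class definition leaves the data global untouched:

// removes the try.Point class definition, but not ^try.PointD
do $SYSTEM.OBJ.Delete("try.Point")
zw ^try.PointD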

Added a dummy Business Service (BS) with OnInit to the production:

Class test.InitService Extends Ens.BusinessService
{

Parameter ADAPTER = "Ens.InboundAdapter";

Property Adapter As Ens.InboundAdapter;

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status [ CodeMode = expression ]
{
$$$OK
}

/// This user callback method is called via initConfig() from %OnNew() or in the case of SOAP Services from OnPreSOAP()
Method OnInit() As %Status
{
    $$$TRACE("INIT")
    set sc = ..SendRequestAsync(...)
    quit sc
}

}

What's the use case? I think it's better to keep independent tasks separate.

Anyway:

1. Create your own task class by extending %SYS.Task.Definition.

2. Program your task:

  • Specify the TaskName parameter to provide a readable task name
  • Add class properties if needed - they become the task's arguments
  • Override the OnTask method to program your logic

3. Create a new task, choosing your task class (by TaskName).
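A minimal sketch of steps 1 and 2 (all names here are hypothetical):

/// A custom task that records a timestamp on every run
Class demo.MyTask Extends %SYS.Task.Definition
{

/// Readable task name shown when you create a new task
Parameter TaskName = "Demo Task";

/// Class properties become task arguments
Property Directory As %String;

/// The task logic itself
Method OnTask() As %Status
{
    // hypothetical logic: record each run in a global
    set ^demo.TaskRuns($increment(^demo.TaskRuns)) = $zdt($h, 3)_" "_..Directory
    quit $$$OK
}

}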