1. If you replace:

set sc=gc.SetParameter(hstmt,$LB(gc.PutData(hstmt,.temp)),2)

with:

set sc = gc.PutData(hstmt, "TEXT")

or with:

set sc = gc.SetParameter(hstmt,$LB("TEXT"),2)

What do you get?

2. Also, in your original 4-parameter generated code the stream is bound via:

 s err=$zf(-5,%SQLGateway,62,QHandle,4,1,1,12,500,0,500)

which is equivalent to:

set sc=gc.BindParameter(hstmt, 4,1,1,12,500,0,500)

In your new code you bind the stream with:

Set sc=gc.BindParameter(hstmt,2,1,1,12,0,0,-2)

Is it from some other generated code?

Just one point: it's Angular 2, not AngularJS. I think AngularJS could be delivered by CSP, but I doubt that would be the best way to deliver Angular 2.

What's the difference between AngularJS and Angular2 in regards to CSP?

AngularJS is the first version; Angular 2-5 are collectively referred to as Angular and are somewhat compatible among themselves. @Sergey.Sarkisyan?

A connection is not a license. From the docs:

  • The Caché licensing system attempts to identify distinct users and to allocate one license unit per user. A user is identified by a license user ID, which can be an IP address, a username, a CSP session ID, or some other identifier depending on how the user connects.
  • Multiple processes started by or for a single user share a license unit up to the maximum number of processes per user. If the number of processes exceeds this maximum, a transition occurs and Caché begins allocating one license unit per process for that user ID. The system assumes that if the number of processes associated with a user ID exceeds the maximum, multiple users are accessing Caché through an intermediary (for example, a firewall system), so additional license units are required. (Processes started by the Job command are counted under the user ID invoking the command.)
  • Even if the number of processes under the user ID drops back under the maximum, Caché continues to allocate one license unit per process for that user ID. Only when all connections by the user ID are closed and there are no more processes under the user ID does license allocation reset to one unit for that user ID.

The maximum number of processes (connections) per user is 12.
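The rule quoted above can be sketched as a small model. This is only an illustration of the documented accounting, not an InterSystems API; note that the "sticky" behavior (per-process billing continues until all of the user's connections close) is mentioned in a comment but not modeled.

```python
# Model of the documented license accounting: one unit per user ID while the
# user stays at or below the per-user process limit, then one unit per process.
# (The docs add that once per-process billing starts, it persists until all of
# that user's connections are closed - not modeled here.)
MAX_PROCESSES_PER_USER = 12

def license_units(process_count):
    """License units consumed by a single user ID with `process_count` processes."""
    if process_count == 0:
        return 0
    if process_count <= MAX_PROCESSES_PER_USER:
        return 1          # all processes share one unit
    return process_count  # transition: one unit per process
```

So a user with 5 or even 12 processes costs one unit, but a 13th process makes that user ID cost 13 units.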

Will this affect our own internal tasks?

It shouldn't, as that would be one connection and one license slot. Still, I'd recommend contacting the WRC to clarify the issue completely.

The CSP part (REST for sure, and maybe SOAP) could be managed with these two tools:

  • Session events - provide callbacks for session start/end and request execution.
  • The SetConnectionLimit method can be used to set the maximum number of connections per user - I'm not sure whether this works for xDBC, but it certainly works for CSP.

Additionally, you can use the ^ZSTART routine to check whether a user can log in.

Execute queries and show tables

State is not required (and is in fact harmful) in this case. Here's why.

Users do not care about thousands of results.

That doesn't happen. A user most often cares about one specific result, or a small group of them: a dozen or so, rarely up to a hundred. To be extremely generous, let's say a user cares about at most 1000 individual records at a time.

So, how do we work with that assumption?

There are several things to do.

  1. Add TOP 1000 to all of your queries. If a query actually returns 1000 results, show "1000+" to the user; if they explicitly ask for more, re-execute the query (with a new hidden condition to skip the first thousand results, if possible) and show the next thousand. But before that, point them to a docs page explaining how to refine search results. It's an extremely rare use case where a user needs more than a thousand results - it should not happen, and if it does, it's definitely something architects/product owners need to address.
  2. Why would the user scroll through thousands of results? Because the results of the query are irrelevant to them (low relevance). To solve that problem, the user should be able to express easily, but precisely, what they need from the system. This is a very complex problem, but some of the features that can be added are:
    • Automatic filtering by all (most) fields - if a new column appears, the user should be able to sort/filter by it automatically, without a developer spending time adding the new field to the UI.
    • Filtering and sorting comparisons should include at least (depending on datatype): equals, greater than, less than, between, contains ...
    • Filters could be combined via: AND, OR, NOT
    • High-relevance fields should be easily accessible to the user. For example, in one of our PoCs we saw a problem with the existing search: users overloaded it by performing full-text searches en masse. Examining the logs, we found that most of the search keywords were of the form abc/xyz, where abc and xyz are integers. That abc/xyz turned out to be a value from one specific field (let's say ItemId), but the full-text search box was the biggest search field and was placed at the top of the search page. Our solution was a redesign: placing the ItemId search field at the top.
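Item 1 above can be sketched as follows. This is a hypothetical illustration: the Incident table, the ID column, and the skip-by-ID strategy are invented names standing in for whatever the real schema provides.

```python
# Sketch: cap every query at 1000 rows and, on "show more", re-execute with a
# hidden skip condition instead of keeping server-side state.
PAGE_SIZE = 1000

def build_query(where, last_seen_id=None):
    """Ask for one extra row so the UI knows whether to display "1000+"."""
    if last_seen_id is not None:
        # Hidden condition added on re-execution to skip already-shown rows.
        where = f"({where}) AND ID > {last_seen_id}"
    return (f"SELECT TOP {PAGE_SIZE + 1} * FROM Incident "
            f"WHERE {where} ORDER BY ID")

def has_more(rows):
    """True when the extra row came back, i.e. the "1000+" case."""
    return len(rows) > PAGE_SIZE
```

Because the skip condition rides on an indexed, monotonically increasing column, each "show more" is an ordinary stateless query.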

One other case where a user might actually be interested in the exact number of results is when they need only that number. For example, a user might want the number of incidents per month and calculate it themselves by filtering incidents by date and reading the result count. This requirement could be addressed in several ways:

  • The better solution is essentially reports, possibly with some visual design, or generated on request. That is the preferable option.
  • Add a "Count results" button alongside the "Search" button. When the user presses it, the SQL query contains only a COUNT(1) select field, and only that number is shown.
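The "Count results" button amounts to reusing the same WHERE clause with a different select list. A minimal sketch (the Incident table name is invented):

```python
# Reuse the search filter but select only COUNT(1), so no rows are
# transferred to the client - just one number.
def build_count_query(where):
    return f"SELECT COUNT(1) FROM Incident WHERE {where}"
```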

tl;dr: users are not interested in thousands of results, so we need to build systems where they get only the results they really need.

The call to

$$setStream^%apiGTW(QHandle,RawData,0,1)

is roughly equivalent to:

 #dim gc As %SQLGatewayConnection
 #dim RawData As %Stream.Object
 //set sc = gc.ParamData(hstmt, .index) // not sure if it's required
 while 'RawData.AtEnd {
     set string = RawData.Read(16000)
     quit:string=""
     set sc = gc.PutData(hstmt, string) // or PutDataW on Unicode systems
 }

Examples are available in the EnsLib.SQL.Common class:

 Method putLOBStream(pHS As %String, pStream As %Stream.Object, tBin As %Boolean) As %Status
{
  Set tSC=..%Connection.ParamData(pHS,.tInd) Set:$$$SQLCODENeedData=..%Connection.sqlcode tSC=$$$OK  Quit:$$$ISERR(tSC) tSC
  Set temp=pStream.Read(16000)
  If (temp = "") {
    Set err=..%Connection.PutData(pHS,"") 
  } Else {
    While ""'=temp {
      If ..IsUnicodeDLL&&'tBin { Set tSC=..%Connection.PutDataW(pHS,temp) }
      Else { Set tSC=..%Connection.PutData(pHS,temp) }
      Quit:$$$ISERR(tSC)
      Set temp=pStream.Read(16000)
    }
  }
  Quit tSC
}

All brokers effectively have Parameter UseSession = 1;
But the default value in %CSP.REST is Parameter UseSession As BOOLEAN = 0;
So the developer has to remember to override it every time (or subclass).

I recommend having an abstract broker that handles technical concerns such as CORS, UseSession, encoding, and JSON transformation. All other brokers must extend it and contain only business logic, not technical plumbing. More on that.
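The abstract-broker idea can be sketched generically as follows. This is a language-neutral illustration in Python with invented names, not %CSP.REST code: the base class owns the technical concerns, the subclass owns only business logic.

```python
import json

class AbstractBroker:
    """Base broker: technical concerns (CORS, JSON encoding) live here."""

    def respond(self, payload):
        headers = {
            "Access-Control-Allow-Origin": "*",  # CORS handled once, centrally
            "Content-Type": "application/json; charset=utf-8",
        }
        return headers, json.dumps(payload)

class PersonBroker(AbstractBroker):
    """Concrete broker: business logic only, no technical plumbing."""

    def get_person(self, person_id):
        person = {"id": person_id, "name": "John"}  # stand-in for a DB read
        return self.respond(person)
```

Any new broker subclasses the abstract one and automatically gets consistent CORS and encoding behavior.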

Use the same GroupById

If two (or more) web applications share the same GroupById value, then a session opened in one application is valid in all other applications with that GroupById value. This way the user needs to log in only once (or not at all, if domain/SSO authentication is configured). It's also documented (but hard to find).

But if it's a third-party app, then there is no CSP/ZEN app - the use case I have in mind is a third-party web developer creating a complex shop system that needs to communicate with Caché.

A CSP app can contain only HTML/JS/CSS. So it could be an AngularJS web application that is also hosted by Caché (as a Caché CSP web application).


I have no idea or interest in what technology they are using, and it may be that their programming language does not easily support cookies, so the CSPCHD (the session cookie) does not get passed.

If you host a web application via Caché (Ensemble, HealthShare, InterSystems IRIS), then your web application is authorization-aware automatically. The browser sends the relevant cookies/headers with each request, so the developer doesn't need to think about it.

I am thinking that in this case the authentication needs to be passed with each REST call - not an issue
(or use OAuth, which I know little about)

You can do it like this:

  1. The client sends login/pass to some /login REST endpoint and receives a token
  2. After that, the client sends only the token with every request
  3. The token should be refreshed periodically

This way you pass login/pass only once instead of with every call. But I'd still recommend the Caché security system.
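The three steps above can be sketched with only the standard library. The /login endpoint, the JSON field names, and the Bearer header are assumptions about a hypothetical REST API, not a specific Caché feature.

```python
import json
import urllib.request

BASE = "https://example.com/api"  # hypothetical service

def build_login_request(user, password):
    """Step 1: send login/pass once to a /login endpoint to obtain a token."""
    return urllib.request.Request(
        BASE + "/login",
        data=json.dumps({"user": user, "password": password}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def build_authorized_request(path, token):
    """Step 2: every later call carries only the token, never the password."""
    return urllib.request.Request(
        BASE + path,
        headers={"Authorization": "Bearer " + token},
    )
```

Step 3 (refresh) would simply repeat the login call, or hit a /refresh endpoint, before the token expires.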

Security (link under construction)
Using JSON - how do you implement a logon?

What are the licensing issues?

For password-authenticated web applications this is possible by following these steps:

  1. All brokers effectively have Parameter UseSession = 1;
  2. The REST web application and the client web application allow only authenticated (e.g. password) access.
  3. The REST web application and the client web application have a reasonable session timeout (e.g. 900 or 3600 seconds).
  4. The REST web application and the client web application have the same GroupById value.
  5. The REST web application and the client web application have the same cookie path.

If all these conditions are met, the user consumes only one license slot per session and performs only one login.

How do you prevent users hacking restful calls that they have no access to?

Authentication as a start, SQL security for basic data access checks, and application-level checks for the most specific cases.