Hi Javier,

COS does not have a generics implementation, mainly because it is a loosely/duck typed language.

You can however write generic code without needing Generics.

Make the property's type a common base class of your Info classes; this can be %RegisteredObject...

Class Response Extends %RegisteredObject
{
    Property Code As %String;
    Property Info As %RegisteredObject;
}


You can now assign any valid object to that property at run time.

You won't be able to assign a plain string to this property, so for string values create a small class with a single %String property and assign an instance of that instead.

Try that and if you get stuck with the JSON serialisation then post back the code that is not working.

Sean.

Great answer Rubens.

The class documentation makes no mention of the second parameter and I was not aware that it existed.

Fortunately I've only had to deal with documents under the large string size to date, and did wonder how I might need to work around that limitation at some point.

Question: the length the XML writer uses is set to 12000. Would this solution work for 12001, or does the size have to be divisible by 3? I'm wondering because every 3 bytes are represented by 4 characters in base64.
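
To make the concern concrete, here is a quick sketch in Node/TypeScript; the buffer sizes are just examples to show where the padding lands, not the actual writer code:

import { randomBytes } from 'crypto';

const data = randomBytes(12001);

// 12000 is a multiple of 3, so a 12000 byte chunk encodes with no '=' padding
// and consecutive chunks concatenate into one continuous base64 stream
console.log(data.slice(0, 12000).toString('base64').endsWith('='));  // false

// 12001 is not, so the chunk ends with padding, which would then sit in the
// middle of the combined output rather than only at the very end
console.log(data.toString('base64').endsWith('='));                  // true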

Sean.

100,000 per second is a synthetic benchmark; a for loop in a terminal window will only just do 100,000 global sets a second, and that is without any data validation, data loops, referential integrity, etc.

You also don't mention whether this is done via the API or over the network; I would only be interested in the over-the-network benchmarks.

What I would be really interested in are real-world benchmarks that track the number of HTTP requests handled per second: not some tight benchmark loop, but real end-to-end HTTP requests from the browser, federated through Node, to cache.node and Caché and back again.

Plus, I am not really interested in global access from Node; I want to work with objects everywhere and gain the performance of letting optimised queries run on Caché without shuffling data back and forth unnecessarily.

I know cache.node does handle objects, but it just doesn't fit my needs; I'm not a fan of the API and it is missing some functionality that I need.

Fundamentally, there is a mismatch between the CoffeeTable framework that I have developed and the cache.node API.

Basically, it just didn't seem like a good idea to end up using cache.node as nothing more than a message forwarder with potential overhead that I can't see. What I ended up with is a lean 142 lines of Node code that is practically idling in the benchmarks I have done so far.

I also have concerns over the delays I have read about with cache.node releases keeping up with the latest Node.JS version.

The other thing is: where is its open-source home? I looked and couldn't find it; it would have been nice to inspect the code, see how it works, and fill in the gaps where the documentation does not go deep enough.

Ultimately, why not have alternatives: different solutions for different needs.

It's a very simple JSON-RPC wire protocol. The JSON is stripped of formatting and then delimited with ASCII 13+10 (CR+LF), characters which are already escaped inside the JSON itself. Nothing more complicated than that.
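
As a rough sketch of the framing idea (this is not the actual connector code, just the shape of it in TypeScript):

// Frame a message: single-line JSON terminated by CR+LF. Any CR or LF inside
// string values is escaped by JSON.stringify as \r and \n, so the delimiter
// can never occur inside a message body.
function frame(msg : object) : string {
    return JSON.stringify(msg) + '\r\n';
}

// Unframe a stream: buffer incoming data and hand back one parsed object per CR+LF.
function makeReader(onMessage : (msg : any) => void) {
    let pending = '';
    return (chunk : Buffer) => {
        pending += chunk.toString('utf8');
        let i : number;
        while ((i = pending.indexOf('\r\n')) >= 0) {
            onMessage(JSON.parse(pending.slice(0, i)));
            pending = pending.slice(i + 2);
        }
    };
}

On either end it is just socket.write(frame(msg)) to send and socket.on('data', makeReader(handle)) to receive.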

> How do you deal with license usage? How much does it escalate with a fair amount of users, and how do you manage all of that?

I can only refer to benchmarks at the moment, which is why the Node connector is still marked as experimental.

The setup was a single three-year-old commodity desktop machine running a stress tool, Node, Caché, and about 10 other open applications.

The stress tool would simulate 50 users sending JSON-RPC requests over HTTP to a Node queue; a single Caché process would collect these requests over TCP, unpack the JSON, perform a couple of database operations, create a response object, serialise it, and pass it all the way back.

With a single Caché process consuming a single licence, I recorded an average of 1,260 requests per second.

As requested, here are some snippets of the ORM library that works for both browser and Node.JS. This is from some of the 30,000 unit tests that I built on top of the Northwind database data.

The solution starts with a Caché class that extends the Cogs.Store class, which is just a normal %Persistent class with extra methods.

Class Cogs.CoffeeTable.Tests.Northwind.Customers Extends Cogs.Store
{

Parameter DOMAIN = "northwind";

Property CustomerID As %String;

Property CompanyName As %String;

Property ContactName As %String;

Property ContactTitle As %String;

Property Address As %String;

Property City As %String;

Property Region As %String;

Property PostalCode As %String;

Property Country As %String;

Property Phone As %String;

Property Fax As %String;

Index CustomerIDIndex On CustomerID [ IdKey, PrimaryKey, Unique ];

}

There are then two approaches to develop in JavaScript. The first is to include a client API script that is dynamically created on the fly; this includes a promise polyfill and an HTTP request wrapper. This is a good approach for small to medium projects.

In this instance there will be a global object called northwind that will contain a set of database objects, each with a set of CRUD methods.

A basic example of using find...

northwind.customers.find().then( function(data) { console.log(data) } )

The second approach uses TypeScript and Browserify, taking a modern ES6 approach.

A code generator produces a TypeScript Customer schema class...

import {Model} from 'coffeetable/Model';

export class CustomerSchema extends Model {

    static _uri : string = '/northwind/customers';

    static _pk : string = 'CustomerID';

    static  _schema = {
        Address : 'string',
        City : 'string',
        CompanyName : 'string',
        ContactName : 'string',
        ContactTitle : 'string',
        Country : 'string',
        Fax : 'string',
        Phone : 'string',
        PostalCode : 'string',
        Region : 'string',
        CustomerID : 'string'
    };

    CustomerID : string;
    Address : string;
    City : string;
    CompanyName : string;
    ContactName : string;
    ContactTitle : string;
    Country : string;
    Fax : string;
    Phone : string;
    PostalCode : string;
    Region : string;

}

as well as a model class which can then be extended without affecting the generated class...

import {CustomerSchema} from '../schema/Customer';

export class Customer extends CustomerSchema {

    //extend the proxy client class here

}

Now I can develop a large-scale application around these proxy objects and benefit from schema validation and auto type conversion, as well as object auto-complete inside IDEs such as WebStorm.

Create and save a new object...

import {Customer} from "./model/Customer";

var customer = new Customer();
//Each one of these properties auto completed
customer.CustomerID = record[0];
customer.CompanyName = record[1];
customer.ContactName = record[2];
customer.ContactTitle = record[3];
customer.Address = record[4];
customer.City = record[5];
customer.Region = record[6];
customer.PostalCode = record[7];
customer.Country = record[8];
customer.Phone = record[9];
customer.Fax = record[10];
customer.save().then( (savedCustomer : Customer) => {
    console.log(customer)
}).catch( err => {
    console.log(err)
})

Open it...

Customer.open('ALFKI').then( customer => {
    console.log(customer.CompanyName);    
})

Search...

Customer.find({
    where : "City = 'London' AND ContactTitle = 'Sales Representative'"
}).then( customers => {
    console.log(customers);
});

The last example returns a managed collection of objects. In this instance the second approach includes a more sophisticated client library to work with the collection, such that you can filter and sort the local array without needing to go back to the server.

customers.sort("Country")

This triggers a change event on the customers collection, which would have been scoped to a view; so for instance you might have a React component that subscribes to the change and sets its state when the collection changes, along the lines of the sketch below.
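
The subscription API is still settling, so the method names below are illustrative rather than final, but the flow is:

import {Customer} from './model/Customer';

Customer.find({ where : "Country = 'UK'" }).then( customers => {

    // illustrative: subscribe to the collection's change event; a React
    // component scoped to this collection would call setState in here so
    // the view re-renders whenever the collection changes
    customers.onChange( () => {
        // this.setState({ customers : customers.toArray() })
    });

    // local sort, no server round trip, fires the change event above
    customers.sort("Country");
});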

Motivation

I needed to develop an application that could run on existing customer databases (Ensemble -> Caché, Mirth -> PostgreSQL, as well as MongoDB), such that the database can be swapped in and out without changing a line of client code.

I looked at adapting one of the existing ORM libraries, such as Sequelize or Sails, but it was easier to start from scratch to leverage Caché without needing lots of duct tape to get it working.

This new solution required a JSON-RPC interface and more JSON functionality from Caché, hence re-engineering some old JSON libs and building out the Cogs library.

Moving forward, the plan is to release CoffeeTable as a separate NPM library, and Cogs will essentially be a server-side adapter to it.

Probably the wrong forum to talk about GT.M, but I have a long-standing internal library that was designed for this eventual abstraction, and it will be one of the databases added to CoffeeTable down the line.

I ended up writing my own solution.

It's a TCP wire-based solution that uses JSON-RPC messages as the main protocol.

Node starts up a concurrent TCP listener and then Caché jobs off as many client connections as required.

It's surprisingly simple on the Node side: minimal glue to bind HTTP requests to TCP messages with zero blocking.
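
Stripped right down, and with the real queueing, error handling and socket cleanup left out, the glue looks something like the sketch below; the ports and message shape are illustrative, not the actual connector code:

import * as net from 'net';
import * as http from 'http';

let nextId = 1;
const pending = new Map<number, http.ServerResponse>();  // request id -> HTTP response waiting on it
const workers : net.Socket[] = [];                        // Caché processes that have connected

// Node starts the listener, then Caché jobs off as many client connections as required
net.createServer( socket => {
    workers.push(socket);
    let buffered = '';
    socket.on('data', chunk => {
        buffered += chunk.toString('utf8');
        let i : number;
        while ((i = buffered.indexOf('\r\n')) >= 0) {
            const msg = JSON.parse(buffered.slice(0, i));
            buffered = buffered.slice(i + 2);
            const res = pending.get(msg.id);
            if (res) {
                pending.delete(msg.id);
                res.end(JSON.stringify(msg.result));
            }
        }
    });
}).listen(9000);

// each HTTP request becomes a JSON-RPC message pushed to one of the workers;
// the response comes back asynchronously over TCP, so nothing here blocks
// (assumes at least one Caché worker has connected; real code queues otherwise)
http.createServer( (req, res) => {
    let body = '';
    req.on('data', d => body += d);
    req.on('end', () => {
        const id = nextId++;
        pending.set(id, res);
        const rpc = { jsonrpc : '2.0', id : id, method : req.url, params : body ? JSON.parse(body) : {} };
        workers[id % workers.length].write(JSON.stringify(rpc) + '\r\n');
    });
}).listen(8080);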

I did quite a lot of testing on it at the time I wrote it and found that I could get twice as many RPC messages into Caché via Node as I could via CSP. My guess is that the RPC route does not have to deal with all of the HTTP protocol handling.

I then wrapped the same event emitter used for the HTTP requests with a small promise caller and was able to do some testing of proxy objects inside Node itself. It's a little bit experimental on the Node side, but I am able to run the 30,000 browser unit tests (lots of automated ones in there) over the ORM library and it just works.
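
The promise wrapper itself is only a few lines; something along these lines, with illustrative names:

import { EventEmitter } from 'events';

// wrap the emitter that already carries the JSON-RPC traffic so that Node
// code can await calls directly instead of wiring up listeners by hand
function rpcCall(emitter : EventEmitter, method : string, params : any) : Promise<any> {
    return new Promise( (resolve, reject) => {
        const id = Math.random().toString(36).slice(2);
        emitter.once('response:' + id, msg => msg.error ? reject(msg.error) : resolve(msg.result));
        emitter.emit('request', { jsonrpc : '2.0', id : id, method : method, params : params });
    });
}

// e.g. rpcCall(rpc, 'northwind.customers.find', { where : "City = 'London'" })
//          .then( customers => console.log(customers) );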

Not sure I would want to put it into production until it's been kicked around some more.

Hi Alexy,

You've fished out a property that is of type Cogs.Lib.Types.Json.

In its property state the JSON is stored as a plain string, hence the odd escaping you are seeing.

When it's serialised back out to JSON it will be correctly escaped, which you can see in the JSON dump I posted before it.

This provides the best of both worlds, schema driven properties that can have one or more non schema properties for generic data storage.

BTW, Cogs includes JSON classes for serialising and de-serialising to and from arrays and globals as well; they are only 50 lines of code each, so it will be interesting to compare them.

Sean.