go to post Rob Tweed · Sep 11, 2017 Let me repeat this: a programming language (MUMPS / Cache ObjectScript) with a built-in database. I think this is a fundamental aspect that others have been missing when they invented new programming languages: the innate characteristic that both databases and programming languages share, which is pointer- and reference-based logic. So I believe it's time to go back and fix this for new-generation databases AND post-modern programming languages too.
This is a core part of the QEWD.js project: to make JavaScript a first-class language for Global Storage databases - and therefore give JavaScript a built-in database. The cache.node module provides the high-performance in-process connection needed to allow the intimate relationship between JavaScript and the Cache database engine. The ewd-document-store module aims to provide the JavaScript equivalent of the ^ in COS (ie blurring the distinction between in-memory and on-disk JavaScript objects). JavaScript's dynamic, schemaless objects are a perfect fit with the dynamic, schemaless nature of Global Storage, making it an ideal modern alternative to COS.
For more information see the online tutorial at http://docs.qewdjs.com/qewd_training.html - specifically parts 17 - 27
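To make the idea of "blurring in-memory and on-disk objects" concrete, here's a minimal sketch. This is NOT the actual ewd-document-store API - the class and method names below are invented for illustration, and a plain in-memory Map stands in for the Global Storage engine. The point is just the mapping: each leaf of a JavaScript object becomes one global node addressed by its subscript path.

```javascript
// Illustrative sketch only: a toy stand-in for Global Storage showing
// how a JavaScript object can be flattened into global-style nodes.
// ToyGlobalStore, set, get and setDocument are hypothetical names.

class ToyGlobalStore {
  constructor() {
    // key: JSON-encoded [global, ...subscripts]; value: node data
    this.nodes = new Map();
  }
  set(global, subscripts, value) {
    this.nodes.set(JSON.stringify([global, ...subscripts]), String(value));
  }
  get(global, subscripts) {
    return this.nodes.get(JSON.stringify([global, ...subscripts]));
  }
  // Store a JS object by recursively flattening each leaf into a node
  setDocument(global, subscripts, obj) {
    for (const [key, value] of Object.entries(obj)) {
      if (value !== null && typeof value === 'object') {
        this.setDocument(global, [...subscripts, key], value);
      } else {
        this.set(global, [...subscripts, key], value);
      }
    }
  }
}

const store = new ToyGlobalStore();
store.setDocument('patient', ['123456'], {
  name: 'Rob',
  address: { city: 'Redhill', country: 'UK' }
});
console.log(store.get('patient', ['123456', 'address', 'city'])); // 'Redhill'
```

In the real thing, ewd-document-store does this against an actual Global Storage database via cache.node, so the "object" persists across processes and restarts - see parts 17 to 27 of the tutorial for the genuine API.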
go to post Rob Tweed · Sep 5, 2017 I've now implemented the functionality for JSON Web Tokens and QEWD-based MicroServices. All the detail is described in the following parts of the Online Course:
https://www.slideshare.net/robtweed/ewd-3-training-course-part-43-using-json-web-tokens-with-qewd-rest-services
https://www.slideshare.net/robtweed/ewd-3-training-course-part-44-creating-microservices-with-qewdjs
https://www.slideshare.net/robtweed/ewd-3-training-course-part-45-using-qewds-advanced-microservice-functionality
If you're just getting started with QEWD, it's a good idea to understand how to use it as a straightforward REST Server first:
https://www.slideshare.net/robtweed/ewd-3-training-course-part-31-ewdxpress-for-web-and-rest-services
As you'll see once you start to delve into these tutorials, this is a very powerful technology, aimed at delivering massively scalable, high-performance, highly-secure distributed and federated solutions - all available today for your Cache-based applications with Open Source software.
----------------------------------------------------------
As a reminder to anyone new to QEWD, an introduction to the thinking and architecture behind it:
https://medium.com/the-node-js-collection/having-your-node-js-cake-and-e...
For an overview of the whys and hows of QEWD's JWT and MicroService architecture:
https://www.slideshare.net/robtweed/qewdjs-json-web-tokens-microservices
go to post Rob Tweed · Aug 10, 2017 Folks may be interested to see that this article has now been re-published by the Node.js Federation themselves - currently headlining at https://medium.com/the-node-js-collection
Specifically:
https://medium.com/the-node-js-collection/having-your-node-js-cake-and-e...
go to post Rob Tweed · Jul 20, 2017 By the way, using cache.node as the interface gives the benefit of a high-performance in-process connection to Cache from JavaScript - significantly faster than using a networked connection, as it connects at a very low level directly into the core global engine via Cache's C-based call-in interface. There's still currently a limitation, however, due to a V8 API bottleneck, described here:
https://bugs.chromium.org/p/v8/issues/detail?id=5144#c1
My simple experiments comparing Global access performance via native COS versus cache.node and JavaScript show that connecting from Node.js via cache.node provides only about 10% of native COS performance - so a 90% performance reduction, apparently all due to this V8 bottleneck! Nevertheless, my comparisons of MongoDB performance versus using Globals as a document database show that MongoDB is only slightly faster.
If this V8 API problem was fixed, then access to Cache via cache.node would be as fast as using native COS. The outcome, if my experimentation has been correct, would be that MongoDB would be significantly out-performed by Cache as a document database. Additionally, my prediction would be that your MongoDB emulation could also outperform the real thing - a rather interesting and somewhat startling result if true.
I would love to see this V8 bottleneck resolved. I wonder if there's anyone in this community who could take on that challenge, or perhaps knows someone with the skills to take it on? I think a lot of people would sit up and take notice of what could become the fastest Node.js-connected NoSQL database on the planet.
Rob
go to post Rob Tweed · Jul 19, 2017 > The Caché Node.js driver could not access Caché classes, but could make program calls from Caché. This fact resulted in the creation of a small tool - a kind of bridge between the driver and Caché classes.
This is incorrect - cache.node can access Cache Objects. See:
http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...
Also take a look at this:
https://github.com/chrisemunt/mongo-dbx
as an alternative MongoDB interface
go to post Rob Tweed · Jul 11, 2017 You miss the point - Node.js / JavaScript is now one of the most widely used back-end languages. By comparison, COS is almost unknown in the IT mainstream. Sure, you can do everything in COS, but to do so, you need to learn COS. If you have to learn COS to use Cache, most people in IT won't use it as the back-end to web applications.
Write your apps the way Ward suggests, and you open up Cache to a massively bigger audience, and you can also recruit JavaScript guys to do the back-end development, without them having to learn COS, but they get all the benefits of the database.
Regarding performance, see my last comment on this article:
https://community.intersystems.com/post/building-qewd-nodejs-cache-rest-...
and see for yourself - look at the browser console network logs and the X-ResponseTime response header values to see the back-end response time using Node.js + Cache. Here's the demo link:
http://34.201.135.122:8080/
Suggestion - put together a COS/CSP version of the RealWorld Conduit back-end and see how it performs by comparison, so you get an apples v apples comparison
go to post Rob Tweed · Jul 11, 2017 > So far NodeJS is only used as a general dev environment and build pipeline manager.This is just not true
go to post Rob Tweed · Jul 8, 2017 Representing XML and JSON in Global Storage is an interesting area.
In the case of XML, things are a little more complex than described in your article, since there's something of an ambiguity between information stored in the text value of an element and in an attribute of an element. For this reason, the better representation is to model the XML DOM in Global Storage nodes. You'll find a pretty thorough (and free, Open Source) implementation here:
https://github.com/robtweed/EWD
This article provides more information:
https://groups.google.com/forum/#!searchin/enterprise-web-developer-comm...
Once in DOM format you can apply cool stuff such as XPath querying. See:
https://groups.google.com/forum/#!searchin/enterprise-web-developer-comm...
The DOM is essentially modelled in Global Storage as a graph. DOM programming is extremely powerful, allowing all sorts of complex things to be performed very efficiently and simply.
JSON is much simpler, being a pure hierarchy. The only ambiguity with Global Storage is that JSON only allows leaf nodes to hold data - intermediate nodes cannot. However, Global nodes can be intermediate nodes AND store data. So whilst all JSON trees can be represented as a Global tree, not all Global trees can be represented as JSON.
The Node.js-based QEWD.js framework uses Global Storage as an embedded database to provide Session storage, persistent JavaScript Objects and a fine-grained Document Database. To see how this is done, see the training course slide decks:
http://ec2.mgateway.com/ewd/ws/training.html
- specifically parts 17 - 27
Keep up the great work on this series of articles!
Rob
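The JSON/Global asymmetry described above can be demonstrated in a few lines of plain JavaScript. This is a sketch, not QEWD code: global nodes are represented here as simple [subscripts, value] pairs, and the function rebuilds a JSON object from them, flagging the one case JSON cannot express - an intermediate node that also holds data.

```javascript
// Sketch: every JSON tree maps onto a Global tree, but not vice versa.
// Each node is a [subscripts[], value] pair.

function globalNodesToJSON(nodes) {
  const root = {};
  for (const [subscripts, value] of nodes) {
    let obj = root;
    for (const key of subscripts.slice(0, -1)) {
      if (typeof obj[key] !== 'object' || obj[key] === null) {
        if (key in obj) {
          // This intermediate node already holds data: JSON cannot
          // represent both a value and children at the same node.
          throw new Error('intermediate node with data: not JSON-representable');
        }
        obj[key] = {};
      }
      obj = obj[key];
    }
    obj[subscripts[subscripts.length - 1]] = value;
  }
  return root;
}

// A pure hierarchy (data only on leaf nodes) converts cleanly...
console.log(globalNodesToJSON([
  [['name'], 'Rob'],
  [['address', 'city'], 'Redhill']
]));
// ...but a Global tree with data on an intermediate node does not:
try {
  globalNodesToJSON([
    [['address'], 'main'],          // intermediate node holding data
    [['address', 'city'], 'Redhill']
  ]);
} catch (e) {
  console.log(e.message);
}
```

The first call yields { name: 'Rob', address: { city: 'Redhill' } }; the second throws, which is exactly the "not all Global trees can be represented as JSON" point.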
go to post Rob Tweed · Jun 1, 2017 Perhaps someone from InterSystems should respond with respect to Sean's views on cache.node?
go to post Rob Tweed · Jun 1, 2017 Is there a reason you don't use the cache.node interface? It will support in excess of 100k global sets/second per connection. Is there something you believe it doesn't do that you need?
go to post Rob Tweed · May 31, 2017 Very good article - good to see Global Storage being discussed. Thanks for the compliments! We need this kind of discussion to be promulgated out to a wider audience, not just preached here to the (largely) already converted. I despair when I go to technical conferences where not one single attendee I speak to has heard of Cache or Global Storage. Yet it's a database technology and architecture that is ideally suited to today's requirements (and particularly suited to the burgeoning JavaScript world), and is crying out to be more widely known about. I do my best to generate new interest in the mainstream, but feel I'm something of a lone voice in the wilderness.
The other thing I'd love to see is to have at my disposal within Node.js a similar level of performance to what the article has described when using native Cache ObjectScript. It turns out there's just one Google V8 bottleneck in the way - if that could be sorted out, the idea of having a database in Node.js that could persist JSON data at speeds in excess of 1 million name/value pairs per second would blow every other database clean out of the water. I would LOVE to see this rectified, and if fixed, it could create a huge wave of interest (people at those conferences I go to might actually want to find out about it!)
Here's the issue:
https://bugs.chromium.org/p/v8/issues/detail?id=5144#c1
Anyway, looking forward to part 2 of the article. Someone should do a talk at the Developers Conference... ??
go to post Rob Tweed · May 27, 2017 Have you tried some tests using a simple Node.js test-harness file built around cache.node, building out your example that uses cache.node's invoke_classmethod and extending it step by step until it fails?
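The step-by-step approach suggested above could be structured like this. It's a hedged sketch: the real `db` object would come from cache.node (loading the module and calling its open method), and the exact argument names accepted by invoke_classmethod should be checked against your cache.node documentation. Here a stub stands in for the opened connection so the stepping logic itself is runnable.

```javascript
// Sketch of an incremental test harness: run each variant of the call
// in order and stop at the first failure, so you can see exactly which
// extension of the call broke. The stub below is NOT cache.node; swap
// it for a real opened connection to test against Cache.

function runSteps(db, steps) {
  for (const step of steps) {
    try {
      const result = step.call(db);
      console.log('OK:', step.name, JSON.stringify(result));
    } catch (e) {
      console.log('FAILED at:', step.name, '-', e.message);
      return step.name; // name of the first failing step
    }
  }
  return null; // all steps passed
}

// Stub standing in for an opened cache.node connection
const stubDb = {
  invoke_classmethod(args) {
    if (!args.class) throw new Error('class name required');
    return { ok: 1, result: 'stub' };
  }
};

const firstFailure = runSteps(stubDb, [
  { name: 'basic call',    call: db => db.invoke_classmethod({ class: 'Some.Class', method: 'SomeMethod' }) },
  { name: 'missing class', call: db => db.invoke_classmethod({ method: 'SomeMethod' }) }
]);
console.log('first failing step:', firstFailure);
```

Start with the simplest call that works, then add arguments one at a time as new steps; the harness pinpoints the first combination that fails.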
go to post Rob Tweed · May 20, 2017 As far as the underlying database storage engine is concerned, yes:
https://www.slideshare.net/robtweed/ewd-3-training-course-part-18-modell...
http://mgateway.com/docs/universalNoSQL.pdf
go to post Rob Tweed · May 11, 2017 I've just pushed out a new set of enhancements to QEWD that are described here:
https://robtweed.wordpress.com/2017/05/11/qewd-now-supports-koa-js-and-u...
I've upgraded the Cache-based RealWorld Conduit demo to make use of Koa.js. As suggested in the article, take a look at the X-ResponseTime response headers in your browser's JavaScript console to get an idea of just how fast the combination of QEWD, Koa.js, Node.js + Cache really is. The URL for the live demo is:
http://34.201.135.122:8080
Rob
go to post Rob Tweed · May 10, 2017 Hi David
Glad to hear of your success in getting it working for you.
There's a right way and a somewhat dodgy way to do what you want to do.
The right way is to have separate instances of QEWD, each connected to a particular namespace and listening on a different port. You could probably proxy them via some URL re-writing (eg with nginx at the front-end).
The dodgy way, which I think should work, is to have a function wrapper around $zu(5) (or equivalent) to change namespace, and make a call to this in each of your back-end handler functions. If you do this you need to make sure that you switch back to the original namespace before your finished() call and return from your function. If an unexpected error occurs, you need to realise that your worker process could end up stuck in the wrong namespace.
Your namespace-switching function would need to be callable from all your Cache namespaces - use routine mapping for this.
Doing this namespace switching will be at your own risk - see how it goes.
BTW, for these types of situations where you want to do the same thing before and after every handler function, you might find the latest feature (beforeHandler and afterHandler) described here very useful:
https://groups.google.com/forum/#!topic/enterprise-web-developer-communi...
Rob
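To illustrate the "switch back even if something goes wrong" caveat from the dodgy way, here's a sketch of a handler wrapper. Everything here is hypothetical: switchNamespace stands in for whatever wrapper you put around $zu(5) on the Cache side, and the handler signature is only loosely modelled on QEWD's. The point is the try/finally, which restores the original namespace even if the handler throws.

```javascript
// Sketch: wrap a back-end handler so the namespace is switched on entry
// and always restored on exit, even on an unexpected error.
// switchNamespace(ns) is assumed to switch to ns and return the
// previous namespace (an invented contract, not a real QEWD API).

function withNamespace(switchNamespace, targetNs, handler) {
  return function (messageObj, session, send, finished) {
    const originalNs = switchNamespace(targetNs);
    try {
      handler(messageObj, session, send, finished);
    } finally {
      switchNamespace(originalNs); // worker never stays stuck in targetNs
    }
  };
}

// Demo with a stub switcher that just records calls
let currentNs = 'USER';
const calls = [];
function stubSwitch(ns) {
  const previous = currentNs;
  currentNs = ns;
  calls.push(ns);
  return previous;
}

const wrapped = withNamespace(stubSwitch, 'SAMPLES', (m, s, send, finished) => {
  finished({ ns: currentNs }); // handler runs in the target namespace
});
wrapped({}, {}, null, result => console.log('handler ran in:', result.ns));
console.log('restored to:', currentNs);
```

Note one limitation: with an asynchronous handler the finally block would run before the async work completes, so in that case you'd restore the namespace inside the finished() callback instead. This is also exactly the kind of plumbing the beforeHandler/afterHandler feature mentioned above is meant to centralise.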
go to post Rob Tweed · May 4, 2017 I've set up this instance of the Conduit Application:
http://34.201.135.122:8080
It uses:
- Cache 2015.2, running on Ubuntu Linux and Node.js v6.10, on an AWS EC2 instance
- QEWD with 2 child processes
- qewd-conduit RealWorld Conduit back-end
- The React/Redux version of the front-end for Conduit, with its data served up via REST calls to the qewd-conduit back-end
Note: no changes were needed for the application to run with Cache.
go to post Rob Tweed · May 4, 2017 QEWD has now been accepted as one of the official RealWorld Conduit back-ends:
https://github.com/gothinkster/realworld
go to post Rob Tweed · May 2, 2017 Thanks Ward!
I've now added a "Guided Tour" to the ReadMe of the qewd-conduit Github repo, which people will find helpful as an explanation of how the app works and is put together.
Please use this as a guide to building your own QEWD/Cache applications.
go to post Rob Tweed · May 1, 2017 Yes - qewd-conduit is a pure REST back-end. As such it has no front-end markup (though it could). The /api endpoint does not, in itself, have any meaning - it's just the root URL for all the valid Conduit endpoints, eg GET /api/tags. qewd-conduit returns JSON responses, not HTML.
go to post Rob Tweed · May 1, 2017 I wasn't planning on it - perhaps someone should. All the instructions for setting up QEWD with Cache are here:
https://www.slideshare.net/robtweed/installing-configuring-ewdxpress
Rob