JavaScript is evolving day by day and really, really fast. Check out ECMAScript 6 and even 7 to see what it's becoming.

COS could evolve faster as well if there were some kind of COS API for parsing and rewriting the language itself: something that could transpile features unknown to the current COS language, just like BabelJS does.

However, COS is proprietary, and as the Linux vs. Windows comparison shows, open-source projects evolve faster thanks to community involvement. I don't mean making it fully open, but giving the community a way to interact with it by submitting proposals for new features.

Hello Maks, surely you can. We just need to rename it to something easier to remember, then set up README, CODE_OF_CONDUCT and LICENSE files.

This project was actually a response to another thread that you opened a while ago, so what motivated me to build it was the community feedback (yours included) on that thread.

https://community.intersystems.com/post/declarative-development-cach%C3%A9

Hello.

A new version will be published soon. This version adds support for stream properties and for using SQL along with %Dynamic instances.

Once these features reach a stable version, I'll publish a tutorial on how to use Frontier.

If you want to dive deep into it, I recommend taking a look at the router below. It's really self-explanatory.

https://github.com/rfns/frontier/blob/sql/cls/Frontier/UnitTest/Router.cls

If you import the WebApplicationInstaller class, you'll be able to test these features by navigating to localhost:57772/api/frontier/test/[one of the routes from the router above].

P.S.: Sorry for the bump.

I would like to, but I need some clue about how to isolate monoids from the global scope; that would push the development a lot further. I mean, it would be wonderful if subroutines could be passed as parameters, you know? But that's still out of my reach.

For now, what I can do is add more methods to handle different types of data.

No, it's a proof-of-concept; my purpose was to discover a way to express the minimal features needed to develop in a functional fashion.

You can use it in real projects and it will work as it should; the only real issue might be long-term performance, since this library uses %ConstructClone to keep the monoids pure.
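Just to illustrate the idea (the class and method below are made up for this example, they're not part of the library): each operation clones the receiver with %ConstructClone and mutates the clone, so the original instance stays untouched.

Class Sample.Point Extends %RegisteredObject
{

Property X As %Integer [ InitialExpression = 0 ];

/// Returns a new instance instead of mutating the receiver: the clone is
/// modified and returned, keeping the original value pure.
Method Add(value As %Integer) As Sample.Point
{
  set clone = ..%ConstructClone()
  set clone.X = clone.X + value
  return clone
}

}

So p.Add(2) leaves p.X at 0 and returns a new instance with X = 2; the cost is one %ConstructClone per operation, which is where the long-term performance concern comes from.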

Hello Kevin,

From my experience, you cannot use Studio's output window to ask the user for input, and I think that's because the output window is not really a terminal device.

I also tried reading input from that device, without success. I hope I'm wrong and that you can find an answer for that (and for my case as well).

Greetings everyone.

I have decided to publish the current version for your pleasure.

I'm now working on implementing SQL support and access policies, but you can already check it out.
Remember to leave some feedback here or via the issues if you feel like it.

Also, if you want to contribute, feel free to do so.

Best regards.

It doesn't, at least not directly. I used it to speed up comparisons by running git diff.


The idea as a whole was:

diff - whenever the user selected "Import" from the Source Control menu, it would run git diff instead, as long as git was installed and the current project was versioned (had a .git directory).

add - automatically add modified and new files to the staging area whenever an item is saved in Studio.

I ended up dropping those ideas because I noticed I was wasting resources, the feature needed to be made less exploitable, and it also went against my objective of creating something agnostic to version control systems.

If you want to see the result, here's the project in question.

Hello Athanassios,

I think you're on the right track. But that should be only the beginning.


To capture "writes" from Caché you need to redirect your I/O and write the data back to the device to which you bound your Python interpreter. This is your writer device. If you need help configuring your writer device, check the Job Command Example.
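As a rough sketch (the host, port and timeout below are placeholders, not the actual configuration), redirecting output to a TCP writer device looks something like this:

// Open a TCP device pointing at wherever the Python side is listening.
set writer = "|TCP|4200"
open writer:("127.0.0.1":4200):10
if '$test write "Could not reach the writer device",! quit

use writer                ; make it the current device
write "captured output",! ; anything written now goes to the listener
write *-3                 ; flush the TCP send buffer
close writer
use 0                     ; restore the principal device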

Remember that it's your responsibility to decide how to implement the buffer and how much it can hold before writing it back to the implementer. I do recommend using a stream here.
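For example (the threshold and variable names here are arbitrary), a temporary character stream avoids the maximum string length problem:

set buffer = ##class(%Stream.TmpCharacter).%New()
set chunk = "data produced by the job"
do buffer.Write(chunk)

// Once the buffer grows beyond some threshold, play it back and reset it.
if buffer.Size > 32000 {
  do buffer.Rewind()
  while 'buffer.AtEnd {
    write buffer.Read(16000)
  }
  do buffer.Clear()
}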

Now, I don't know whether simply calling the bound method lets you see its output before it returns. If it doesn't, you probably need to open a TCP connection from the implementer's side too and listen to the writer device using a separate thread.

Sadly, I don't have that code anymore, because it was a feature implemented on another branch; that's why I posted a Gist instead. I used it to preserve the important parts of the implementation after I deleted the branch. Check my post again for more details about the placeholders.

Port.SourceControl.Extension.VCS is actually quite simple: %OnNew initializes two temp file paths that serve as output and error feedback, and inheriting classes use those files as parameters along with >.
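Not the real class, but the gist is something like this (class and property names are illustrative):

Class Sample.VCS Extends %RegisteredObject
{

Property OutputFile As %String;

Property ErrorFile As %String;

/// Initializes the two temp file paths that subclasses append to their
/// commands using > (stdout) and 2> (stderr) redirection.
Method %OnNew() As %Status
{
  set ..OutputFile = ##class(%File).TempFilename("log")
  set ..ErrorFile = ##class(%File).TempFilename("err")
  return $$$OK
}

}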

Hello Benjamin,

This is how I bound a class to some git CLI commands in earlier versions of my project.

The main tip here is to use $ZF along with ">" to redirect the result to a file. This way you can make Caché aware of what happened when the command was executed (there's a read-back sketch after the sample further down).


You'll notice that you can even create a custom query for operations like log, diff, etc.

If you don't want to deal with the logic behind the outputs, you can simply use RunCommandViaZF from %Net.Remote.Utility.
Just remember that older versions don't have this method.
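If memory serves, it takes the command, a temp file name passed by reference, and the captured output by reference, roughly like this (double-check the class reference for your version before relying on it):

// Assumed usage of %Net.Remote.Utility:RunCommandViaZF.
set command = "git --version"
set sc = ##class(%Net.Remote.Utility).RunCommandViaZF(command, .tempFile, .output)
if sc write output, !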

Also, when reading the command string you'll notice a lot of {PLACEHOLDERS}. You don't need to implement them to make it work; just rewrite the string to use static parameters instead.

They are:

{VCS} - The absolute path to the git executable.

{SLASH} - Resolves to \ or / depending on the OS.
{Pn} - Where n is a sequential number; these are the parameters needed to call the command.

And finally, here's a sample:

do $zf(-1, "C:\Program Files (x86)\Git\bin\git.exe --work-tree=""C:\Projects\Test"" --git-dir=C:\Projects\Test\.git add cls > ""outputfilepath"" 2> ""errorfilepath""")
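And to close the loop mentioned above, reading the redirected file back is what actually makes Caché aware of the result. One simple way to do it ("outputfilepath" is still just the placeholder from the sample):

set file = ##class(%Stream.FileCharacter).%New()
do file.LinkToFile("outputfilepath")
while 'file.AtEnd {
  write file.ReadLine(), !
}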

Hello Coty.

I noticed that you starred my GitHub repository, and I thank you for that. :)


Back to your question: I think you're detecting changes in an unusual way, since you said you can't trigger an action when modifying static files. Just so you know, as long as you're working with Studio's SourceControl API, you should be able to do whatever you want whenever an item is modified. You're even free to decide how to restrict the implementation, regardless of the kind of item you're updating.

Look at this part to understand how it's done.
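Just to illustrate the point (this is not Port's actual code, only a minimal sketch): any class extending %Studio.Extension.Base gets callbacks for every item, static files included. For instance, OnAfterSave fires no matter what kind of item was saved:

Class Sample.SourceControl Extends %Studio.Extension.Base
{

/// Called by Studio after any item is saved: classes, routines and
/// CSP/static files alike. InternalName identifies the saved item.
Method OnAfterSave(InternalName As %String, Object As %RegisteredObject = "") As %Status
{
  write !, "Saved: ", InternalName
  return $$$OK
}

}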

About your use-case: we're actually testing Port with this development format. We have one code base (our development server) and multiple namespaces simulating different customer configurations and mock data (not really mock, actually their test data).

Even though this model works, our analysis shows it can get pretty frustrating for users coming from distributed version control, because they notice multiple developers interacting with their "repository". Still, it's already a step ahead of not versioning at all.

However, the team is expected to migrate all their source to projects, since Port nags the user when they try to save default projects and even detects items that are already owned by other projects. This forces the whole team to prioritize organizing their code base.

Thanks for pointing that out.

PUBLIC=1 is inherited from another project, one that supported the 2010 version and worked almost the same way. At that time there was no way to determine whether a method should be callable via HTTP or blocked, so I implemented that security flag.

Nowadays, since %CSP.REST uses UrlMap to restrict which methods can be called, you can consider PUBLIC deprecated. My bad.
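For reference, this is the mechanism I mean: in a %CSP.REST dispatch class, only methods referenced by a UrlMap route can be reached over HTTP, so an explicit PUBLIC flag is no longer necessary. A minimal sketch:

Class Sample.REST Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/ping" Method="GET" Call="Ping"/>
</Routes>
}

/// Reachable because a Route points to it; methods not listed in UrlMap
/// cannot be dispatched.
ClassMethod Ping() As %Status
{
  write "pong"
  return $$$OK
}

}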

Hello Jon.


Please note that the large JSON is not the problem itself; a specific field is.

I'm already avoiding a round trip by outputting to the device instead of using streams.
As a side note, I noticed that parsing/serializing with the %Dynamic API is much faster and more efficient than with %ZEN.proxyObject.
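To make the difference concrete (the object below is just an example): calling %ToJSON in a DO context streams the JSON straight to the current device, while capturing the return value materializes the whole string in memory.

set payload = {"name": "test", "items": [1, 2, 3]}

// Writes the serialized JSON directly to the current device,
// avoiding a string round trip (and the string length limit).
do payload.%ToJSON()

// Returns the JSON as a single string, which is subject to the
// maximum string length unless long strings are enabled.
set json = payload.%ToJSON()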


So I'd prefer to keep using it if possible. But since you said that Caché cannot handle such values, I suppose I can advise the user to enable long strings. I don't think any property content would ever get close to 3GB.