The purpose of IRIS embedded source control features is to keep code changes made in the database synchronized with the server filesystem, to automate any source control provider-specific operations needed to maintain that synchronization, and to provide concurrency controls for developers working in a shared environment (when relevant). In the days of Studio, all code changes were made in the database first, rather than on any filesystem, so you needed an embedded source control solution to get real source control at all. With client-side editing in VSCode, there are *still* some changes to code that are made "in the database first" - specifically, anything edited through the Management Portal's graphical editors for interoperability and business intelligence. For those use cases, embedded source control is relevant even when you're developing against a local Docker container (which I'd consider modern best practice and prefer over a remote/shared environment where feasible) - otherwise, you need to jump through extra hoops to get your changes onto the client/server filesystem.
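
To make those "extra hoops" concrete, here's a minimal sketch of the manual alternative - the class name and path are hypothetical, but $system.OBJ.ExportUDL is the standard export API:

```objectscript
// A business process or BI element edited in the Management Portal lives only in the
// database until you export it to your working tree yourself (or embedded source
// control does it for you on save).
set item = "Demo.MyBusinessProcess.cls"
set target = "/home/dev/my-repo/src/Demo/MyBusinessProcess.cls"

// Export in UDL format so the file matches what client-side editing would produce
set sc = $system.OBJ.ExportUDL(item, target)
if 'sc { do $system.Status.DisplayError(sc) }
```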

In a client-centric mode, it's totally fine to use git-source-control alongside the git command line, built-in VSCode tools, or your preferred Git GUI (GitHub Desktop, GitKraken, etc.). However, this misses an important benefit of git-source-control: when you pull, checkout, etc. through the extension, we can automatically reflect the operation in IRIS by loading added/modified items and deleting from the database any items that were removed on the filesystem. If you make changes on the filesystem through one of those other channels, it's up to you to make sure they're reflected properly in IRIS.
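
For example (a minimal sketch - the path and qualifiers are assumptions about your setup), after a plain `git pull` on the command line you'd have to reload the changed items into the right namespace yourself:

```objectscript
// Load and compile everything under the repo's source directory into the current
// namespace. "ck" compiles and keeps source; the final 1 recurses into subdirectories.
do $system.OBJ.LoadDir("/home/dev/my-repo/src", "ck", .errors, 1)
```

And even that doesn't cover deletions - items removed on the filesystem would still need to be deleted from the database separately, which is exactly the bookkeeping the extension automates.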

Another benefit of git-source-control for local development is that when working across multiple IPM packages loaded from separate local repos, changes made via isfs folders are automatically reflected in the correct repository. This is more natural, especially for established ObjectScript developers (e.g., "I just want to edit this class, then this other class in a different package"), than a client-centric multi-root VSCode workspace, which could achieve the same thing but with a bit more overhead.

Hi @Jani Hurskainen - the short answer is "yes, TestCoverage is coupled to %UnitTest."
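
For context, the coupling is direct: if I remember the repo right, TestCoverage's entry point extends %UnitTest.Manager, so you run tests the same way and just get coverage data collected alongside. A typical invocation (the test spec and qualifier here are placeholders; see the TestCoverage README for the full set of options) looks like:

```objectscript
// Runs %UnitTest-style tests with coverage tracking; "MyPkg.Tests" is a placeholder
// test spec and /nodelete is a standard %UnitTest qualifier.
do ##class(TestCoverage.Manager).RunTest("MyPkg.Tests", "/nodelete")
```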

Can you elaborate on what you have in mind by "other unit testing frameworks" and/or what you're trying to achieve? %UnitTest is the only unit testing framework for ObjectScript that I'm aware of. Is your objective to unify Python and ObjectScript unit tests?

I strongly relate to this. Zen was a huge part of what sold me on InterSystems tech 15 years ago when I started here as an intern - for all the reasons you've described - and if I want to throw together a really quick POC that just has results of a class query shown in a table, with maybe some basic interactions with the data, I might still use it.

That said, for my team's work and even for my own personal projects, I've found the combination of isc.rest and isc.ipm.js to be *almost* as quick as Zen. With something like Angular on top of an IRIS back-end (consisting of a bunch of %Persistent classes), you need to write:
1. REST APIs for all your basic CRUD operations, queries, and business logic
2. Client code to call all those REST APIs
3. Client code for all the models used in those REST APIs
4. The actual UI

Suppose you want to make a simple change to one of your models - say, adding a property to a class and making it available in the UI. With Angular, that probably means changes at all four levels; with Zen, you get to skip 1-3 entirely. That's compelling. The inevitable side effect of that convenience, though, is that your application's API surface (and therefore attack surface) is enormous and near-impossible to fully enumerate. It is possible to secure a Zen UI, but it's much easier to shoot yourself in the foot.

isc.rest makes (1) super easy: add a parent class to your %Persistent class and do a few simple parameter/method overrides to get CRUD and queries basically for free, then write a bit of XML if you want to do fancier things like exposing business logic or class queries. This provides enough metadata to generate an OpenAPI spec, which can then be used to automate (2) and (3) with the help of openapi-generator. So while you can't skip 1-3 entirely, this toolset makes them all significantly faster.
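
As a very rough sketch of what (1) looks like - the class, parameter, and method names here are from memory of the isc.rest README (%pkg.isc.rest.model.adaptor, RESOURCENAME, CheckPermission), so check the current docs before copying:

```objectscript
/// Hypothetical persistent class exposed over REST via isc.rest.
/// Extending the adaptor is most of what it takes to get CRUD + query endpoints.
Class Demo.Widget Extends (%Persistent, %pkg.isc.rest.model.adaptor)
{

/// Name under which this resource is exposed (e.g., .../widget)
Parameter RESOURCENAME = "widget";

Property Name As %String [ Required ];

Property Price As %Numeric;

/// Required security hook: decide whether the current user may perform the given
/// operation (create/read/update/delete/query) on this resource.
ClassMethod CheckPermission(pID As %String, pOperation As %String, pUserContext As %RegisteredObject) As %Boolean
{
    // Wide open for a quick POC; lock this down for real applications.
    quit 1
}

}
```

That same class definition is what feeds the generated OpenAPI spec, so (2) and (3) fall out of it via openapi-generator.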

Sorry we missed that. I started to look around for best practices and forgot to circle back.

It's a fantastic question, and I think your gut feeling from https://github.com/intersystems/git-source-control/discussions/343 is correct - the local-to-the-server repo should be in a place accessible from all mirror members, provided you can do this in a way that doesn't introduce a single point of failure operationally.

If that location is unavailable, you won't be able to do development, but operations on the running instance shouldn't otherwise be impacted (and an unavailable repo location is something that would need to be fixed immediately anyway).