In this article, I will give some examples of how to build your own Docker image with InterSystems Caché/Ensemble.
Let's start from the beginning: the Dockerfile. A Dockerfile is a plain-text configuration file used to build a Docker image.
I would recommend using CentOS as the base distribution for the image, because InterSystems supports Red Hat, and CentOS is the most compatible distribution.
FROM centos:6
You can add your name as the author of this file.
MAINTAINER Dmitry Maslennikov <mrdaimor@gmail.com>
As a first step, we should install some dependencies and configure the operating system; here I also configure the time zone. These dependencies are needed for the installation process and for Caché itself.
# update OS + dependencies & run Caché silent install
RUN yum -y update \
 && yum -y install which tar hostname net-tools wget \
 && yum -y clean all \
 && ln -sf /usr/share/zoneinfo/Europe/Prague /etc/localtime
Let's define the folder where we will store the installation package.
ENV TMP_INSTALL_DIR=/tmp/distrib
Let's set up some arguments with default values. These arguments can be changed during the build process.
ARG password="Qwerty@12"
ARG cache=ensemble-2016.2.1.803.0
Then we should define some environment variables for silent installation.
ENV ISC_PACKAGE_INSTANCENAME="ENSEMBLE" \
    ISC_PACKAGE_INSTALLDIR="/opt/ensemble/" \
    ISC_PACKAGE_UNICODE="Y" \
    ISC_PACKAGE_CLIENT_COMPONENTS="" \
    ISC_PACKAGE_INITIAL_SECURITY="Normal" \
    ISC_PACKAGE_USER_PASSWORD=${password}
I decided to set initial security to the Normal level, so I have to provide a password.
You can look at the documentation to find more options.
WORKDIR ${TMP_INSTALL_DIR}
The working directory is used as the current directory for the following commands. If the directory does not exist, it will be created.
COPY cache.key $ISC_PACKAGE_INSTALLDIR/mgr/
You can include a license key file if you are not going to publish this image in a public repository.
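If you do plan to share the image, a common alternative is to mount the key at run time instead of baking it into the image. A minimal sketch, assuming the install directory used in this article and the ensemble-simple tag built further below; the host path is hypothetical:
docker run -d -p 57772:57772 -p 1972:1972 \
  -v /path/to/cache.key:/opt/ensemble/mgr/cache.key \
  ensemble-simple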
Now we need to get the installation package, and there are several ways to do it:
- Download it manually, place the file next to the Dockerfile, and use this line:
ADD $cache-lnxrhx64.tar.gz .
This command copies and extracts the package into our working directory.
- Download the file directly from the WRC:
RUN wget -qO /dev/null --keep-session-cookies --save-cookies /dev/stdout --post-data="UserName=$WRC_USERNAME&Password=$WRC_PASSWORD" 'https://login.intersystems.com/login/SSO.UI.Login.cls?referrer=https%253A//wrc.intersystems.com/wrc/login.csp' \
 | wget -O - --load-cookies /dev/stdin "https://wrc.intersystems.com/wrc/WRC.StreamServer.cls?FILE=/wrc/distrib/$cache-lnxrhx64.tar.gz" \
 | tar xvfzC - .
In this case, we need to pass a login and password for the WRC, so you can add these lines earlier in the file:
ARG WRC_USERNAME="username"
ARG WRC_PASSWORD="password"
But be aware that in this case the login/password can be extracted from the image, so this is not a secure approach (see the quick check after this list).
- And the preferable way: publish the file on an internal FTP/HTTP server in your company.
RUN wget -O - "ftp://ftp.company.com/cache/$cache-lnxrhx64.tar.gz" \
 | tar xvfzC - .
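As a quick check of the earlier point about build arguments: values consumed by RUN steps are typically visible in the image's layer metadata. A sketch, assuming the ensemble-simple tag used later in this article:
docker history --no-trunc ensemble-simple | grep -i wrc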
Now we are ready to install.
RUN ./$cache-lnxrhx64/cinstall_silent
Once the installation has completed, shut down the instance.
RUN ccontrol stop $ISC_PACKAGE_INSTANCENAME quietly
But we are not done yet. A Docker image needs a main control process (PID 1), and this task can be handled by the ccontainermain project by Luca Ravazzolo. So, download it directly from the GitHub repository:
# Caché container main process PID 1 (https://github.com/zrml/ccontainermain)
RUN curl -L https://github.com/zrml/ccontainermain/raw/master/distrib/linux/ccontainermain -o /ccontainermain \
 && chmod +x /ccontainermain
Clean up the temporary folder.
RUN rm -rf $TMP_INSTALL_DIR
If your Docker daemon uses the overlay driver for storage, we should add this workaround to prevent Caché from failing to start with <PROTECT> errors.
# Workaround for an overlayfs bug which prevents Cache from starting with <PROTECT> errors
COPY ccontrol-wrapper.sh /usr/bin/
RUN cd /usr/bin \
 && rm ccontrol \
 && mv ccontrol-wrapper.sh ccontrol \
 && chmod 555 ccontrol

Here ccontrol-wrapper.sh should contain:
#!/bin/bash
# Work around a weird overlayfs bug where files don't open properly if they haven't been
# touched first - see the yum-ovl plugin for a similar workaround
if [ "${1,,}" == "start" ]; then
  find $ISC_PACKAGE_INSTALLDIR -name CACHE.DAT -exec touch {} \;
fi
/usr/local/etc/cachesys/ccontrol "$@"

You can use this command to check which storage driver Docker is using:
docker info --format '{{.Driver}}'
Here we declare that our image exposes the two standard Caché ports: 57772 for web connections and 1972 for binary connections.
EXPOSE 57772 1972
And finally, we specify how to run our container.
ENTRYPOINT ["/ccontainermain", "-cconsole", "-i", "ensemble"]
In the end our file should look like this:
FROM centos:6
MAINTAINER Dmitry Maslennikov <Dmitry.Maslennikov@csystem.cz>

# update OS + dependencies & run Caché silent install
RUN yum -y update \
 && yum -y install which tar hostname net-tools wget \
 && yum -y clean all \
 && ln -sf /usr/share/zoneinfo/Europe/Prague /etc/localtime

ARG password="Qwerty@12"
ARG cache=ensemble-2016.2.1.803.0

ENV TMP_INSTALL_DIR=/tmp/distrib

# vars for Caché silent install
ENV ISC_PACKAGE_INSTANCENAME="ENSEMBLE" \
    ISC_PACKAGE_INSTALLDIR="/opt/ensemble/" \
    ISC_PACKAGE_UNICODE="Y" \
    ISC_PACKAGE_CLIENT_COMPONENTS="" \
    ISC_PACKAGE_INITIAL_SECURITY="Normal" \
    ISC_PACKAGE_USER_PASSWORD=${password}

# set-up and install Caché from distrib_tmp dir
WORKDIR ${TMP_INSTALL_DIR}

ADD $cache-lnxrhx64.tar.gz .
# cache distributive
RUN ./$cache-lnxrhx64/cinstall_silent \
 && ccontrol stop $ISC_PACKAGE_INSTANCENAME quietly \
# Caché container main process PID 1 (https://github.com/zrml/ccontainermain)
 && curl -L https://github.com/daimor/ccontainermain/raw/master/distrib/linux/ccontainermain -o /ccontainermain \
 && chmod +x /ccontainermain \
 && rm -rf $TMP_INSTALL_DIR

WORKDIR ${ISC_PACKAGE_INSTALLDIR}

# TCP sockets that can be accessed if user wants to (see 'docker run -p' flag)
EXPOSE 57772 1972

ENTRYPOINT ["/ccontainermain", "-cconsole", "-i", "ensemble"]
Now we are ready to build the image. In the folder where you've placed the Dockerfile, execute the command:
docker build -t ensemble-simple .
You will see the whole process of building the image, from downloading the base image to installing Ensemble.
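When the build finishes, a quick way to confirm that the image was created is to list it:
docker images ensemble-simple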
To change the default password or the Caché build:
docker build --build-arg password=SuperSecretPassword -t ensemble-simple .
docker build --build-arg cache=ensemble-2016.2.1.803.1 -t ensemble-simple .
And we are ready to run this image with the command:
docker run -d -p 57779:57772 -p 1979:1972 ensemble-simple
Here 57779 and 1979 are the host ports which you can use to access the services inside our container.
docker ps
CONTAINER ID        IMAGE               COMMAND                   CREATED          STATUS          PORTS                                              NAMES
5f8d2cb3745a        ensemble-simple     "/ccontainermain -..."    18 seconds ago   Up 17 seconds   0.0.0.0:1979->1972/tcp, 0.0.0.0:57779->57772/tcp   keen_carson
This command shows all running containers with some details.
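If you want a Caché terminal inside the running container, you can attach one with docker exec. A sketch, using the container name from the docker ps output above and the instance name from the Dockerfile:
docker exec -it keen_carson csession ENSEMBLE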
You can now open http://localhost:57779/csp/sys/UtilHome.csp. Our new system is running and available.
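If the portal does not respond, the console output forwarded by ccontainermain can be inspected with docker logs, again using the container name from the example above:
docker logs keen_carson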
Sources can be found here on GitHub.
UPD: The next part of this article is already available here.
It's super satisfying being able to spin up full cache systems with a single command!
Have you got any strategies for persistence? Say I log in, add a cache user, edit some globals in USER and then notice Cache 2017.1 is out - how best to upgrade?
Stay tuned, I'm going to write about that next. In this article we built a basic image, which will serve as the FROM source for subsequent images containing our application.
Hey Dmitry,
A couple of us on my team have been experimenting with Windows containers. I see how easy it is to install Caché on Linux in a container, but what about in a Windows Server Core container? Any ideas on a direction for this?
Hi, I would not recommend thinking seriously about Caché in a Windows Server Core container; I think it is still too early.
Hi guys,
Thank you for the thread! Containers are here to stay; suffice to say that all public and most private cloud providers offer specific services just to support containers. However, we should look at them as a new system. There are many gotchas, but also many aspects about them that will help the way we work.
I have some comments
On the Dockerfile above we need to make sure we tell the full story and educate as appropriate as we are all learning here:
-Container layer OS: Running OS updates on every single triggered build from your CI/CD pipeline might not be the best thing to do if you truly want to know what you have running in production. It's better to ask for the exact OS version you desire in the FROM statement above. In general, getting a Docker image tagged "latest" is not such a great idea.
As a side effect, if you need a particular package installed, make sure it is one you know and pinpoint the exact package version you desire (isn't an automated delivery pipeline and infrastructure-as-code about knowing exactly what you have and having it all versioned?). Use:
$ apt-get install cowsay=3.03+dfsg1-6
-Provenance: we need to make sure we are getting the image we think we are getting. Man-in-the-middle attacks do happen, and organisations should make sure they are covered. Please investigate this step and look at techniques like asking for an image by hash (
docker pull debian@sha256:cabcde9b6166fcd287c1336f5....
) or, even better, Docker Notary if your image publisher has signed images.
-On Security: Watch those passwords in either Dockerfile definitions or env vars...
-Container image size: copying a tarball expands the storage layer Docker creates for the "ADD file" statement, and it's not contractable. One should try to use an FTP or HTTP server to download AND run all commands in a single statement (fetching the distribution, installing it, and removing unwanted files). That way you can shrink that single layer you're working on. Your code then should read something like:
RUN curl http://mysourcecodeRepo.com/file.tgz -o /file.tgz \
 && tar xzf /file.tgz \
 && ./myInstall.sh \
 && rm /file.tgz
-On Persistence: Obviously your data must be mounted on a volume. One of the nice things about containers is the clear demarcation between code and data. As far as Caché system data is concerned, like the cache.cpf file and %SYS, we are working on a solution that will make Caché a first-class citizen of the container world; it will be very easy to use, and upgrades will just work.
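A minimal sketch of that code/data separation, assuming the install directory from the article above and the default USER database location under it; the host path is hypothetical:
docker run -d -p 57772:57772 -p 1972:1972 \
  -v /data/ensemble/user:/opt/ensemble/mgr/user \
  ensemble-simple
Note that with a plain bind mount the host directory hides whatever was created at build time, so on first run you would need to seed it with the database file.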
HTH and thanks again for the thread!
Keep them coming :-)
This sounds perfect - can you possibly elaborate on details/timeframe?
We're rolling out Cache on Docker over the next few months, so it would be really nice to know to what extent we should roll our own infrastructure in the interim :)
"we are working on a solution", Sebastian :)
We are very pleased with further enhancements & improvements we have been making from the first internal version. The fundamental idea is that, while containers are ephemeral, databases are not and we want to assist in this area. While you can deal with your application database and make sure you mount your DBs on host volumes, there could be an issue as soon as you use %SYS. You use %SYS when you define your z* | Z* routines and/or globals or when you define user credentials, etc. In this case, right now, you'd have to create an export and an import process which would not be elegant, nor would it help to be agile. What we will offer is a very easy-to-use facility by which you can define where %SYS will reside on the host... and all using canonical Docker syntax. Stay tuned...
Is there a license model that supports containers and/or microservices?
@Herman
You have the option to elect license servers for cooperating instances as per documentation.
HTH
AFAIK, Docker encourages implementing microservices. What about classic Caché-based app deployment; can Docker be useful in that case? By classic I mean a system accessible via several services (ActiveX, TCP sockets, SOAP and REST), multi-tasking, etc., with an initial 1-2 GB database inside. At the moment I have to choose a solution for rolling out several dozen rather small systems of this kind. Should I look at Docker technology in this particular case?
I am bound to ver. 2015.1.4. The host OS will most likely be Windows 2008/2012, while I'd prefer to deploy our system on Linux; that's why I am searching for a lightweight way to virtualize or containerize it. Thank you!
Alexey, yes, it is very easy to put a small application in just one container. I don't see any problem with using any of the supported technologies as well. Some load balancing from Docker is even possible, which is also good for microservices. I'm going to show an example with an application in the next article soon.
Thanks, Dimitri.
Alexey: Docker does not encourage anything aside from using its container technology and its EE (Enterprise Edition and cloud, to monetise their effort) :-) However, containers in general help and fit very well into a microservices-type architecture. Please note that you can create a microservices architecture without containers, via VM-, jar- or war-based solutions with a different type of engine; containers just lend themselves to it more naturally.
It's worth pointing out that just because people talk about 1 process per container, it does not preclude you from using multiple processes in each container. You could naturally have 3 sockets open, for example, 57772, 1972 and 8384, all serving different purposes + various background processes (think of our WD & GC) and still be within the boundaries of a microservice definition with a clear "bounded context". For more info on microservices you might want to read Martin Fowler's microservices article and books like Building Microservices by Sam Newman or Production-Ready Microservices by Susan J. Fowler. Also you should check out Domain-Driven Design by Eric Evans, where "bounded contexts" and similar concepts like context, distillation and large-scale structures are dealt with in much more depth.
On the 2GB Database inside the container, I would advise against it. In general one of the advantages of containers is the clear separation of concerns between code and data. Data should reside on a host volume, while you're free to swap containers at will to test & use the new version of your application. This should be an advantage if you use a CI/CD provisioning pipeline for your app.
Having said all that, it depends on your specific use case. If I want my developers to all have a std data-set to work against, then I might decide that their container environments do indeed feature a CACHE.DAT. Our Learning Services department has been using containers for two years now, and every course you take on-line runs on Docker containers. As I said, it depends on the use-case.
In reference to your last comment: right now (March 2017) I would not use Windows for a container solution. Although Microsoft has been working closely with Docker for over a year, containers are born out of a sandboxing effect just above the Linux kernel (see kernel namespaces and cgroups).
HTH for now.
Thanks, Dmitry and Luca.
Meanwhile I read some docs and interviewed colleagues who have more experience with Docker than me (though without Caché inside). What I've got out of it: Docker doesn't fit well for this particular case of mine, which is mostly about deployment of an already developed app rather than new development. The reasons are:
- Only one containerized app on the client's server (our case) doesn't bring as many benefits as several would;
- Windows issues which Luca already mentioned;
- I completely agree that "data should reside on a host volume..." with only one remark: most likely all this stuff will be maintained at the client's site by not very skilled IT personnel. It seems that in the case of Docker/host volume/etc., configuring it will be more complex than rolling out a Caché for Windows installation with all possible settings prepared by a %Installer-based class.
@Alexey
It sounds like, as you say, having to deploy on-site might not be the best use case, if I understand correctly.
If they use virtualization, wouldn't it be easier for you guys to deploy a VM that you prepare at your site and just mount the FS/DB they have? That way you'd still have the privilege of running and having guarantees on your build process vs having to do it all on-site.
Just a thought.
All the best
Luca,
It was clear that we could use VMs from the very beginning of the project, and maybe we'll take this approach in the end. I've just looked at the Docker side, hoping to find a lighter/more reliable alternative.
Thank you again.
Hi, Dmitry!
What should I do if I want to use "prepared" docker image?
E.g. not the standard installation, but an image with some 3rd-party community software installed and set up, like WebTerminal, ClassExplorer, MDX2JSON and DeepSeeWeb, cache-udl, Cache REST-Forms, etc...
Stay tuned, I'm going to show how to do it in the next part.
The continuation of this article is already available.
Tried this with Caché 2017.1.2.217.0 but can't get it working.
Building gives no error, but when I run the image:
> docker run -i -p 57772:57772 -p 1972:1972 -t cache-simple
2018/02/14 08:02:24 Starting Caché...
2018/02/14 08:02:24 Seeked /opt/cache//mgr/cconsole.log - &{Offset:0 Whence:2}
2018/02/14 08:02:24 Something is preventing Caché from starting in multi-user mode,
2018/02/14 08:02:24 You might want to start the container with the flag -cstart=false to fix it.
2018/02/14 08:02:24 Error: Caché was not brought up successfully.
and the image exits immediately.
Built it on ubuntu:xenial as well, but the same problem there.
This error will happen on Windows and macOS because Docker uses the overlay driver there, which is not supported. You should configure the Docker daemon to use aufs as the storage driver instead.
This screenshot is from macOS; on Windows it is a bit different, but either way you should choose Daemon and switch to advanced mode, where you can edit the daemon settings in JSON format. Just add
"storage-driver": "aufs"
and Apply the changes. The daemon will restart, and after that it should work.
I get the following message when trying to build the image:
Step 9/13 : ADD $ensemble-2016.2.3.903.6-lnxrhx64.tar.gz .
lstat -2016.2.3.903.6-lnxrhx64.tar.gz: no such file or directory
not sure why
It looks like you made some changes in your Dockerfile compared to my example, and there is a mistake there. Can you share your Dockerfile, so I can check it?
In my example I have the line
ADD $cache-lnxrhx64.tar.gz .
where $cache is a variable defined a few lines above. When the build runs, it is replaced with its value. But in your case I see $ensemble, and I am sure you don't have such a variable, which leads to the error.
Dockerfile below. I have now changed it to reflect yours, and for some strange reason it installed the software rather than just creating the image. After it finished the install I ran the docker images command, and I can now see the image in the repository, but when I run
docker run -d -p 57779:57772 -p 1979:1972 ensemble-simple
All I get is a very long alphanumeric entry
When I run docker ps
the entry is blank, so it looks like Ensemble isn't running
That happens because you still have $ensemble; replace all occurrences, and everything will be okay.
Hi Dmitry
I was in the middle of editing the response when you replied.
when I run
docker run -d -p 57779:57772 -p 1979:1972 ensemble-simple
All I get is a very long alphanumeric entry
When I run docker ps
the entry is blank, so it looks like Ensemble isn't running
This behaviour is expected if you use the default Docker configuration; look at the mention of aufs above in the comments.
But you should use Docker version 18.06, because newer versions have already removed support for AUFS, and InterSystems does not work on the overlay driver, which is used by default in Docker.
When your container does not run and you cannot find it in the running state, you can use the command
docker ps -a
which will show all existing containers in any state. Then you can look at the logs of a particular container:
docker logs <container id>
the latest version of docker for redhat I can find is 1.13.1-58.git87f2fab.el7 and not v18.06
Looking further into it, v18.06 is for Docker CE; is this the same as what v1.13.1-58.git87f2fab.el7 is for Red Hat?
The storage driver at the moment is overlay2; how do I go about changing it to aufs?
I've looked on the internet, but it always seems the instructions are for going from aufs to something else and not vice versa.
This is the current state of the install after running the docker run command:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ensemble-simple latest f0f078fe077e 52 minutes ago 3.53 GB
docker.io/centos 6 b5e5ffb5cdea 7 weeks ago 194 MB
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
729f26974733 ensemble-simple:latest "/ccontainermain -..." 29 mins Exited (1) 29 mins ago gifted_nobel
c3d623142529 ensemble-simple:latest "/ccontainermain -..." 29 mins Exited (1) 29 mins ago mystifying_williams
dd02951364cf ensemble-simple "/ccontainermain -..." 49 mins Exited (1) 49 mins ago affectionate_liskov
0eba399201a3 2f6ded0c1a4b "/bin/sh -c '#(nop..." About an hour ago Created happy_tesla
If you are working on Linux, you can try devicemapper as the storage driver, which is also supported by InterSystems.
At Ionate, we run plenty of enterprise applications and devicemapper works better. Please keep in mind that the Docker storage default for devicemapper is 10 GB.
You can easily change it:
Docker in devicemapper and 50GB dm.size
/etc/docker/daemon.json
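A sketch of what /etc/docker/daemon.json could look like for that; dm.basesize is the devicemapper option that controls the base device size, and 50G here is just an example value:
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.basesize=50G"
  ]
}
Restart the Docker daemon after changing this file.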
@Dmitry - Do you know how licensing and support from InterSystems work for Caché on Linux? Also, have you seen any performance issues?
Thanks,
InterSystems has licenses for various Linux systems, so they can offer a license for it, and they have a license type specifically for the Docker version based on Ubuntu.
I don't see any performance issues, but mostly because I don't have very big projects yet and don't use it in production. My current use cases are only CI/CD.
Getting the following error when trying to run the container; any ideas on how to resolve this?
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aaa854bd8d81        cache-docker        "/ccontainermain -cc…"   25 minutes ago      Exited (1) 25 minutes ago                       modest_ritchie
# docker logs aaa854bd8d81
standard_init_linux.go:190: exec user process caused "exec format error"
FROM registry.access.redhat.com/rhel7/rhel
MAINTAINER Garth Cordery <garth.cordery@notmyaddress.com.au>
# update OS + dependencies & run Caché silent install
RUN yum-config-manager --add-repo http://localrepo/rhel-7-server-rpms/ \
&& yum -y --nogpgcheck update \
&& yum -y --nogpgcheck install which tar hostname net-tools wget \
&& yum -y clean all \
&& ln -sf /etc/locatime /usr/share/zoneinfo/Australia/Brisbane
ARG password="Password"
ARG cache=cache-2018.1.0.184.0
ENV TMP_INSTALL_DIR=/tmp/distrib
# vars for Caché silent install
ENV ISC_PACKAGE_INSTANCENAME="cache" \
ISC_PACKAGE_INSTALLDIR="/usr/chachesys/" \
ISC_PACKAGE_UNICODE="Y" \
ISC_PACKAGE_CLIENT_COMPONENTS="" \
ISC_PACKAGE_INITIAL_SECURITY="Normal" \
ISC_PACKAGE_USER_PASSWORD=${password}
# set-up and install Caché from distrib_tmp dir
WORKDIR ${TMP_INSTALL_DIR}
ADD $cache-lnxrhx64.tar.gz .
# cache distributive
RUN ./$cache-lnxrhx64/cinstall_silent \
&& ccontrol stop $ISC_PACKAGE_INSTANCENAME quietly \
# Caché container main process PID 1 (https://github.com/zrml/ccontainermain)
&& export https_proxy=http://proxy.com.au:3128 \
&& curl -L https://github.com/daimor/ccontainermain/raw/master/distrib/linux/cconta... -o /ccontainermain \
&& chmod +x /ccontainermain \
&& rm -rf $TMP_INSTALL_DIR
WORKDIR ${ISC_PACKAGE_INSTALLDIR}
# TCP sockets that can be accessed if user wants to (see 'docker run -p' flag)
EXPOSE 57772 1972
ENTRYPOINT ["/ccontainermain", "-cconsole", "-i", "cache"]
Interesting, it looks like such an error is to be expected for RedHat. I don't have any RedHat subscription, but I managed to build an image with RedHat using the CentOS repo, and in my case it works without any errors. Maybe you can contact me directly and send your image, so I can check it?
My differences from your Dockerfile
and I used the latest version of ccontainermain from the releases page; maybe this version will work better for you as well.
Thanks for the advice Dmitry, the latest version of ccontainermain sorted the issue. I did need to use the --privileged flag when running it so that memory tuning would work.
Thanks again for the quick response.
Garth
@Dmitry Maslennikov
We have been attempting to create this ourselves, but we're trying to shrink the size of our build images, and one of the ways we are attempting to accomplish this is to stream the install file from the WRC site instead of hosting it locally and/or ADD-ing it to the image.
We're using code like this:
Does this method still work for you?
Craig,
Try this
I followed the directions above and was able to get cache 2017 running in docker. However, the management portal is not responding.
http://localhost:9092/csp/sys/UtilHome.csp - Just spins, but
http://localhost:9092/csp/bin/Systems/Module.cxw - Works and
http://localhost:9092/csp/sys/gateway_status.cxw - returns SUCCESS.
Any suggestions?
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fa57f1ae6086 cacheimage "/ccontainermain -cc…" 9 minutes ago Up 9 minutes 23/tcp, 4001/tcp, 18001/tcp, 19201/tcp, 0.0.0.0:2222->22/tcp, :::2222->22/tcp, 0.0.0.0:9091->1972/tcp, :::9091->1972/tcp, 0.0.0.0:9092->57772/tcp, :::9092->57772/tcp cache
docker run --name=cache --publish 9091:1972 --publish 9092:57772 -p 2222:22 -v /Users/user/dat:/opt/cache/dat cacheimage
Could you check with the image daimor/intersystems-cache:2017.2 ?
Dockerfile is available here
docker run with the above image works, but I cannot build with the referenced Dockerfile. I only have access to a Caché 2017.1 install. Could that be the issue? I will see if I can get 2017.2.
For 2017.1 the image is daimor/intersystems-cache:2017.1
and Dockerfile
https://github.com/daimor/docker-intersystems/blob/2017.1/Dockerfile
there are images for any version from 2014.1 to 2018.1