
Saving costs with a new scheduler in Cloud Foundry Diego


In the Mendix Cloud we run thousands of Mendix apps on Cloud Foundry on AWS. The Mendix Runtime Engines currently run in 2, 4, 8 or 16 GB memory containers. Mendix developers can start, stop, scale and upload new versions of their app themselves via our Developer Portal.

This means we must always have diego-cell instances with at least 16 GB of memory available, so that a Mendix developer can start their 16 GB memory Runtime Engine at any time.

We found that the way Diego schedules LRPs (Long-Running Processes) on diego-cell EC2 instances is not optimal for our use case. Diego has only one scheduling algorithm: in a nutshell, app instances (LRPs) get deployed to the diego-cell with the most resources available, so app instances get balanced equally across diego-cell instances.

Nima and Jen gave a really nice presentation at the last Cloud Foundry Summit in The Hague about how scheduling in Cloud Foundry works.

Let’s say you have a number of AWS EC2 m5.4xlarge (64 GB memory) diego-cell instances. At some point all diego-cell instances are filled up equally and each has about 16 GB memory available. When that drops to 14~15 GB, we have to add additional diego-cell instances to keep supporting the deployment of 16 GB memory Mendix Runtime Engines. But when deploying more app instances (LRPs) after scaling up, they get scheduled to the new diego-cell instances, even when they are 2, 4 or 8 GB app instances, until all diego-cell instances have ~16 GB available again.

In practice it looks like this (20 diego-cell instances, 64 GB memory):

Graph: Remaining Memory (per diego-cell)

Result: 25% of the memory of our diego-cell instances is unused, wasted.

Now we could scale up to AWS EC2 m5.8xlarge (128 GB memory), so we only waste 12.5%, but at some point we also want to support app instances with 32 GB memory.

We have looked into isolation segments, for example having an isolation segment per app instance size. Unfortunately that does not work for us. Mendix developers don’t notice this because it’s abstracted away for them, but they run different app instance sizes in one “Org”, and “isolation segments” apply to an “Org”.

The quest for a new scheduling algorithm

I’ve been looking at this inefficient usage of resources for quite a while now. I had already investigated how the scheduling algorithm in Diego works before Nima and Jen gave their presentation. During the Cloud Foundry Summit I had a chat with Nima about whether it would make sense to invest time in adding to or changing the scheduler in Diego. Project Eirini, where app instances run on Kubernetes, was close to a version 1.0 release. Kubernetes is more flexible with scheduling algorithms, so that could solve our issue as well.

First I thought: “No, let’s wait for Eirini.” But it would probably still take a year before we would migrate to Eirini in production. Having an improved scheduler in Diego would mean a cost saving for us right now.

Goal of the new scheduling algorithm

Mendix apps are memory heavy. In a shared environment, running many Mendix Runtime Engines on one Cloud Foundry diego-cell instance, we notice that there are more than enough CPU resources available. Mendix developers mainly scale up their app by adding more memory (or adding more instances). So in our case we want to fill up diego-cell instances as much as possible.

How scheduling LRPs in Diego works technically

As Nima explained in the presentation, the scheduler decides where to deploy an app instance (LRP) based on a score that each diego-cell instance provides. The lowest score wins.

It basically drills down to:

Score = ((Memory + Disk + Containers) / 3) + StartingContainers + Locality

  • Memory: fraction of memory in use
  • Disk: fraction of disk in use
  • Containers: fraction of the container slots in use (max 256 containers per diego-cell)
  • StartingContainers: number of starting containers x weight (usually 0.25)
  • Locality: 1000 when the diego-cell is already hosting an instance of the same app

For example:

((0.5 + 0.5 + 0.39) / 3) + 0.25 + 0 = 0.7133

  • Memory: the diego-cell has 50% of its memory in use
  • Disk: the diego-cell has 50% of its disk in use
  • Containers: the diego-cell runs 100 containers (100/256 ≈ 0.39)
  • StartingContainers: there is currently 1 container starting (1 x 0.25)
  • Locality: this diego-cell does not run an instance of the same app
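
To double-check this example on the command line (a quick sketch using bc; note that 100/256 ≈ 0.3906, so the result comes out a fraction higher than the rounded 0.7133 above):

$ echo "scale=4; (0.5 + 0.5 + 100/256)/3 + 1*0.25 + 0" | bc
.7135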

The idea: Bin Pack First Fit Weight

Scaling the number of diego-cell instances up and down is based on the index number BOSH assigns to each instance. When you add one diego-cell instance and then remove one, the instance that was just created is the one that gets removed.

What if we could make a diego-cell more attractive to deploy to based on its index number? That way diego-cell instances with a lower index number get filled up first, as long as they have enough resources available. This could be called Bin Pack First Fit.

The index number can be displayed using the “bosh instances” command:

$ bosh -d cf instances -i --column=instance --column=index
Instance                                                    Index
[..]
diego-cell/0342c42b-756e-4951-8280-495261e38f53            	0	
diego-cell/16be34ce-bd34-4837-8431-51f6bc4a0fa8            	1	
diego-cell/e3bec1d3-0899-4502-9f43-4049f53721b1            	2	
diego-cell/2581addf-4f08-421e-ab9d-c52772f50315            	3	
[..]

As with “StartingContainers”, we could add some weight to the total score based on the index number a diego-cell instance has. This way it is also still possible to completely disable the Bin Pack First Fit weight component in the algorithm by setting the weight to 0 and keep the existing algorithm Diego currently has.

It will work like this:

Score = ((Memory + Disk + Containers) / 3) + StartingContainers + Locality + Index

  • Memory: fraction of memory in use
  • Disk: fraction of disk in use
  • Containers: fraction of the container slots in use (max 256 containers per diego-cell)
  • StartingContainers: number of starting containers x weight (usually 0.25)
  • Locality: 1000 when the diego-cell is already hosting an instance of the same app
  • Index: BOSH index number x weight

Let’s take the previous example, assume all diego-cell instances are filled up equally and add an index weight of 0.25:

  • diego-cell 0: ((0.5 + 0.5 + 0.39) / 3) + 0.25 + 0 + (0*0.25) = 0.7133
  • diego-cell 1: ((0.5 + 0.5 + 0.39) / 3) + 0.25 + 0 + (1*0.25) = 0.9633
  • diego-cell 2: ((0.5 + 0.5 + 0.39) / 3) + 0.25 + 0 + (2*0.25) = 1.2133
  • diego-cell 3: ((0.5 + 0.5 + 0.39) / 3) + 0.25 + 0 + (3*0.25) = 1.4633

In this case the next app instance will be deployed to diego-cell 0. Exactly what we want. The weight, 0.25 currently, can be increased to make diego-cell instances with a lower BOSH index number even more attractive.
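
The same calculation can be scripted as a quick sanity check (again using bc; because 100/256 ≈ 0.3906 rather than the rounded 0.39, the values come out a fraction above the numbers listed above):

$ for i in 0 1 2 3; do echo "diego-cell $i: $(echo "scale=4; (0.5 + 0.5 + 100/256)/3 + 0.25 + 0 + $i*0.25" | bc)"; done
diego-cell 0: .7135
diego-cell 1: .9635
diego-cell 2: 1.2135
diego-cell 3: 1.4635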

A Proof of Concept

As a Proof of Concept the above has been developed in Diego: https://github.com/pommi/diego-release/tree/bin-pack-first-fit

To test the updated scheduling algorithm, while this is not part of the official diego-release (yet), we create a custom diego-release and use that in our Cloud Foundry setup.

NOTE: this diego-release is based on diego-release v2.34 (cf-deployment v9.5)

git clone --recurse-submodules --branch bin-pack-first-fit https://github.com/pommi/diego-release.git
cd diego-release
bosh --sha2 cr --timestamp-version --tarball=diego-release-bin-pack-first-fit-v2.34.0-5-g0b5569154.tgz --force

Upload diego-release-bin-pack-first-fit-v2.34.0-5-g0b5569154.tgz somewhere online and create an ops file to deploy this diego-release version instead of the default one:

- type: replace
  path: /releases/name=diego
  value:
    name: diego
    url: https://<your-domain.tld>/diego-release-bin-pack-first-fit-v2.34.0-5-g0b5569154.tgz
    sha1: <sha1 of diego-release-bin-pack-first-fit-v2.34.0-5-g0b5569154.tgz>
    version: <version from the `bosh --sha2 cr` command>
- type: replace
  path: /instance_groups/name=scheduler/jobs/name=auctioneer/properties/diego/auctioneer/bin_pack_first_fit_weight?
  value: 0.25
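
Assuming the ops file is saved as use-bin-pack-first-fit.yml (an illustrative name), it can then be applied on top of cf-deployment together with whatever ops files and variables you already use, for example:

bosh -d cf deploy cf-deployment.yml -o use-bin-pack-first-fit.yml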

The result: Weighted Bin Pack First Fit

The result is actually pretty amazing 🙂 (15 diego-cell instances, 128 GB memory):

Graph: Remaining Memory (per diego-cell)

This graph shows a 48-hour period in which the deployment pattern of Mendix app instances is equal to that of the previous graph. It is clearly noticeable that the added “Bin Pack First Fit Weight” has an impact: app instances (LRPs) are no longer spread equally. In this case we could remove 2 or 3 diego-cell instances, while still keeping at least 2 to 3 diego-cell instances with 16 GB memory available. 😀

And the cost saving? An AWS On-Demand EC2 m5.4xlarge instance costs around $18.432 per day in AWS region us-east-1. Let’s say you run 100 diego-cell instances in total and you could now remove 20 to 25 of them, while keeping 16 GB memory available on at least a couple of diego-cell instances. That is a saving of $368.64~$460.80 per day, or $134,553.60~$168,192.00 per year, in On-Demand EC2 costs. 😎 (With Reserved or Spot Instances the saving is of course smaller.)
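
A quick check of that arithmetic for the lower bound (20 instances at $18.432 per day, over 365 days); the 25-instance case works out the same way:

$ echo "20 * 18.432" | bc
368.640
$ echo "20 * 18.432 * 365" | bc
134553.600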

We’re hiring

Want to join our team and work on cool stuff like this?
Apply for a job at Mendix.

Upgrade Oracle Java without interrupting a Mendix App

In the “Mendix Cloud” we are hosting thousands of Mendix Apps. All these Apps are running on top of the Oracle Java Runtime Environment (JRE) in Debian Linux environments. We use java-package to package the Oracle JRE to be able to easily redistribute it to all our servers.
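
For reference, packaging an Oracle JRE tarball with java-package comes down to running make-jpkg as a regular (non-root) user; the tarball name below is just an example:

$ make-jpkg jre-8u45-linux-x64.tar.gz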

After packaging and putting the Debian package in our local apt repository, the Oracle JRE can easily be installed via apt-get.

# apt-get install oracle-java8-jre

When there is an update available of the Oracle JRE, we again package the new version and put it in our local apt repository. The update will now be available to all our Debian Linux environments.

# apt-get -V upgrade
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  oracle-java8-jre (8u40 => 8u45)
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 39.4 MB of archives.
After this operation, 26.6 kB of additional disk space will be used.
Do you want to continue [Y/n]?

But wait… apt doesn’t warn you about it, but do you remember these screens from Windows or Mac OS X?

Screenshots: Java update restart prompts on Windows and Mac OS X

This doesn’t mean that it doesn’t apply to Linux. 😉 On Linux it is also required to restart all Java processes. In the case of an Oracle JRE update this meant that we had to plan maintenance windows and restart all Mendix Apps while rolling out the update.

A new approach

It would be much nicer if we could roll out updates without thinking about the Mendix Apps that are currently using the installed Java version. In the Linux universe this is not an unfamiliar issue. Look for example at the Linux kernel. The Linux kernel that is currently running also cannot be replaced or uninstalled; you would run into all kinds of issues regarding kernel modules and libraries that have been changed or removed. Therefore the packaging system keeps the last X Linux kernels installed, including the one that is currently running.

Since Debian 8.0 (Jessie) the apt package (since version 0.9.9.1) contains the file “/etc/kernel/postinst.d/apt-auto-removal”. This file is executed after the installation (during “postinst”) of each “linux-image*” package. The “apt-auto-removal” script lists all installed kernels and creates an “APT::NeverAutoRemove” list in “/etc/apt/apt.conf.d/01autoremove-kernels” containing the 3 most recent versions plus the one that is currently in use. “linux-image*” packages that are not on that list may be “AutoRemoved”.

For the Oracle JRE we can use exactly the same procedure. There are a few requirements:

  1. java-package needs to create versioned packages so we can install multiple versions at the same time.
  2. The oracle-java8uXX-jre package must run an apt-auto-removal script after installation to update an APT::NeverAutoRemove list.
  3. The apt-auto-removal script needs to be in a separate package, because it’s already required on installation of an oracle-java8uXX-jre package.
  4. We need an oracle-java8-jre-latest dependency package that installs the latest oracle-java8uXX-jre package. This also ensures that oracle-java8uXX-jre is marked as automatically installed, so it can be removed using apt-get autoremove once it is no longer on the APT::NeverAutoRemove list (a sketch of such a metapackage follows below).
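
For illustration, the control stanza of such a latest-metapackage could look roughly like this (a sketch only; 8u45 is just an example version):

Package: oracle-java8-jre-latest
Architecture: all
Depends: oracle-java8u45-jre
Description: Oracle Java 8 JRE (latest) dependency package
 Installing this package always pulls in the most recent
 oracle-java8uXX-jre package and marks it as automatically installed.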


Versioned packages with java-package

java-package needed to be patched to produce versioned packages. Instead of “oracle-java8-jre” we needed to have “oracle-java8uXX-jre” where XX is the update version number, for example “oracle-java8u45-jre“.

Besides the package name, the package content needed to be installed in a different place. With “oracle-java8-jre” all files are installed in “/usr/lib/jvm/jre-8-oracle-x64/“. This needed to change to “/usr/lib/jvm/jre-8uXX-oracle-x64/“.

Changing 4 lines of bash gave the expected result (github.com/mendix/java-package):

diff --git a/lib/jdk.sh b/lib/jdk.sh
index cd41772..bc981e1 100644
--- a/lib/jdk.sh
+++ b/lib/jdk.sh
@@ -57,8 +57,8 @@ j2sdk_run() {
     echo
     diskfree "$j2se_required_space"
     read_maintainer_info
-    j2se_package="$j2se_vendor-java$j2se_release-jdk"
-    j2se_name="jdk-$j2se_release-$j2se_vendor-$j2se_arch"
+    j2se_package="$j2se_vendor-java${j2se_release}u$j2se_update-jdk"
+    j2se_name="jdk-${j2se_release}u$j2se_update-$j2se_vendor-$j2se_arch"
     local target="$package_dir/$j2se_name"
     install -d -m 755 "$( dirname "$target" )"
     extract_bin "$archive_path" "$j2se_expected_min_size" "$target"
diff --git a/lib/jre.sh b/lib/jre.sh
index ecd6d41..b209fcb 100644
--- a/lib/jre.sh
+++ b/lib/jre.sh
@@ -42,8 +42,8 @@ j2re_run() {
     echo
     diskfree "$j2se_required_space"
     read_maintainer_info
-    j2se_package="$j2se_vendor-java$j2se_release-jre"
-    j2se_name="jre-$j2se_release-$j2se_vendor-$j2se_arch"
+    j2se_package="$j2se_vendor-java${j2se_release}u$j2se_update-jre"
+    j2se_name="jre-${j2se_release}u$j2se_update-$j2se_vendor-$j2se_arch"
     local target="$package_dir/$j2se_name"
     install -d -m 755 "$( dirname "$target" )"
     extract_bin "$archive_path" "$j2se_expected_min_size" "$target"

Now we were able to install multiple Oracle JRE versions alongside each other. I thought it was also nice to have a “/usr/bin/java8” symlink, which always points to the latest version. This was also easily implemented:

diff --git a/lib/oracle-jdk.sh b/lib/oracle-jdk.sh
index adb3dc2..bdd2b91 100644
--- a/lib/oracle-jdk.sh
+++ b/lib/oracle-jdk.sh
@@ -124,6 +124,10 @@ fi
 install_no_man_alternatives $jvm_base$j2se_name/jre/lib $oracle_jre_lib_hl
 install_alternatives $jvm_base$j2se_name/bin $oracle_bin_jdk
 
+if [[ -f "$jvm_base$j2se_name/bin/java" ]]; then
+    update-alternatives --install "/usr/bin/java$j2se_release" "java$j2se_release" "$jvm_base$j2se_name/bin/java" $j2se_priority
+fi
+
 # No plugin for ARM architecture yet
 if [ "${DEB_BUILD_ARCH:0:3}" != "arm" ]; then
 plugin_dir="$jvm_base$j2se_name/jre/lib/$DEB_BUILD_ARCH"
@@ -148,6 +152,8 @@ fi
 remove_alternatives $jvm_base$j2se_name/jre/lib $oracle_jre_lib_hl
 remove_alternatives $jvm_base$j2se_name/bin $oracle_bin_jdk
 
+update-alternatives --remove "java$j2se_release" "$jvm_base$j2se_name/bin/java"
+
 # No plugin for ARM architecture yet
 if [ "${DEB_BUILD_ARCH:0:3}" != "arm" ]; then
 plugin_dir="$jvm_base$j2se_name/jre/lib/$DEB_BUILD_ARCH"
diff --git a/lib/oracle-jre.sh b/lib/oracle-jre.sh
index 3958ea7..fcc2287 100644
--- a/lib/oracle-jre.sh
+++ b/lib/oracle-jre.sh
@@ -96,6 +96,10 @@ install_alternatives $jvm_base$j2se_name/bin $oracle_jre_bin_jre
 install_no_man_alternatives $jvm_base$j2se_name/bin $oracle_no_man_jre_bin_jre
 install_no_man_alternatives $jvm_base$j2se_name/lib $oracle_jre_lib_hl
 
+if [[ -f "$jvm_base$j2se_name/bin/java" ]]; then
+    update-alternatives --install "/usr/bin/java$j2se_release" "java$j2se_release" "$jvm_base$j2se_name/bin/java" $j2se_priority
+fi
+
 plugin_dir="$jvm_base$j2se_name/lib/$DEB_BUILD_ARCH"
 for b in $browser_plugin_dirs;do
     install_browser_plugin "/usr/lib/\$b/plugins" "libjavaplugin.so" "\$b-javaplugin.so" "\$plugin_dir/libnpjp2.so"
@@ -114,6 +118,8 @@ remove_alternatives $jvm_base$j2se_name/bin $oracle_jre_bin_jre
 remove_alternatives $jvm_base$j2se_name/bin $oracle_no_man_jre_bin_jre
 remove_alternatives $jvm_base$j2se_name/lib $oracle_jre_lib_hl
 
+update-alternatives --remove "java$j2se_release" "$jvm_base$j2se_name/bin/java"
+
 plugin_dir="$jvm_base$j2se_name/lib/$DEB_BUILD_ARCH"
 for b in $browser_plugin_dirs;do
     remove_browser_plugin "\$b-javaplugin.so" "\$plugin_dir/libnpjp2.so"

And the last part regarding java-package was to execute “/etc/oracle-java/postinst.d/apt-auto-removal” after installation:

diff --git a/lib/oracle-jre.sh b/lib/oracle-jre.sh
index fcc2287..ebebb1f 100644
--- a/lib/oracle-jre.sh
+++ b/lib/oracle-jre.sh
@@ -104,6 +104,10 @@ plugin_dir="$jvm_base$j2se_name/lib/$DEB_BUILD_ARCH"
 for b in $browser_plugin_dirs;do
     install_browser_plugin "/usr/lib/\$b/plugins" "libjavaplugin.so" "\$b-javaplugin.so" "\$plugin_dir/libnpjp2.so"
 done
+
+if [ -d "/etc/oracle-java/postinst.d" ]; then
+    run-parts --report --exit-on-error --arg=$j2se_vendor-java${j2se_release}u$j2se_update-jre /etc/oracle-java/postinst.d
+fi
 EOF
 }

apt-auto-removal and APT::NeverAutoRemove

To generate the “APT::NeverAutoRemove” list, we’ve taken the “apt-auto-removal” script from the apt package and modified it to support oracle-java packages:

#!/bin/sh
set -e

# Author: Pim van den Berg <pim.van.den.berg@mendix.com>
#
# This is a modified version of the /etc/kernel/postinst.d/apt-auto-removal
# script from the apt package to mark kernel packages as NeverAutoRemove.
#
# Mark as not-for-autoremoval those oracle-java packages that are currently in use.
#
# We generate this list and save it to /etc/apt/apt.conf.d instead of marking
# packages in the database because this runs from a postinst script, and apt
# will overwrite the db when it exits.

eval $(apt-config shell APT_CONF_D Dir::Etc::parts/d)
test -n "${APT_CONF_D}" || APT_CONF_D="/etc/apt/apt.conf.d"
config_file=${APT_CONF_D}/01autoremove-oracle-java

eval $(apt-config shell DPKG Dir::bin::dpkg/f)
test -n "$DPKG" || DPKG="/usr/bin/dpkg"

if [ ! -e /bin/fuser ]; then
	echo "WARNING: /bin/fuser is missing, could not generate reliable $config_file"
	exit
fi

java_versions=""

for java_binary in /usr/lib/jvm/*/bin/java; do
	if /bin/fuser $java_binary > /dev/null 2>&1; then
		java_versions="$java_versions
$(dpkg -S $java_binary | sed 's/: .*//')"
	fi
done

versions="$(echo "$java_versions" | sort -u | sed -e 's#\.#\\.#g' )"

generateconfig() {
	cat <<EOF
// DO NOT EDIT! File autogenerated by $0
APT::NeverAutoRemove
{
EOF
	for version in $versions; do
		echo "   \"^${version}$\";"
	done
	echo '};'
}
generateconfig > "${config_file}.dpkg-new"
mv "${config_file}.dpkg-new" "$config_file"

The “apt-auto-removal” script for Java goes through all “/usr/lib/jvm/*/bin/java” files and checks whether they are in use, using the “/bin/fuser” command. When a java binary is in use, the package it belongs to is added to the “APT::NeverAutoRemove” list. This list is written to “/etc/apt/apt.conf.d/01autoremove-oracle-java”.
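
For example, on a server where a Mendix App is still running on the 8u40 JRE, the generated file would look something like this (the package name follows the versioned naming scheme introduced above):

// DO NOT EDIT! File autogenerated by /etc/oracle-java/postinst.d/apt-auto-removal
APT::NeverAutoRemove
{
   "^oracle-java8u40-jre$";
};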

Great improvement 😀

That’s it. We are now able to upgrade Oracle Java while the Mendix App keeps running. Once the Mendix App is stopped and started again by the customer, it will start using the new version of Java. Once another new Oracle Java update is installed, or the “apt-auto-removal” script is run, the “APT::NeverAutoRemove” list is updated. After that, the Oracle Java version that was in use by the Mendix App before it was stopped can be “AutoRemoved”. 😀
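
With all of this in place, rolling out a Java update on a server boils down to the following (assuming the oracle-java8-jre-latest metapackage described above); versions that are still in use stay installed, anything else can be cleaned up:

# apt-get update
# apt-get install oracle-java8-jre-latest
# apt-get autoremove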

Mendix shipped in a Docker container

Imagine… Imagine if you could set up a new Mendix hosting environment in seconds, everywhere. A lightweight, secure and isolated environment where you just have to talk to a RESTful API to deploy your MDA (Mendix Deployment Archive) and start your App.

In the second quarter of this year a great piece of software that helps achieve this goal became very popular: Docker. Docker provides a high-level API on top of Linux Containers (LXC), a lightweight virtualization solution that runs processes in isolation.


tl;dr

Run a Mendix App in a Docker container in seconds:

root@host:~# docker run -d mendix/mendix
root@host:~# curl -XPOST -F model=@project.mda http://172.17.0.5:5000/upload/
File uploaded.
root@host:~# curl -XPOST http://172.17.0.5:5000/unpack/
Runtime downloaded and Model unpacked.
root@host:~# curl -XPOST -d "DatabaseHost=172.17.0.4:5432" -d "DatabaseUserName=docker" -d "DatabasePassword=docker" -d "DatabaseName=docker" http://172.17.0.5:5000/config/
Config set.
root@host:~# curl -XPOST http://172.17.0.5:5000/start/
App started. (Database updated)
root@host:~#

Docker

There has been a lot of buzz around Docker since its start in March 2013. Being able to create an isolated environment once, package it up, and run it everywhere makes it very exciting. Docker provides easy-to-use features like Filesystem isolation, Resource isolation, Network isolation, Copy-on-write, Logging, Change management and more.

For more details about Docker, please read “The whole story”. We’d like to go on with the fun stuff.

Mendix on Docker

Once a month a so-called FedEx Day (Research Day, ShipIt Day, Hackathon) is organized at Mendix. On that day, Mendix developers have the freedom to work on whatever they want. We played with Docker a couple of Research Days ago, just to see how it works. But this time we really wanted to create something we’d potentially use in production: a proof of concept of how to run Mendix on Docker.

The plan:

  1. Create a Docker Container containing all software to run Mendix
  2. Create a RESTful API to upload, start and stop a Mendix App within that container

What about the database, you may be wondering? We’ll just use a Docker container that provides us with a PostgreSQL service! You can also build your own PostgreSQL container or use an existing PostgreSQL server in your network.

Start off with an image:

Diagram: the Mendix Docker container setup

This is what we are building. A Docker container containing:

  • All required software to run a Mendix App, like the Java Runtime Environment and the m2ee library
  • A RESTful API (m2ee-api) to upload, start and stop an App (listening on port 5000)
  • A webserver (nginx), to serve static content and proxy App paths to the Mendix runtime (listening on port 7000)
  • When an App is deployed the Mendix runtime will be listening on port 8000 locally

Building the base container

Before we can start to install the software, we need a base image. A minimal install of an operating system like Debian GNU/Linux, Ubuntu, Red Hat, CentOS, Fedora, etc. You could download a base container from the Docker Index. But because this is so basic and we’d like to create a Mendix container we can trust 100% (a 3rd party base image could contain back-doors), we created one ourselves.

A Debian GNU/Linux Wheezy image:

debootstrap wheezy wheezy http://cdn.debian.net/debian
tar -C wheezy -c . | docker import - mendix/wheezy

That’s all! Let’s show the image we’ve just created:

root@host:~# docker images
REPOSITORY       TAG       IMAGE ID       CREATED           VIRTUAL SIZE
mendix/wheezy    latest    1bee0c7b9ece   6 seconds ago     218.6 MB
root@host:~#

Building the Mendix container

On top of the base image we just created, we can start to install all required software to run Mendix. Creating a Docker container can be done using a Dockerfile. It contains all instructions to provision the container and information like what network ports to expose and what executable to run (by default) when you start using the container.

There is an extensive manual available about how to run Mendix on GNU/Linux. We’ve used this to create our Dockerfile. This Dockerfile also installs files like /home/mendix/.m2ee/m2ee.yaml, /home/mendix/nginx.conf and /etc/apt/sources.list. They must be in your current working directory when running the docker build command. All files have been published to GitHub.
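
To give an idea of its structure, here is a heavily simplified sketch of such a Dockerfile; the package names and the startup command are assumptions for illustration, and the published Dockerfile on GitHub is the authoritative version:

# Sketch only, based on the mendix/wheezy base image built above
FROM mendix/wheezy

# Use our own apt sources (including the Mendix repository) and install
# the dependencies; the package names here are assumptions
ADD sources.list /etc/apt/sources.list
RUN apt-get update && apt-get install -y openjdk-7-jre-headless nginx m2ee-tools
RUN useradd -m mendix

# Configuration files from the build context
ADD m2ee.yaml /home/mendix/.m2ee/m2ee.yaml
ADD nginx.conf /home/mendix/nginx.conf

# m2ee-api on port 5000, nginx on port 7000
EXPOSE 5000 7000

# Start the m2ee-api as the mendix user (the actual path/command differs)
CMD /bin/su mendix -c /usr/local/bin/m2ee-api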

To create the Mendix container run:

docker build -t mendix/mendix .

That’s it! We’ve created our own Docker container! Let’s show it:

root@host:~# docker images
REPOSITORY       TAG       IMAGE ID       CREATED           VIRTUAL SIZE
mendix/mendix    latest    c39ee75463d6   10 seconds ago    589.6 MB
mendix/wheezy    latest    1bee0c7b9ece   3 minutes ago     218.6 MB
root@host:~#

Our container has been published to the Docker Index: mendix/mendix

The RESTful API

When you look at the Dockerfile, it shows you it’ll start the m2ee-api on startup. This API will listen on port 5000 and currently supports a limited set of actions:

GET  /about/        # about m2ee-api
GET  /status/       # app status
GET  /config/       # show configuration
POST /config/       # set configuration
POST /upload/       # upload a new MDA
POST /unpack/       # unpack the uploaded MDA
POST /start/        # start the app
POST /stop/         # stop the running app
POST /terminate/    # terminate the running app
POST /kill/         # kill the running app
POST /emptydb/      # empty the database

Usage

Now that we’ve created the container and published it to the Docker Index, we can start using it. And not only can we start using it. Everyone can!

Pull the container and start it.

root@host:~# docker pull mendix/mendix
Pulling repository mendix/mendix
c39ee75463d6: Download complete
eaea3e9499e8: Download complete
...
855acec628ec: Download complete
root@host:~# docker run -d mendix/mendix
bd7964940dfc61449da79cddd1c0e8845d61f6ec1092b466e8e2e582726a5eea
root@host:~# docker ps
CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS              PORTS                NAMES
bd7964940dfc        mendix/mendix:latest       /bin/su mendix -c /u   19 seconds ago      Up 18 seconds       5000/tcp, 7000/tcp   tender_hawkings
root@host:~# docker inspect bd7964940dfc | grep IPAddress | awk '{ print $2 }' | tr -d ',"'
172.17.0.5
root@host:~#

In this container the RESTful API has started and is now listening on port 5000. We can, for example, ask for its status or show its configuration.

root@host:~# curl -XGET http://172.17.0.5:5000/status/
The application process is not running.
root@host:~# curl -XGET http://172.17.0.5:5000/config/
{
"DatabaseHost": "127.0.0.1:5432",
"DTAPMode": "P",
"MicroflowConstants": {},
"BasePath": "/home/mendix",
"DatabaseUserName": "mendix",
"DatabasePassword": "mendix",
"DatabaseName": "mendix",
"DatabaseType": "PostgreSQL"
}
root@host:~#

To run an App in this container, we first need a database server. Pull a PostgreSQL container from the Docker Index and start it.

root@host:~# docker pull zaiste/postgresql
Pulling repository zaiste/postgresql
0e66fd3d6a6f: Download complete
27cf78414709: Download complete
...
046559147c70: Download complete
root@host:~# docker run -d zaiste/postgresql
9ba56a7c4bb132ef0080795294a077adca46eaca5738b192d2ead90c16ac2df2
root@host:~# docker ps
CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS              PORTS                NAMES
9ba56a7c4bb1        zaiste/postgresql:latest   /bin/su postgres -c    22 seconds ago      Up 21 seconds       5432/tcp             jolly_darwin
bd7964940dfc        mendix/mendix:latest       /bin/su mendix -c /u   30 seconds ago      Up 29 seconds       5000/tcp, 7000/tcp   tender_hawkings
root@host:~# docker inspect 9ba56a7c4bb1 | grep IPAddress | awk '{ print $2 }' | tr -d ',"'
172.17.0.4
root@host:~#

Now configure Mendix to use this database server.

root@host:~# curl -XPOST -d "DatabaseHost=172.17.0.4:5432" -d "DatabaseUserName=docker" -d "DatabasePassword=docker" -d "DatabaseName=docker" http://172.17.0.5:5000/config/
Config set.
root@host:~# curl -XGET http://172.17.0.5:5000/config/
{
"DatabaseHost": "172.17.0.4:5432",
"DTAPMode": "P",
"MicroflowConstants": {},
"BasePath": "/home/mendix",
"DatabaseUserName": "docker",
"DatabasePassword": "docker",
"DatabaseName": "docker",
"DatabaseType": "PostgreSQL"
}
root@host:~#

Upload, unpack and start an MDA:

root@host:~# curl -XPOST -F model=@project.mda http://172.17.0.5:5000/upload/
File uploaded.
root@host:~# curl -XPOST http://172.17.0.5:5000/unpack/
Runtime downloaded and Model unpacked.
root@host:~# # set config after unpack (unpack will overwrite your config)
root@host:~# curl -XPOST -d "DatabaseHost=172.17.0.4:5432" -d "DatabaseUserName=docker" -d "DatabasePassword=docker" -d "DatabaseName=docker" http://172.17.0.5:5000/config/
Config set.
root@host:~# curl -XPOST http://172.17.0.5:5000/start/
App started. (Database updated)
root@host:~#

Check if the application is running:

root@host:~# curl -XGET http://172.17.0.5:7000/
-- a lot of html --
root@host:~# curl -XGET http://172.17.0.5:7000/xas/
-- a lot of html --
root@host:~#

Great success! We’ve deployed our Mendix App in a completely new environment in seconds.

Reflection

Docker is a very powerful tool to deploy lightweight, secure and isolated environments. The addition of a RESTful API makes it very easy to deploy and start Apps.

One of the limitations after finishing this is that the App isn’t reachable from the outside world; Docker’s port redirection feature can be used for that. To run more Mendix containers on one host, there must be some kind of orchestrator on the Docker host that administers the containers and keeps track of what is running where.
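
For example, the App’s nginx port can be published on the Docker host using Docker’s port mapping (a sketch):

root@host:~# docker run -d -p 7000:7000 mendix/mendix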

The RESTful API provides a limited set of features in comparison with m2ee-tools. When you start your App using m2ee-tools and your database already contains data, the CLI will kindly ask you what to do. Currently the m2ee-api will just try to upgrade the database schema if needed and start the App without notice.