1 May 2019

Running PostgreSQL in a Cloud on Oracle Container Engine for Kubernetes

In this post I am going to show a few steps to deploy and run a PostgreSQL database in a K8s cluster on OKE.

The deployment is based on the postgres:11.1 Docker image, which requires a few environment variables to be configured: POSTGRES_DB (database name), POSTGRES_USER and POSTGRES_PASSWORD. I am going to store the values of these variables in a K8s ConfigMap and a Secret:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgre-db-config
data:
  db-name: flexdeploy

apiVersion: v1
kind: Secret
metadata:
  name: postgre-db-secret
stringData:
  username: creator
  password: c67

These configuration K8s resources are referenced by the StatefulSet, which actually takes care of the lifespan of a pod with the postgres-db container:
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: postgre-db
  labels:
    run: postgre-db
spec:
  selector:
      matchLabels:
        run: postgre-db
  serviceName: "postgre-db-svc"
  replicas: 1
  template:
    metadata:
      labels:
        run: postgre-db
    spec:
      containers:
      - image: postgres:11.1
        volumeMounts:
           - mountPath: /var/lib/postgresql/data
             name: db     
        env:
          - name: POSTGRES_DB
            valueFrom:
              configMapKeyRef:
                   name: postgre-db-config
                   key: db-name                  
          - name: POSTGRES_USER
            valueFrom:
              secretKeyRef:
                   name: postgre-db-secret
                   key: username                  
          - name: POSTGRES_PASSWORD
            valueFrom:
              secretKeyRef:
                   name: postgre-db-secret
                   key: password                  
          - name: PGDATA
            value: /var/lib/postgresql/data/pgdata           
        name: postgre-db
        ports:
        - containerPort: 5432
  volumeClaimTemplates:
   - metadata:
       name: db
     spec:
       accessModes: [ "ReadWriteOnce" ]
       resources:
         requests:
           storage: 3Gi  


The StatefulSet K8s resource has been specially designed for stateful applications, such as databases, that save their data to persistent storage. In order to define persistent storage for our database we use another K8s resource, a Persistent Volume, and here in the manifest file we define a claim to create a 3Gi Persistent Volume named db. The volume is called persistent because its lifespan is maintained neither by a container nor by a pod; it is maintained by the K8s cluster itself. So it can outlive any containers and pods and preserve the data, meaning that if we kill or recreate a container, a pod, or even the entire StatefulSet, the data will still be there. We refer to this persistent volume in the container definition by mounting a volume at the path /var/lib/postgresql/data. This is where the PostgreSQL container stores its data.
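
Once the StatefulSet has been applied, a quick way to verify that the claim was actually bound to a Persistent Volume (assuming the default storage class of the OKE cluster provisions it) is to list the PVCs and PVs:

kubectl get pvc
kubectl get pv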

In order to access the database we are going to create a service:
apiVersion: v1
kind: Service
metadata:
  name: postgre-db-svc 
spec:
  selector:
    run: postgre-db
  ports:
    - port: 5432
      targetPort: 5432 
  type: LoadBalancer   

This is a LoadBalancer service, which is accessible from outside the K8s cluster:
$ kubectl get svc postgre-db-svc
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
postgre-db-svc   LoadBalancer   10.96.177.34   129.146.211.77   5432:32498/TCP   39m

Assuming that we have created a K8s cluster on OKE and our kubectl is configured to communicate with it, we can apply the manifest files to the cluster:
kubectl apply -f postgre-db-config.yaml
kubectl apply -f postgre-db-secret.yaml
kubectl apply -f postgre-db-pv.yaml
kubectl apply -f postgre-db-service.yaml

Having done that, we can connect to the database from our favorite IDE on our laptop.
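
Or simply with the psql command-line client, using the external IP of the LoadBalancer service and the database name and credentials defined in the ConfigMap and Secret above (the password will be prompted for):

psql -h 129.146.211.77 -p 5432 -U creator -d flexdeploy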

The manifest files for this post are available on GitHub.

That's it!

20 Apr 2019

Deployment Strategies with Kubernetes and Istio

In this post I am going to discuss various deployment strategies and how they can be implemented with K8s and Istio. Basically, the implementation of all the strategies is based on the ability of K8s to run multiple versions of a microservice simultaneously and on the fact that consumers can access the microservice only through some entry point. At that entry point we can control which version of the microservice the consumer is routed to.

The sample application for this post is a simple Spring Boot application wrapped into a Docker image. So there are two images, eugeneflexagon/superapp:old and eugeneflexagon/superapp:new, representing an old and a new version of the application respectively:

docker run -d --name old -p 9001:8080 eugeneflexagon/superapp:old
docker run -d --name new -p 9002:8080 eugeneflexagon/superapp:new


curl http://localhost:9001/version
{"id":1,"content":"old"}

curl http://localhost:9002/version
{"id":1,"content":"new"}


Let's assume the old version of the application is deployed to a K8s cluster running on Oracle Kubernetes Engine with the following manifest:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: superapp
spec:
  replicas: 3
  template:
    metadata:
       labels:
         app: superapp
    spec:
      containers:
        - name: superapp
          image: eugeneflexagon/superapp:old
          ports:
            - containerPort: 8080

So there are three replicas of a pod running the old version of the application. There is also a service routing the traffic to these pods:
apiVersion: v1
kind: Service
metadata:
  name: superapp
spec:
  selector:
    app: superapp   
  ports:
    - port: 8080
      targetPort: 8080      

Rolling Update
This deployment strategy updates pods gradually, replacing them one by one.


This is the default strategy and it is handled by the K8s cluster itself, so we just need to update the superapp deployment with a reference to the new image:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: superapp
spec:
  replicas: 3
  template:
    metadata:
       labels:
         app: superapp
    spec:
      containers:
        - name: superapp
          image: eugeneflexagon/superapp:new
          ports:
            - containerPort: 8080

However, we can fine-tune the rolling update algorithm by providing parameters for this deployment strategy in the manifest file:
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
       maxSurge: 30%
       maxUnavailable: 30%   
  template:
  ...

  • The maxSurge parameter defines the maximum number of pods that can be created over the desired number of pods. It can be either a percentage or an absolute number. The default value is 25%.
  • The maxUnavailable parameter defines the maximum number of pods that can be unavailable during the update process. It can be either a percentage or an absolute number. The default value is 25%.
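
While the rolling update is in progress it can be monitored, and rolled back if something goes wrong, with the standard kubectl rollout commands:

kubectl rollout status deployment superapp
kubectl rollout history deployment superapp
kubectl rollout undo deployment superapp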


Recreate
This deployment strategy kills all old pods and then creates the new ones.

spec:
  replicas: 3
  strategy:
    type: Recreate
  template:
  ...

Very simple.

Blue/Green
This strategy defines the old version of the application as the green one and the new version as the blue one. Users always have access only to the green (live) version; once the blue version has been verified and the service has been switched over to it, it becomes the new green one.


apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: superapp-01
spec:
  template:
    metadata:
       labels: 
         app: superapp
         version: "01"
...



apiVersion: v1
kind: Service
metadata:
  name: superapp
spec:
  selector:
    app: superapp 
    version: "01"
...

The service routes the traffic only to pods with label version: "01".

We deploy the blue version to the K8s cluster and make it available only to QA engineers or to a test automation tool (via a separate service or direct port-forwarding, as shown right after the manifest below).



apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: superapp-02
spec:
  template:
    metadata:
       labels:
         app: superapp
         version: "02"
...
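
For example, direct port-forwarding to the blue deployment (9002 here is an arbitrary local port) could look like this:

kubectl port-forward deployment/superapp-02 9002:8080
curl http://localhost:9002/version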


Once the new version is tested we switch the service to it and scale down the old version:



apiVersion: v1
kind: Service
metadata:
  name: superapp
spec:
  selector:
    app: superapp 
    version: "02"
...


kubectl scale deployment superapp-01 --replicas=0

Having done that, all users work with the new version.

So there is no Istio so far. Everything is handled by the K8s cluster out-of-the-box. Let's move on to the next strategy.


Canary
I love this deployment strategy as it lets users test the new version of the application without even knowing it. The idea is that we deploy the new version and route 10% of the traffic to it, while the users have no idea about that.


If it works well for a while, we can rebalance the traffic to 70/30, then 50/50 and eventually 0/100.
Even though this strategy can be implemented with K8s resources only, by playing with the numbers of old and new pods, it is way more convenient to implement it with Istio.
So the old and the new applications are defined as the following deployments:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: superapp-01
spec:
  template:
    metadata:
       labels: 
         app: superapp
         version: "01"
...

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: superapp-02
spec:
  template:
    metadata:
       labels:
         app: superapp
         version: "02"
...

The service routes the traffic to both of them:
apiVersion: v1
kind: Service
metadata:
  name: superapp
spec:
  selector:
    app: superapp 
...

On top of that we are going to use the following Istio resources: VirtualService and DestinationRule.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: superapp
spec:
  host: superapp
  subsets:
  - name: green
    labels:
      version: "01"
  - name: blue
    labels:
      version: "02"
---     
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: superapp
spec:
  hosts:
    - superapp   
  http:
  - match:
    - uri:
        prefix: /version
    route:
    - destination:
        port:
          number: 8080
        host: superapp
        subset: green
      weight: 90
    - destination:
        port:
          number: 8080
        host: superapp    
        subset: blue  
      weight: 10

The VirtualService will route all the traffic coming to the superapp service (hosts) to the green and blue pods according to the provided weights (90/10).
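
Shifting more traffic to the new version is then just a matter of editing the weights in the VirtualService and re-applying it. For example, to balance the traffic 50/50 (the file name below is just a placeholder):

    route:
    - destination:
        port:
          number: 8080
        host: superapp
        subset: green
      weight: 50
    - destination:
        port:
          number: 8080
        host: superapp
        subset: blue
      weight: 50

kubectl apply -f superapp-virtualservice.yaml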

A/B Testing
With this strategy we can precisely control which users, from which devices, departments, etc., are routed to the new version of the application.

For example, here we are going to analyze the request headers, and if the custom header "end-user" equals "xammer", the request will be routed to the new version of the application. The rest of the requests will be routed to the old one:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: superapp
spec:
  gateways:
    - superapp
  hosts:
    - superapp   
  http:
  - match:
    - headers:
        end-user:
          exact: xammer                 
    route:
    - destination:
        port:
          number: 8080
        host: superapp
        subset: blue
  - route:
    - destination:
        port:
          number: 8080
        host: superapp
        subset: green
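
Assuming the superapp gateway referenced by the VirtualService is exposed through the Istio ingress gateway at $INGRESS_HOST:$INGRESS_PORT (both values are placeholders here), the routing can be verified with curl. A request carrying the header should hit the new version, any other request should hit the old one:

curl -H "end-user: xammer" http://$INGRESS_HOST:$INGRESS_PORT/version
curl http://$INGRESS_HOST:$INGRESS_PORT/version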

All examples and manifest files for this post are available on GitHub, so you can play with various strategies and sophisticated routing rules on your own. You just need a K8s cluster (e.g. Minikube on your laptop) with Istio preinstalled. Happy deploying!

That's it!

13 Apr 2019

Load Testing of a Microservice. Kubernetes way.

Let's assume there is a microservice, represented by a composition of containers, running on a K8s cluster somewhere in a cloud, e.g. Oracle Kubernetes Engine (OKE). At some point we want to quickly stress test a specific microservice component or the entire microservice. So we want to know how it behaves under load and how it handles many consecutive requests coming from many parallel clients. The good news is that we already have a tool for that, up and running: the Kubernetes cluster itself.

We're going to use a Kubernetes Job for this testing, described in the following manifest file:
apiVersion: batch/v1
kind: Job
metadata:
   name: job-load
spec:
   parallelism: 50   
   template:
     spec:
       containers:
         - name: loader
           image: eugeneflexagon/aplpine-with-curl:1.0.0
           command: ["time", "curl", "http://my_service:8080/my_path?[1-100]"]     
       restartPolicy: OnFailure   

This job is going to spin up 50 pods running in parallel, each sending 100 requests to my_service on port 8080 with the path my_path. The job is created and started by invoking:

kubectl apply -f loadjob.yaml

We can observe all 50 pods created by the job using:

kubectl get pods -l job-name=job-load
NAME             READY     STATUS      RESTARTS   AGE
job-load-4n262   1/2       Completed   1          12m
job-load-dsqtc   1/2       Completed   1          12m
job-load-khdn4   1/2       Completed   1          12m
job-load-kptww   1/2       Completed   1          12m
job-load-wf9pd   1/2       Completed   1          12m
...

If we look at the logs of any of these pods

kubectl logs job-load-4n262

We'll see something like the following:
[1/100]: http://my_service.my_namespace:8080/my_path?1 --> <stdout>
{"id":456,"content":"Hello world!"}

[2/100]: http://my_service.my_namespace:8080/my_path?2 --> <stdout>
{"id":457,"content":"Hello world!"}

[3/100]: http://my_service.my_namespace:8080/my_path?3 --> <stdout>
{"id":458,"content":"Hello world!"}

....

real    0m 10.04s
user    0m 0.00s
sys     0m 0.04s
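
To wait for the whole load run to finish and then clean everything up, the standard Job tooling can be used:

kubectl wait --for=condition=complete job/job-load --timeout=600s
kubectl delete job job-load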

That's it!

8 Mar 2019

Serverless API with Azure Functions

In this post I am going to work on a pretty simple use case. While executing a deployment pipeline, FlexDeploy may produce some human tasks that should be either approved or rejected. For example, someone has to approve a deployment to the production environment. It can be done either in the FlexDeploy UI or with some external communication channels. Today I am going to focus on the scenario where a FlexDeploy human task is approved/rejected with Slack:


There are a few requirements and considerations that I would like to take into account:

  • I don't want to teach FlexDeploy to communicate with Slack
  • I don't want to provide Slack with the details of the FlexDeploy API
  • I don't want to expose the FlexDeploy API to the public 
  • I do want to be able to easily change Slack to something different or add other communication tools without touching FlexDeploy
Basically, I want to decouple FlexDeploy from the details of the external communication mechanism. For that reason I am going to introduce an extra layer, an API between FlexDeploy and Slack. It looks like the serverless paradigm is a very attractive approach to implementing this API. Today I am going to build it with Azure Functions, because ... why not? 

So, technically, a PoC version of the solution looks like this:

Once a new human task comes up, FlexDeploy notifies the serverless API about that, providing an internal task id and a task description. There is a function SaveTask that saves the provided task details along with a generated token (just a UID) to Azure Table storage. This token has an expiration time, meaning that it should be used before that time to approve/reject the task.

const azure = require('azure-storage');
const uuidv1 = require('uuid/v1');

module.exports = async function (context, taskid) {
    var tableSvc = azure.createTableService('my_account', 'my_key');
    var entGen = azure.TableUtilities.entityGenerator;
    var token = uuidv1();
    var tokenEntity = {
        PartitionKey: entGen.String('tokens'),
        RowKey: entGen.String(token),
        TaskId: entGen.String(taskid),
        // the token expires in 24 hours
        dueDate: entGen.DateTime(new Date(Date.now() + 24 * 60 * 60 * 1000))
      };

    // wrap the callback-style insert into a promise and await it,
    // so the function does not return before the entity is persisted
    await new Promise(function (resolve, reject) {
        tableSvc.insertEntity('tokens', tokenEntity, function (error, result, response) {
            if (error) {
                reject(error);
            } else {
                resolve(result);
            }
        });
    });

    return token;
};


Having the token saved, the PostToSlack function is invoked, posting a message to a Slack channel. The SaveTask and PostToSlack functions are orchestrated into a durable function NotifyOnTask, which is actually invoked by FlexDeploy:
const df = require("durable-functions");

module.exports = df.orchestrator(function*(context){   
    var task = context.df.getInput()
    var token = yield context.df.callActivity("SaveTask",  task.taskid)
    return yield context.df.callActivity("PostToSlack",  {"token": token, "description": task.description})
});

The message in Slack contains two buttons to approve and reject the task.
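
The PostToSlack function itself is not listed in this post; a minimal sketch of it could look like the following, assuming the Slack incoming-webhook URL is available in SLACK_WEBHOOK_URL and the base URL of the HTTP endpoint that starts the ActionOnToken orchestration is in ACTION_API_URL (both are placeholders):

const request = require('sync-request');

module.exports = async function (context, input) {
    // the two link buttons carry the token and the chosen action
    // back to the ActionOnToken endpoint
    var actionUrl = process.env.ACTION_API_URL + '?token=' + input.token + '&action=';
    request('POST', process.env.SLACK_WEBHOOK_URL, {
        json: {
            text: input.description,
            attachments: [{
                fallback: 'Approve or reject the task',
                actions: [
                    { type: 'button', text: 'Approve', url: actionUrl + 'approve' },
                    { type: 'button', text: 'Reject', url: actionUrl + 'reject' }
                ]
            }]
        }
    });
};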


The buttons refer to webhooks pointing to the ActionOnToken durable function:
const df = require("durable-functions");

module.exports = df.orchestrator(function*(context){   
    var input = context.df.getInput()
    var taskId = yield context.df.callActivity("GetTaskId",  input.token)
    if (input.action == 'approve') {
        yield context.df.callActivity("ApproveTask",  taskId)
    } else if (input.action == 'reject') {
        yield context.df.callActivity("RejectTask",  taskId)
    }
});


ActionOnToken invokes the GetTaskId function, retrieving the task id from the storage by the given token:
const azure = require('azure-storage');

module.exports = async function (context, token) {
    var tableSvc = azure.createTableService('my_account', 'my_key');

    function queryTaskID(token) {
        return new Promise(function (resolve, reject) {
            tableSvc.retrieveEntity('tokens', 'tokens', token, 
             function (error, result, response) {
                if (error) {
                    reject(error)
                } else {
                    resolve(result)
                }
            });
        });
    }

    var tokenEntity = await queryTaskID(token);
    if (tokenEntity) {
        var dueDate = tokenEntity.dueDate._
        if (dueDate > Date.now()) {
            return tokenEntity.TaskId._
        }
    }
};

Having done that, it either approves or rejects the task by invoking the ApproveTask or RejectTask function. These functions, in their turn, make the corresponding calls to the FlexDeploy REST API.
const request = require('sync-request');
const fd_url = 'http://dkrlp01.flexagon:8000';

module.exports = async function (context, taskid) {
    // approve the human task in FlexDeploy via its REST API
    request('PUT',
        fd_url + '/flexdeploy/rest/v1/tasks/approval/approve/' + taskid, {});
};

I could start developing my serverless application directly in the cloud on the Azure Portal, but I decided to implement everything and play with it locally first and move to the cloud later. The fact that I can do that, develop and test my functions locally, is actually very cool; not every serverless platform gives you that feature. The only thing I have configured in the cloud is an Azure Table storage account with a table to store my tokens and task details. 

A convenient way to start working with Azure Functions locally is to use Visual Studio Code as a development tool. I work on a Mac, so I downloaded and installed the version for macOS. VS Code is all about extensions: for every technology you work with, you install one or a few extensions. The same goes for Azure Functions. There is an extension for that:



Having done that, you get a new tab where you can create a new function application and start implementing your functions:


While configuring a new project, the wizard asks you to select the language you prefer to implement the functions with:


Even though I love Java, I selected JavaScript, because on top of regular functions I wanted to implement durable functions, and they support only C#, F# and JavaScript. At the moment of writing this post, JavaScript was the closest to me.

The rest is as usual. You create functions, write the code, debug, test, fix, and start all over again. You just hit F5 and VS Code starts the entire application in debug mode for you:


When you start the application for the first time, VS Code will offer to install the functions runtime (Azure Functions Core Tools) on your computer if it is not there. So basically, assuming that you have the runtime of your preferred language (Node.js in my case) on your laptop, you just need VS Code with the functions extension to start working with Azure Functions. It will do the rest of the installation for you. 

So, once the application is started, I can test it by invoking the NotifyOnTask function, which initiates the entire cycle:
curl -X POST --data '{"taskid":"8900","description":"DiPocket v.1.0.0.1 is about to be deployed to PROD"}'  -H "Content-type: application/json" http://localhost:7071/api/orchestrators/NotifyOnTask
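
A button click can be simulated the same way by starting the ActionOnToken orchestration with a token taken from the Azure Table (the payload shape follows the context.df.getInput() calls above):

curl -X POST --data '{"token":"<token-from-the-table>","action":"approve"}'  -H "Content-type: application/json" http://localhost:7071/api/orchestrators/ActionOnToken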

The source code of the application is available on GitHub.

Well, my general opinion of Azure Functions so far is ... it is good. It just works. I didn't run into any annoying issues (so far) while implementing this solution (except some silly mistakes I made because I didn't read the manual carefully). I will definitely keep playing with and posting on Azure Functions, enriching this solution, moving it to the cloud and, probably, implementing something different.

That's it!



22 Feb 2019

Conversational UI with Oracle Digital Assistant and Fn Project. Part III. Moving to the cloud.

In this post I am going to continue the story of implementing a conversational UI for FlexDeploy on top of Oracle Digital Assistant and Fn Project. Today I am going to move the serverless API working around my chatbot to the cloud, so that the entire solution works in the cloud:



The API is implemented as a set of Fn functions collected into an Fn application. The beauty of Fn is that it is just a bunch of Docker containers that can run equally well on the local Docker engine on your laptop and somewhere in the cloud. Having said that, I could run my Fn application on a K8s cluster from any cloud provider, as described here. But today is not that day. Today I am going to run my serverless API on a brand new cloud service, Oracle Functions, which is built on top of Fn. The service is not generally available yet, but I participate in the Limited Availability program, so I have trial access to it and can play with it and blog about it. In this solution I had to get rid of the Fn Flow implemented here and get back to my original implementation, as Fn Flow is not supported by Oracle Functions yet. I hope it will be soon, as this is actually the best part.

So, having our OCI environment configured and having the Oracle Functions service up and running (I am not reposting the Oracle tutorial on that here), we need to configure our Fn CLI to be able to communicate with the service:
fn create context oracle_fn --provider oracle 
fn use context oracle_fn
fn update context oracle.compartment-id MY_COMPARTMENT_ID
fn update context api-url https://functions.us-phoenix-1.oraclecloud.com
fn update context registry phx.ocir.io/flexagonoraclecloud/flexagon-repo
fn update context oracle.profile oracle_fn
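
A quick way to check that the CLI really talks to the service is to list the applications in the configured compartment:

fn list apps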

Ok, so now our Fn command line interface is talking to Oracle Functions. The next step is to create an application in the Oracle Functions console:




Now we can deploy the Fn application to Oracle Functions:
Eugenes-MacBook-Pro-3:fn fedor$ ls -l
total 8
-rw-r--r--@ 1 fedor  staff   12 Dec  4 15:41 app.yaml
drwxr-xr-x  5 fedor  staff  160 Feb  9 15:24 createsnapshotfn
drwxr-xr-x  6 fedor  staff  192 Feb  9 15:25 receiveFromBotFn
drwxr-xr-x  6 fedor  staff  192 Feb  9 15:25 sendToBotFn
Eugenes-MacBook-Pro-3:fn fedor$ 
Eugenes-MacBook-Pro-3:fn fedor$ 
Eugenes-MacBook-Pro-3:fn fedor$ fn deploy --all 

Having done that, we can observe the application in the Oracle Functions console:



The next step is to update the API URLs in the chatbot and on my laptop so that the functions in the cloud are invoked instead of the previous local implementation. The URLs can be retrieved with the following command:
fn list triggers odaapp
So far the migration from my laptop to Oracle Functions has been looking pretty nice and easy. But here is a little bit of pain. In order to invoke functions hosted in Oracle Functions with HTTP requests, the requests must be signed so that they can pass authentication. A Node.js implementation of invoking a signed function call looks like this:
var fs = require('fs');
var https = require('https');
var os = require('os');
var httpSignature = require('http-signature');
var jsSHA = require("jssha");

var tenancyId = "ocid1.tenancy.oc1..aaaaaaaayonz5yhpr4vxqpbdof5rn7x5pfrlgjwjycwxasf4dkexiq";
var authUserId = "ocid1.user.oc1..aaaaaaaava2e3wd3cu6lew2sktd6by5hnz3d7prpgjho4oambterba";
var keyFingerprint = "88:3e:71:bb:a5:ea:68:b7:56:fa:3e:5d:ea:45:60:10";
var privateKeyPath = "/Users/fedor/.oci/functions_open.pem";
var privateKey = fs.readFileSync(privateKeyPath, 'ascii');
var identityDomain = "identity.us-ashburn-1.oraclecloud.com";


function sign(request, options) {
    var apiKeyId = options.tenancyId + "/" + options.userId + "/" + options.keyFingerprint;

    var headersToSign = [
        "host",
        "date",
        "(request-target)"
    ];

    var methodsThatRequireExtraHeaders = ["POST", "PUT"];

    if(methodsThatRequireExtraHeaders.indexOf(request.method.toUpperCase()) !== -1) {
        options.body = options.body || "";
        var shaObj = new jsSHA("SHA-256", "TEXT");
        shaObj.update(options.body);

        request.setHeader("Content-Length", options.body.length);
        request.setHeader("x-content-sha256", shaObj.getHash('B64'));

        headersToSign = headersToSign.concat([
            "content-type",
            "content-length",
            "x-content-sha256"
        ]);
    }


    httpSignature.sign(request, {
        key: options.privateKey,
        keyId: apiKeyId,
        headers: headersToSign
    });

    var newAuthHeaderValue = request.getHeader("Authorization").replace("Signature ", "Signature version=\"1\",");
    request.setHeader("Authorization", newAuthHeaderValue);
}


function handleRequest(callback) {

    return function(response) {
        var responseBody = "";
        response.on('data', function(chunk) {
        responseBody += chunk;
    });


        response.on('end', function() {
            callback(JSON.parse(responseBody));
        });
    }
}


function createSnapshot(release) {

    var body = release;

    var options = {
        host: 'af4qyj7yhva.us-phoenix-1.functions.oci.oraclecloud.com',
        path: '/t/createsnapshotfn',
        method: 'POST',
        headers: {
            "Content-Type": "application/text",
        }
    };


    var request = https.request(options, handleRequest(function(data) {
        console.log(data);
    }));


    sign(request, {
        body: body,
        privateKey: privateKey,
        keyFingerprint: keyFingerprint,
        tenancyId: tenancyId,
        userId: authUserId
    });

    request.end(body);
};
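
For a quick local test the function can be invoked directly with a JSON payload; the field name here is just a placeholder:

createSnapshot(JSON.stringify({ releaseName: "1.0.0.1" }));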

This approach should be used by Oracle Digital Assistant custom components and by the listener component on my laptop while invoking the serverless API hosted in Oracle Functions.



That's it!

27 Jan 2019

Monitoring an ADF Application in a Docker Container. Easy Way.

In this short post I am going to show a simple approach to making sure that your ADF application running inside a Docker container is a healthy Java application in terms of memory utilization. I am going to use JConsole, a standard tool which comes as part of the JDK installation on your computer. If there is a problem (e.g. a memory leak, frequent GCs, long GCs, etc.) you will see it with JConsole. In an effort to analyze the root cause of the problem and find a solution you might want to use more powerful and fancy tools. I will discuss that in one of my following posts. A story of tuning the JVM for an ADF application is available here.

So there is an ADF application running on top of Tomcat. The application and Tomcat are packaged into a Docker container running on the dkrlp01.flexagon host. There are some slides on running an ADF application in a Docker container.
In order to connect with JConsole from my laptop to the JVM running inside the container, we need to add the following JVM arguments in tomcat/bin/setenv.sh:
 -Dcom.sun.management.jmxremote=true
 -Dcom.sun.management.jmxremote.rmi.port=9010
 -Dcom.sun.management.jmxremote.port=9010
 -Dcom.sun.management.jmxremote.ssl=false
 -Dcom.sun.management.jmxremote.authenticate=false
 -Dcom.sun.management.jmxremote.local.only=false
 -Djava.rmi.server.hostname=dkrlp01.flexagon

Besides that, the container has to expose port 9010, so it should be created with a
"docker run -p 9010:9010 ..." command.

Having done that, we can invoke the jconsole command locally and connect to the container:
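
For example, by pointing JConsole directly at the JMX port exposed by the container:

jconsole dkrlp01.flexagon:9010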


Now just give the application some load with your favorite testing tool (JMeter, OATS, SoapUI, Selenium, etc.) and observe the memory utilization:



That's it!