31 Dec 2018

Conversational UI with Oracle Digital Assistant and Fn Project. Part II

In my previous post I implemented a conversational UI for FlexDeploy with Oracle Digital Assistant. Today I am going to enrich it with Fn Flow so that the chatbot accepts a release name instead of an id when creating a snapshot. Having done that, the conversation sounds more natural:

...
"Can you build a snapshot?" I asked.
"Sure, what release are you thinking of?"
"Olympics release"
"Created a snapshot for release Olympics" she reported.
...


The chatbot invokes Fn Flow, passing the release name to it as an input. The flow invokes an Fn function to get the id of the given release, and then it invokes another Fn function calling the FlexDeploy REST API with that id.


So the createSnapshotFlow orchestrates two Fn functions in a chain. The first one gets the release id for the given name with the FlexDeploy REST API:
const fdk = require('@fnproject/fdk');
const request = require('sync-request');

fdk.handle(function (input) {
  // fd_url points to the FlexDeploy instance and is configured elsewhere
  var res = request('GET', fd_url + '/flexdeploy/rest/v1/release?releaseName=' + input, {
  });

  return JSON.parse(res.getBody('utf8'))[0].releaseId;
})
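For reference, the id extraction in this function assumes the REST API returns a JSON array of matching releases; in Python terms it amounts to the following (the response body below is made up for illustration):

```python
import json

# Hypothetical response body of GET /flexdeploy/rest/v1/release?releaseName=Olympics;
# the real API returns a JSON array of matching releases.
body = '[{"releaseId": 1001, "releaseName": "Olympics"}]'

# Take the first match and pull out its id, like JSON.parse(...)[0].releaseId above
release_id = json.loads(body)[0]['releaseId']
print(release_id)  # 1001
```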

And the second one creates a snapshot for that release id with the same API:

const fdk = require('@fnproject/fdk');
const request = require('sync-request');

fdk.handle(function (input) {
  // fd_url points to the FlexDeploy instance and is configured elsewhere
  var res = request('POST', fd_url + '/flexdeploy/rest/v1/releases/' + input + '/snapshot', {
    json: { action: 'createSnapshot' },
  });

  return JSON.parse(res.getBody('utf8'));
})

The core piece of this approach is Fn Flow. The Java code of createSnapshotFlow looks like this:

public class CreateSnapshotFlow {

  public byte[] createSnapshot(String input) {
    Flow flow = Flows.currentFlow();

    FlowFuture<byte[]> stage = flow
      // invoke checkreleasefn
      .invokeFunction("01D14PNT7ZNG8G00GZJ000000D", HttpMethod.POST,
                      Headers.emptyHeaders(), input.getBytes())
      .thenApply(HttpResponse::getBodyAsBytes)
      .thenCompose(releaseId -> flow
                      // invoke createsnapshotfn
                     .invokeFunction("01CXRE2PBANG8G00GZJ0000001", HttpMethod.POST,
                                     Headers.emptyHeaders(), releaseId))
      .thenApply(HttpResponse::getBodyAsBytes);

    return stage.get();
  }
}



Note that the flow operates with function ids rather than function names. The list of all application functions with their ids can be retrieved with the Fn CLI (the exact syntax may vary by CLI version):

fn list functions odaapp

Where odaapp is my Fn application.

That's it!

30 Nov 2018

Conversational UI with Oracle Digital Assistant and Fn Project

Here and there we see numerous predictions that pretty soon chatbots will play a key role in the communication between the users and their systems. I don't have a crystal ball and I don't want to wait for this "pretty soon", so I decided to make these prophecies come true now and see what it looks like.

The flagship product of the company I work for is FlexDeploy, a fully automated DevOps solution. One of the most popular activities in FlexDeploy is creating a release snapshot, which actually builds all deployable artifacts and deploys them across environments with a pipeline.
So, I decided to have some fun over the weekend and implemented a conversational UI for this operation where I am able to talk to FlexDeploy. Literally. At the end of my work my family saw me talking to my laptop, and they could hear something like this:

  "Calypso!" I said.
  "Hi, how can I help you?" was the answer.
  "Not sure" I tested her.
  "You gotta be kidding me!" she got it.
  "Can you build a snapshot?" I asked.
  "Sure, what release are you thinking of?"
  "1001"
  "Created a snapshot for release 1001" she reported.
  "Thank you" 
  "Have a nice day" she said with relief.

So, basically, I was going to implement the following diagram:


As a core component of my UI I used a brand new Oracle product, Oracle Digital Assistant. I built a new skill capable of basic chatting and implemented a new custom component so my bot was able to make an HTTP request to have the backend system create a snapshot. The export of the skill FlexDeployBot along with the Node.js source code of the custom component custombotcomponent is available on the GitHub repo for this post.
I used my MacBook as a communication device capable of listening and speaking, and I defined a webhook channel for my bot so I can send messages to it and get callbacks with responses.

It looks simple and nice on the diagram above. The only thing is that I wanted to decouple the brain, my chatbot, from the details of the communication device and from the details of the installation/version of my back-end system FlexDeploy. I needed an intermediate API layer, a buffer, something to put between ODA and the outer world. It looks like Serverless Functions is a perfect fit for this job.

As a serverless platform I used Fn Project. The beauty of it is that it's a container-native serverless platform, totally based on Docker containers, and it can be easily run locally on my laptop (which is what I did for this post) or somewhere in the cloud, let's say on Oracle Kubernetes Engine.

Ok, let's get into the implementation details from left to right of the diagram.

So, the listener component, the ears, the one which recognizes my speech and converts it into text, is implemented with Python.

The key code snippet of the component looks like this (the full source code is available on GitHub):
import time

import requests
import speech_recognition as sr  # pip install SpeechRecognition

# URL of the Fn function to post the recognized phrase to, and the "active"
# flag toggled by the wake word, are defined in the full source code on GitHub
r = sr.Recognizer()
mic = sr.Microphone()

with mic as source:
    r.energy_threshold = 2000

while True:
    try:
        with mic as source:
            audio = r.listen(source, phrase_time_limit=5)
            transcript = r.recognize_google(audio)
            print(transcript)
            if active:
                requests.post(url=URL, data=transcript)
                time.sleep(5)

    except sr.UnknownValueError:
        print("Sorry, I don't understand you")

Why Python? There are plenty of speech recognition libraries available for Python, so you can play with them and choose the one which understands your accent better. I like Python.
So, once the listener recognizes my speech, it invokes an Fn function passing the phrase as the request body.
The function sendToBotFn is implemented with Node.js:
const fdk = require('@fnproject/fdk');
const request = require('sync-request');
const crypto = require('crypto');

// host, endpoint, userId and channelKey describe the ODA webhook channel
// and are configured elsewhere in the function

function buildSignatureHeader(buf, channelSecretKey) {
    return 'sha256=' + buildSignature(buf, channelSecretKey);
}


function buildSignature(buf, channelSecretKey) {
   const hmac = crypto.createHmac('sha256', Buffer.from(channelSecretKey, 'utf8'));
   hmac.update(buf);
   return hmac.digest('hex');
}


function performRequest(headers, data) {
  var dataString = JSON.stringify(data);
 
  var options = {
   body: dataString,   
   headers: headers
  };
       
  request('POST', host+endpoint, options);             
}


function sendMessage(message) {
  let messagePayload = {
   type: 'text',
   text: message
  }

  let messageToBot = {
    userId: userId,
    messagePayload: messagePayload
  }

  let body = Buffer.from(JSON.stringify(messageToBot), 'utf8');
  let headers = {};
  headers['Content-Type'] = 'application/json; charset=utf-8';
  headers['X-Hub-Signature'] = buildSignatureHeader(body, channelKey);

  performRequest(headers, messageToBot);  
}


fdk.handle(function(input){ 
  sendMessage(input); 
  return input; 
})
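To sanity-check the signature logic, the X-Hub-Signature value can be reproduced with a few lines of Python (the secret and payload below are made-up values, not the real channel configuration):

```python
import hashlib
import hmac
import json

# Made-up secret and message; the real channelKey comes from the ODA webhook channel settings
channel_secret = 'my-channel-secret'
message = {'userId': 'user1', 'messagePayload': {'type': 'text', 'text': 'hi'}}

# Same scheme as buildSignatureHeader above: HMAC-SHA256 over the raw body, hex-encoded
body = json.dumps(message).encode('utf-8')
signature = 'sha256=' + hmac.new(channel_secret.encode('utf-8'), body, hashlib.sha256).hexdigest()
print(signature)
```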

Why Node.js? It's not because I like it. No. It's because the Oracle documentation on implementing a custom webhook channel refers to Node.js. They like it.

When the chatbot responds, it invokes a webhook referring to an Fn function receiveFromBotFn running on my laptop. I use an ngrok tunnel to expose my Fn application listening on localhost:8080 to the Internet. The receiveFromBotFn function is also implemented with Node.js:
const fdk=require('@fnproject/fdk');
const request = require('sync-request');
const url = 'http://localhost:4390';
fdk.handle(function(input){  
    var sayItCall = request('POST', url,{
     body: input.messagePayload.text,
    });
  return input;
})
 
The function sends an HTTP request to a simple web server running locally and listening on port 4390.
I have to admit that it's really easy to implement stuff like that with Node.js. The web server uses the Mac OS X native utility say to pronounce whatever comes in the request body:
var http = require('http');
const exec = require('child_process').exec;

http.createServer(function (req, res) {
  let body = '';
  req.on('data', chunk => {
    body += chunk.toString();
  });

  req.on('end', () => {
    // pronounce whatever came in the request body
    exec('say ' + body, (error, stdout, stderr) => {});
    res.end('ok');
  });
}).listen(4390);
In order to actually invoke the back-end to create a snapshot with FlexDeploy, the chatbot uses the custombotcomponent to invoke an Fn function createSnapshotFn:
const fdk = require('@fnproject/fdk');
const request = require('sync-request');

fdk.handle(function (input) {
  // fd_url points to the FlexDeploy instance and is configured elsewhere
  var res = request('POST', fd_url + '/flexdeploy/rest/v1/releases/' + input + '/snapshot', {
    json: { action: 'createSnapshot' },
  });

  return JSON.parse(res.getBody('utf8'));
})

The function is simple; it just invokes the FlexDeploy REST API to start building a snapshot for the given release. It is also implemented with Node.js, however I am going to rewrite it in Java. I love Java. Furthermore, instead of a simple function I am going to implement an Fn Flow that first checks if the given release exists and is valid, and only after that invokes the createSnapshotFn function for that release. In the next post.


That's it!



24 Nov 2018

Persistent Volumes for Database Containers running on a K8s cluster

In one of my previous posts I showed how we can run an Oracle XE database on a K8s cluster. That approach works fine for use-cases when we don't care about the data and are fine with losing it when the container is redeployed and the pod is restarted. But if we want to keep the data, if we want it to survive any rescheduling, we'll have to reconsider the K8s resources used to run the DB container on the cluster. With that in mind, the yaml file defining the resources looks like this one:

apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: oraclexe
  labels:
    run: oraclexe
spec:
  selector:
      matchLabels:
        run: oraclexe
  serviceName: "oraclexe-svc"
  replicas: 1
  template:
    metadata:
      labels:
        run: oraclexe
    spec:
      volumes:
       - name: dshm
         emptyDir:
           medium: Memory  
      containers:
      - image: eugeneflexagon/database:11.2.0.2-xe
        volumeMounts:
           - mountPath: /dev/shm
             name: dshm
           - mountPath: /u01/app/oracle/oradata
             name: db
        imagePullPolicy: Always
        name: oraclexe
        ports:
        - containerPort: 1521
          protocol: TCP
  volumeClaimTemplates:
   - metadata:
       name: db
     spec:
       accessModes: [ "ReadWriteOnce" ]
       resources:
         requests:
           storage: 100M                   
---
apiVersion: v1
kind: Service
metadata:
  name: oraclexe-svc
  labels:
    run: oraclexe   
spec:
  selector:
    run: oraclexe
  ports:
    - port: 1521
      targetPort: 1521
  type: LoadBalancer

There are some interesting things here. First of all, this is not a Deployment. We are defining another K8s resource here which is called a StatefulSet. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods (each pod gets a stable name such as oraclexe-0). The pods are created from the same specification, but they are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.

StatefulSets have been specially designed for stateful applications like databases that save their data to persistent storage. In order to define persistent storage for our database we use a special K8s resource, Persistent Volume, and here in the yaml file we are defining a claim to create a 100M Persistent Volume named db. This volume provides read/write access for a single assigned pod (the ReadWriteOnce access mode). The volume is called persistent because its lifespan is maintained not by a container, and not even by a pod, but by the K8s cluster itself. So it can outlive any containers and pods and save the data. We refer to this persistent volume in the container definition by mounting a volume on the path /u01/app/oracle/oradata. This is where the Oracle DB XE container stores its data.

That's it!

31 Oct 2018

Develop, Build, Deliver and Run Microservices with Containers in the Cloud

In this post I would like to thank everyone, who managed to attend my sessions "Develop, Build, Deliver and Run Microservices with Containers in the Cloud" at Oracle Code One and "Develop, Deliver, Run Oracle ADF applications with Docker" at Oracle Open World 2018. Thank you guys for coming to listen to me, to learn something new and to ask a lot of interesting questions. 

The presentations are available on the content catalog and on Slide Share as well: 




Happy Halloween!

29 Sep 2018

Configuring a Datasource in a Docker Container

In this post I am going to show how to configure a datasource consumed by an ADF application running on Tomcat in a Docker container.


So, there is a Docker container sample-adf with a Tomcat application server preconfigured with ADF libraries and with an ADF application running on top of Tomcat. The ADF application requires a connection to an external database.
The application is implemented with ADF BC and its application module refers to a datasource jdbc/appDS.



This datasource is configured inside the container in the Tomcat /conf/context.xml file. The JDBC URL, username and password are provided by environment variables:

<Resource name="jdbc/appDS" auth="Container"
           type="oracle.jdbc.pool.OracleDataSource"
           factory="oracle.jdbc.pool.OracleDataSourceFactory"
           url="${DB_URL}"
           user="${DB_USERNAME}"
           password="${DB_PWD}"

           ...


These variables are propagated to the application server in Tomcat /bin/setenv.sh file:

CATALINA_OPTS="-DDB_URL=$DB_URL -DDB_USERNAME=$DB_USERNAME -DDB_PWD=$DB_PWD ..."

Having these configurations set, we can run a container providing values of the variables:

docker run --name adf -e DB_URL="jdbc:oracle:thin:@myhost:1521:xe" -e DB_USERNAME=system -e DB_PWD=welcome1 sample-adf

If we are about to run a container in a K8s cluster we can provide variable values in a yaml file:

spec:
      containers:
      - image: sample-adf
        env:
        - name: DB_URL
          value: "jdbc:oracle:thin:@myhost:1521:xe"
        - name: DB_USERNAME
          value: "system"
        - name: DB_PWD
          value: "welcome1"


In order to make this yaml file portable we would avoid providing exact values and refer to K8s ConfigMaps and Secrets instead.

A ConfigMap is a named K8s resource that allows us to decouple configuration artifacts from image content to keep containerized applications portable. It is just a simple set of key-value pairs, and obviously those values are different in each K8s cluster and each environment.

A similar approach is used when it comes to sensitive data like user names and passwords. Only in this case, instead of ConfigMaps we use a special resource called a Secret. The data is base64-encoded, and it is only sent to a node if a pod on that node requires it. It is deleted once the pod that depends on it is deleted.
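Note that "encoded" means base64 here, not encrypted, so anyone with access to the Secret manifest can decode the values. For example, the password above would be stored like this (a standalone sketch):

```python
import base64

# Secret values are base64-encoded, not encrypted
encoded = base64.b64encode(b'welcome1').decode()
print(encoded)                    # d2VsY29tZTE=

# ...and trivially decoded back
print(base64.b64decode(encoded))  # b'welcome1'
```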

We can create ConfigMaps and Secrets out of key-value files or just by providing the values in a command line:

kubectl create configmap adf-config \
--from-literal=db.url="jdbc:oracle:thin:@myhost:1521:xe"

kubectl create secret generic adf-secret \
--from-literal=db.username="system" \
--from-literal=db.pwd="welcome1"


Having done that, we can specify in the yaml file that the values for the environment variables should be fetched from the adf-config ConfigMap and the adf-secret Secret:

spec:
      containers:
      - image: sample-adf
        env:
        - name: DB_URL
          valueFrom:
               configMapKeyRef:
                   name: adf-config
                   key: db.url
        - name: DB_USERNAME
          valueFrom:
               secretKeyRef:
                   name: adf-secret
                   key: db.username
        - name: DB_PWD
          valueFrom:
               secretKeyRef:
                   name: adf-secret
                   key: db.pwd


That's it!

31 Aug 2018

Remote access to Minikube with Kubectl

Let's say you need to install a Kubernetes cluster in your organization for development and testing purposes. Minikube looks like a perfect fit for that job. It was specially designed for users looking to try out Kubernetes or develop with it day-to-day. It runs a single-node Kubernetes cluster inside a VM on a standalone machine. So, you found a server for that, followed the installation guide to set up VirtualBox with Minikube on it, and now you can easily deploy pods to the K8s cluster with kubectl from that server. In order to be able to do the same remotely from your laptop you have to take a few extra steps:

1. Install kubectl on your laptop if you don't have it.
2. Copy the .minikube folder from the server with Minikube to your laptop (e.g. to /Users/fedor/work/minikube)
3. Update the clusters, contexts and users sections in your kubectl config file on your laptop ($HOME/.kube/config) with the following content:
apiVersion: v1
clusters:
- cluster:   
    insecure-skip-tls-verify: true
    server: https://YOUR_SERVER:51928
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/fedor/work/minikube/client.crt
    client-key: /Users/fedor/work/minikube/client.key

4. Go to the server and stop Minikube with
minikube stop
5. Forward a port for the Minikube VM from guest port 8443 (the K8s API server) to host port 51928, e.g. with VirtualBox port forwarding.


6. Start Minikube with  
minikube start
7. Check from your laptop that it works:
kubectl get pods

That's it!

30 Jul 2018

Run Oracle XE Docker Container on Amazon EKS

Recently Amazon announced general availability of their new service Amazon Elastic Container Service for Kubernetes (Amazon EKS). This is a managed service to deploy, manage and scale containerized applications using K8s on AWS. I decided to get my hands dirty with it and deployed a Docker container with Oracle XE Database to a K8s cluster on Amazon EKS. In this post I am going to describe what I did to make that happen.

1. Create Oracle XE Docker image.

First of all we need a Docker image with Oracle XE database:

1.1 Clone Oracle GitHub repository to build docker images:
git clone https://github.com/oracle/docker-images.git oracle-docker-images

It will create an oracle-docker-images folder.

1.2 Download Oracle XE binaries from OTN

1.3 Copy the downloaded stuff to ../oracle-docker-images/OracleDatabase/SingleInstance/dockerfiles/11.2.0.2 folder

1.4 Build the Docker image
./buildDockerImage.sh -v 11.2.0.2 -x -I

1.5 Check the new image

docker images oracle/database:11.2.0.2-xe
1.6 Rename the image so you can push it to Docker Hub. E.g.:
docker tag oracle/database:11.2.0.2-xe eugeneflexagon/database:11.2.0.2-xe

Ok, so having done that and pushed the image to Docker Hub (docker push eugeneflexagon/database:11.2.0.2-xe), we have the Oracle XE Docker image stored in a Docker Hub repository.

2. Create K8s cluster on Amazon EKS.

Assuming that you already have an AWS account, take your favorite tambourine (you will need it) and create a K8s cluster following the guide Getting Started with Amazon EKS (a good example of how complicated a "getting started" guide can be made).

Once you are able to see your worker nodes in Ready status, you're good to move forward:
kubectl get nodes --watch
3. Configure Load Balancer

In AWS console go to your EC2 Dashboard and look at the Load Balancers tab:




Click on Create Load Balancer, select Network Load Balancer:



change the listener port to 1521



and specify in Availability Zones the VPC that you have just created for the K8s cluster:

The scheme should be internet-facing.

4. Deploy Oracle XE Docker container to the K8s cluster.

4.1 Create a yaml file with the following content:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oraclexe
  labels:
    run: oraclexe
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: oraclexe
    spec:   
     volumes:
       - name: dshm
         emptyDir:
           medium: Memory
     containers:
       - image: eugeneflexagon/database:11.2.0.2-xe
         volumeMounts:
           - mountPath: /dev/shm
             name: dshm
         imagePullPolicy: Always
         name: oraclexe
         ports:
           - containerPort: 1521
             protocol: TCP
     imagePullSecrets:
       - name: wrelease
     restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: oraclexe-svc
spec:
  selector:
    run: oraclexe
  ports:
    - port: 1521
      targetPort: 1521
  type: LoadBalancer

4.2  Deploy it:
kubectl apply -f oraclexe-deployment.yaml

5. Check how it works

5.1. Get a list of pods and check the logs:
kubectl get pods

kubectl logs -f POD_NAME



Once you see DATABASE IS READY TO USE! in the logs, the database container is up and running.

Note that the container generates a password for the sys and system users at startup. You can find this password in the log:



5.2 Get external IP address of the service:
kubectl get svc

Wait until the address in EXTERNAL-IP column turns from PENDING into something meaningful:



5.3 Connect to the DB:

That's it!

29 Jun 2018

Oracle Jet vs Oracle ADF or Oracle Jet with Oracle ADF

In this post I would like to thank everyone, who managed to attend my session "Oracle Jet vs Oracle ADF or Oracle Jet with Oracle ADF" at ODTUG KScope18 conference in Orlando FL. Thank you guys for coming to listen to me, to learn something new and to ask a lot of interesting questions. 

I promised at the session that the presentation would be available for download. 
The presentation is available here:


That's it!

28 May 2018

Oracle ADF and Oracle Jet work together. Architecture patterns.

In this post I am going to consider various architecture patterns for implementing an application on top of a combination of Oracle ADF and Oracle Jet. An organization practicing ADF may think of incorporating Oracle Jet into existing projects to refresh the look&feel, make it modern and responsive, and implement new features in a new way. It may think of using Oracle Jet for totally new projects and, obviously, for projects related to the development of hybrid applications for mobile devices.
Oracle Jet is all about UI; it covers only the client side. So the server side has to be implemented with something else anyway. Obviously, many organizations will decide to use ADF for that in order to reuse their knowledge, experience, implementations and investments in ADF. It makes perfect sense. So, let's have a look at what options we have when it comes to combining Oracle Jet with Oracle ADF.


The first, most obvious and most popular option is to put Oracle Jet on top of ADF BC. The client side for a web or a hybrid mobile application is implemented with Jet, and the server side is ADF BC exposed as a REST service. With JDeveloper 12.2.x you can expose ADF BC as REST services in a few mouse clicks, just like that.


The advantage of this approach is a pretty simple architecture. And what is simple has a chance to work longer. Another very valuable benefit is that we are reusing our resources, our knowledge and ADF experience, and if our existing ADF application is implemented right, then we are going to reuse the most critical part of the business logic implementation.
However, we have to understand that ADF BC business services working perfectly in an ADF application might be useless for a Jet application. Why is that? The main reason is that we have changed the state management model. We switched from the classic ADF stateful behavior to the REST stateless model. Furthermore, most likely the UI design will be different in Jet web and hybrid applications.
So, we need to create new ADF BC services supporting a stateless model and serving the convenience of the new UI.

The good news is that we don't have to build everything from scratch. If the existing ADF BC model is built in the right way, we can reuse the core part of it, including entities and business logic implemented at the entity level.
So, we can split the entire ADF BC model into a core part containing entities, utilities and shared AMs, and a facade part containing specific AMs and VOs and providing services for an ADF application and for a Jet application.





Having reconsidered our ADF BC and gotten it ready to serve both ADF and Jet applications, we can now incorporate Jet functionality into existing ADF projects. A common architecture approach is to implement some pages of the system with ADF, some web pages with Jet, and to also have a mobile hybrid application, likewise implemented with Oracle Jet.

The advantage of this approach is that we keep things separate. It looks like different applications working on top of a common business model, and each application introduces its own UI, suitable for the use-cases it is implemented for. Furthermore, they provide different entry points to the entire system. We can access it through a regular ADF page, we can go with a mobile device, or we can access it from a Jet web page which in its turn can be easily integrated into any parent web page, for example a portal application.
But this advantage may turn into a disadvantage, as for each entry point we have to think about authentication, internationalization, localization, etc.
This approach also brings more moving pieces into the entire system structure, so CI, CD, automated testing and the environment become more complicated.

Another obvious option would be to integrate Jet content into an ADF page, so that from the user perspective it looks like a single page, but behind the scenes this is a mix of two different web applications.



This option is not my favorite; I would avoid it. Because, basically, what you are doing here is mixing two web applications on the same page. It means there will be two different sessions with separate transactions, and therefore separate entity caches and user contexts.
Jet content does not participate in the JSF lifecycle, so the entire page is submitted in two different ways. ADF prefers to own the entire page, so such nice features as responsive geometry management and Drag&Drop just won't work with the Jet content.
In my opinion this approach makes sense in very specific scenarios when we need to show some content from outside on our page. For example, if our page is a kind of portal or dashboard gathering data from different sources in one place. In this case the same Jet component can be used on a page like that and in a regular Jet application.

The same considerations apply to the opposite approach, when we integrate ADF content into a Jet page by means of a remote task flow call. This technique makes sense, but it should be used only in specific use cases when we want to reuse existing ADF functionality which is not implemented in Jet, at least not at this point in time. It should not be used as a standard instrument to build our application.


At the bottom line, Oracle ADF and Oracle Jet can work perfectly together, and this is a good option for organizations with a solid ADF background. The only thing is to wisely choose the architecture approach for combining these two completely different tools.


That's it!



28 Apr 2018

Building Oracle Jet applications with Docker Hub

In this post I am going to show a simple CI solution for an Oracle Jet application based on the Docker Hub Automated Builds feature. The solution is container native, meaning that Docker Hub is going to automatically build a Docker image according to a Dockerfile. The image is going to be stored in the Docker Hub registry. A Dockerfile is a set of instructions on how to build a Docker image, and those instructions may contain any actions, including building an Oracle Jet application. So, what we need to do is to create a proper Dockerfile and set up a Docker Hub Automated Build.
I am going to build the Oracle Jet application with the OJet CLI, so I have created a Docker image having OJet CLI installed and serving as an actual builder. The image is built with the following Dockerfile:

FROM node
RUN npm install -g @oracle/ojet-cli

The builder image is built by running this command:
docker build -t eugeneflexagon/ojetbuilder .

Having done that, we can use this builder image in a Dockerfile to build our Jet application:
# Create an image from a "builder" Docker image
FROM eugeneflexagon/ojetbuilder

# Copy all sources inside the new image
COPY . .

# Build the application. As a result this will produce the web folder.
RUN ojet build


# Create another Docker image which runs the Jet application.
# It contains Nginx on top of Alpine and our Jet application (the web folder).
# This image is the result of the build and it is going to be stored in Docker Hub.
FROM nginx:1.10.2-alpine
COPY --from=0 web /usr/share/nginx/html
EXPOSE 80

Here we are using the multi-stage build Docker feature: we actually create two Docker images, one for building and one for running, and only the last one is saved as the final image. So, I added this Dockerfile to my source code on GitHub.

The next step is to configure Docker Hub Automated Build:









That was easy. Now we can change the source code, and once it is pushed to GitHub the build is automatically queued:



Once the build is finished we can pull and run the container locally:


docker run -it -p 8082:80 eugeneflexagon/ojetdevops:latest

And see the result at http://localhost:8082


That's it!

18 Apr 2018

Containers, Serverless and Functions in a nutshell

In this post I would like to thank everyone, who managed to attend my session "Containers, Serverless and Functions in a nutshell" at Oracle Code conference in Boston. Thank you guys for coming to listen to me, to learn something new and to ask a lot of interesting questions. 

I promised at the session that the presentation would be available for download. 
The presentation is available here:


That's it!

31 Mar 2018

Deploying to K8s cluster with Fn Function

An essential step of any CI/CD pipeline is deployment. If the pipeline operates with Docker containers and deploys to K8s clusters, then the goal of the deployment step is to deploy a specific Docker image (stored in some container registry) to a specific K8s cluster. Let's say there is a VM where this deployment step is being performed. There are a couple of things to be done with that VM before it can be used as a deploying-to-kubernetes machine:
  • install kubectl (K8s CLI) 
  • configure access to K8s clusters where we are going to deploy 
Having the VM configured, the deployment step does the following:
# kubeconfig file contains access configuration to all K8s clusters we need
# each configuration is called "context"
export KUBECONFIG=kubeconfig

# switch to "google-cloud-k8s-dev" context (K8s cluster on Google Cloud for Dev)
# so all subsequent kubectl commands are applied to that K8s cluster
kubectl config  use-context google-cloud-k8s-dev

# actually deploy by applying k8s-deployment.yaml file
# containing instructions on what image should be deployed and how  
kubectl apply -f k8s-deployment.yaml

In this post I am going to show how we can create a preconfigured Docker container capable of deploying a Docker image to a K8s cluster. So, basically, it is going to work as a function with two parameters: a Docker image and a K8s context. Therefore we are going to create a function in Fn Project based on this "deployer" container and deploy to K8s just by invoking the function over HTTP.

The deployer container is going to be built from a Dockerfile with the following content:
FROM ubuntu

# install kubectl
ADD https://storage.googleapis.com/kubernetes-release/release/v1.6.4/bin/linux/amd64/kubectl /usr/local/bin/kubectl
ENV HOME=/config
RUN chmod +x /usr/local/bin/kubectl
# RUN export does not persist between image layers; use ENV instead
ENV PATH=$PATH:/usr/local/bin

# install rpl
RUN apt-get update
RUN apt-get install rpl -y

# copy into container k8s configuration file with access to all K8s clusters
COPY kubeconfig kubeconfig

# copy into container yaml file template with IMAGE_NAME placeholder
# and an instruction on how to deploy the container to K8s cluster
COPY k8s-deployment.yaml k8s-deployment.yaml

# copy into container a shell script performing the deployment
COPY deploy.sh /usr/local/bin/deploy.sh
RUN chmod +x /usr/local/bin/deploy.sh

ENTRYPOINT ["xargs","/usr/local/bin/deploy.sh"]
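The xargs entrypoint is what turns the container's stdin into positional arguments for deploy.sh. The mechanics can be seen without Docker at all; in this sketch `sh -c` with echo stands in for the real script:

```shell
# xargs appends the whitespace-separated tokens from stdin as arguments;
# with `sh -c` the first token lands in $0, the second in $1
printf 'efedorenko/happyeaster:latest google-cloud-k8s-dev' | \
  xargs sh -c 'echo "image=$0 context=$1"'
```

This prints `image=efedorenko/happyeaster:latest context=google-cloud-k8s-dev`, which is exactly how deploy.sh receives its two parameters.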

It is worth looking at the k8s-deployment.yaml file. It contains an IMAGE_NAME placeholder which is replaced with the actual Docker image name during deployment:

apiVersion: extensions/v1beta1
kind: Deployment

...

    spec:
      containers:
      - image: IMAGE_NAME
        imagePullPolicy: Always
...

The deploy.sh script, which is invoked once the container starts, has the following content:
#!/bin/bash

# replace IMAGE_NAME placeholder in yaml file with the first shell parameter 
rpl IMAGE_NAME "$1" k8s-deployment.yaml

export KUBECONFIG=kubeconfig

# switch to K8s context specified in the second shell parameter
kubectl config use-context "$2"

# deploy to K8s cluster
kubectl apply -f k8s-deployment.yaml
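rpl is handy but not universally packaged; if you would rather skip the extra apt-get step, plain sed can do the same substitution. A sketch (note the `|` delimiter, chosen because image names contain `/`; the file below is a trimmed stand-in for the real k8s-deployment.yaml):

```shell
# stand-in for the real k8s-deployment.yaml
printf '    spec:\n      containers:\n      - image: IMAGE_NAME\n' > /tmp/k8s-deployment.yaml

# replace the placeholder in place, using '|' so '/' in the image name is safe
sed -i "s|IMAGE_NAME|efedorenko/happyeaster:latest|" /tmp/k8s-deployment.yaml

grep 'image:' /tmp/k8s-deployment.yaml
```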

So, we are going to build a docker image from the Dockerfile by invoking this docker command:
docker build -t efedorenko/k8sdeployer:1.0 .
Assuming there is an Fn server up and running somewhere (e.g. on a K8s cluster as described in this post), we can create an Fn application:
fn apps create k8sdeployerapp
Then create a route to the k8sdeployer container:
fn routes create k8sdeployerapp /deploy efedorenko/k8sdeployer:1.0
We have created a function deploying a Docker image to a K8s cluster. This function can be invoked over http like this:
curl http://35.225.120.28:80/r/k8sdeployerapp/deploy -d "efedorenko/happyeaster:latest google-cloud-k8s-dev"
This call will deploy the efedorenko/happyeaster:latest Docker image to a K8s cluster on Google Cloud Platform. Note the payload order: deploy.sh expects the image as the first parameter and the K8s context as the second.


That's it!



24 Mar 2018

Run Fn Functions on K8s on Google Cloud Platform

Recently, I have been playing a lot with functions and Project Fn. Eventually, I got to the point where I had to go beyond the playground on my laptop and out into the real wild world. The idea of running Fn on a K8s cluster seemed very attractive to me, and I decided to do that somewhere on-prem or in the cloud. After doing some research on how to install and configure a K8s cluster on my own on bare metal, I came to the conclusion that I was too lazy for that. So, I went (flew) to the cloud.

In this post I am going to show how to run Fn on a Kubernetes cluster hosted on the Google Cloud Platform. Why Google? There are plenty of other cloud providers with K8s services.
The thing is that Google really has a Kubernetes cluster in the cloud which is available for everyone. They give you the service right away without asking you to apply for preview-mode access (aka "we'll reach out to you once we find you good enough for that"), explaining why you need it, checking your background, credit history, etc. So, Google.

Once you get through all the formalities and finally have access to the Google Kubernetes Engine, go to the Quickstarts page and follow the instructions to install the Google Cloud SDK.

If you don't have kubectl installed on your machine you can install it with gcloud:
gcloud components install kubectl

Follow the instructions on Kubernetes Engine Quickstart to configure gcloud and create a K8s cluster by invoking the following commands:
gcloud container clusters create fncluster
gcloud container clusters get-credentials fncluster
Check the result with kubectl:
kubectl cluster-info
This will give you a list of K8s services in your cluster and their URLs.

Ok, so this is our starting point. We have a new K8s cluster in the cloud on one hand and the Fn project on the other. Let's get them married.

We need to install a tool for managing Kubernetes packages (charts), something similar to apt/yum/dnf/pkg on Linux. The tool is Helm. Since I am a happy Mac user I just did:
brew install kubernetes-helm

The rest of Helm installation options are available here.

The next step is to install Tiller, the server part of Helm, in the K8s cluster:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
helm repo update

If you don't have Fn installed locally, you will want to install it so that you have the Fn CLI on your machine (on Mac or Linux):

curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install > setup.sh
chmod u+x setup.sh
sudo ./setup.sh

Install Fn on K8s cluster with Helm (assuming you do have git client):
git clone git@github.com:fnproject/fn-helm.git && cd fn-helm
helm dep build fn
helm install --name fn-release fn

Wait (a couple of minutes) until Google Kubernetes Engine assigns an external IP to the Fn API in the cluster. Check it with:
kubectl get svc --namespace default -w fn-release-fn-api

Configure your local Fn client with access to Fn running on the K8s cluster:
export FN_API_URL=http://$(kubectl get svc --namespace default fn-release-fn-api -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):80
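The jsonpath expression above is what pulls the external IP out of the service status. If jsonpath ever feels opaque, the extraction it performs can be sanity-checked with sed against canned `-o json` output (a sketch; the JSON below is a trimmed example, not real cluster output):

```shell
extract_ip() {
  # pull the first "ip": "x.x.x.x" value out of the service JSON
  sed -n 's/.*"ip"[[:space:]]*:[[:space:]]*"\([0-9.]*\)".*/\1/p' | head -n 1
}

echo '{"status":{"loadBalancer":{"ingress":[{"ip":"35.225.120.28"}]}}}' | extract_ip
```

This prints `35.225.120.28`, the same value jsonpath would return for `{.status.loadBalancer.ingress[0].ip}`.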

Basically, it's done. Let's check it:
  fn apps create adfbuilderapp 
  fn apps list

Now we can build ADF applications with an Fn function as described in my previous post. Only this time the function, and therefore the build job, will run somewhere high in the cloud.


That's it!

26 Feb 2018

Running Tomcat and Oracle DB in a Docker container

In one of my previous posts I showed how to run an ADF essentials application on Tomcat in a docker container. I am using this approach primarily for sample applications as a convenient way to share a proof-of-concept. In this post I am going to describe how to enrich the docker container with Oracle DB so my samples can be DB aware.

The original Tomcat image that I am developing in these posts is based on Debian Linux. I really don't want to have fun with installing and configuring Oracle DB on Debian Linux and, for sure, I am not going to describe that in this post. What I am going to do instead is use the Docker-in-Docker technique: I am going to take the container from the previous post with ADF-preconfigured Tomcat, install a Docker runtime in that container, pull the Oracle DB image and run it inside the container. There are plenty of discussions about the Docker-in-Docker technique arguing whether it is effective enough. I think I wouldn't go with this approach in production, but for sample applications I am totally fine with it.

Let's start.

1. Run a new container from the image saved in the previous post:
docker run --privileged -it -p 8888:8080 -p 1521:1521 -p 5500:5500 --name adftomcatdb efedorenko/adftomcat bash

Note the --privileged option in the docker command. It is needed so that the container is able to run a Docker engine inside itself.

2.  Install Docker engine in the container:
curl -fsSL get.docker.com -o get-docker.sh

sh get-docker.sh
After successful installation Docker engine should start automatically. It can be checked by running a simple docker command:
docker ps
If the engine has not started (as it happened in my case), start it manually:
service docker start
3. Login to Docker Hub:
docker login
And provide your Docker Hub credentials.

4. Pull and run official Oracle DB Image:
docker run --detach=true --name ADFDB -p 1521:1521 -p 5500:5500  store/oracle/database-enterprise:12.2.0.1

It's done!

Now we have a docker container with a preconfigured Tomcat to run ADF applications and with an Oracle DB running in a container inside the container. We can connect to the DB from both the adftomcatdb container and the host machine as sys/Oradoc_db1@127.0.0.1:1521:ORCLDB as sysdba.

Let's save our work to a docker image, so that we can reuse it later.

5. Create a startup shell script /usr/local/tomcat/start.sh in the container with the following content:
#!/bin/bash
service docker start
docker start ADFDB
catalina.sh start
exec "$@"
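The `exec "$@"` line at the end is what lets the same script both start the services and then hand control over to whatever command the container was run with (bash in step 9). The pattern in isolation, with echo standing in for the services:

```shell
# write a minimal wrapper that mimics start.sh
cat > /tmp/start-demo.sh <<'EOF'
#!/bin/sh
echo "services started"
# replace the shell with the command passed in, preserving its arguments
exec "$@"
EOF
chmod +x /tmp/start-demo.sh

/tmp/start-demo.sh echo "main process running"
```

Because exec replaces the shell process, the passed-in command becomes the container's main process rather than a child of the script.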
6. Remove Docker runtimes folder in the container:
rm -r /var/lib/docker/runtimes/
7. Stop the container from the host terminal:
 docker stop adftomcatdb
8. Create a new image:
docker commit adftomcatdb efedorenko/adftomcatdb:1.0
9. Run a new container out of the created image:
docker run --privileged -it -p 8888:8080 -p 1521:1521 -p 5500:5500 --name adftomcatdb_10 efedorenko/adftomcatdb:1.0 ./start.sh bash
10. Enjoy!


That's it!

31 Jan 2018

Fn Function to build an Oracle ADF application

In one of my previous posts I described how to create a Docker container serving as a builder machine for ADF applications. Here I am going to show how to use this container as a function on Fn platform.

First of all let's update the container so that it meets the requirements of a function, meaning that it can be invoked as a runnable binary accepting some arguments. In an empty folder I have created a Dockerfile (just a plain text file with this name) with the following content:

FROM efedorenko/adfbuilder
ENTRYPOINT ["xargs","mvn","package","-DoracleHome=/opt/Oracle_Home","-f"]

This file contains instructions for Docker on how to create a new Docker image out of an existing one (efedorenko/adfbuilder from the previous post) and specifies an entry point, so that the container knows what to do once it has been started by the docker run command. In this case, whenever we run the container it executes the Maven package goal for the pom file whose name is fetched from stdin. This is important, as the Fn platform uses stdin/stdout for function input/output as its standard approach.
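Substituting echo for mvn shows exactly what command the xargs entrypoint assembles from stdin (a dry run, no Oracle Home needed):

```shell
# same ENTRYPOINT shape as the Dockerfile, with echo standing in for mvn
echo -n "/opt/MySampleApp/pom.xml" | \
  xargs echo mvn package -DoracleHome=/opt/Oracle_Home -f
```

This prints `mvn package -DoracleHome=/opt/Oracle_Home -f /opt/MySampleApp/pom.xml`, the command the container runs for real.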

In the same folder let's execute a command to build a new Docker image (fn_adfbuilder) out of our Docker file:

docker build -t efedorenko/fn_adfbuilder .

Now, if we run the container passing pom file name through stdin like this:

echo -n "/opt/MySampleApp/pom.xml" | docker run -i --rm efedorenko/fn_adfbuilder

The container will execute inside itself what we actually need:

mvn package -DoracleHome=/opt/Oracle_Home -f /opt/MySampleApp/pom.xml

Basically, having done that, we got a container acting as a function. It builds an application for the given pom file.

Let's use this function on the Fn platform. The installation of Fn on your local machine is as easy as invoking a single command and is described on the Fn project GitHub page. Once Fn is installed, we can specify the Docker registry where we store the images of our function containers and start the Fn server:

export FN_REGISTRY=efedorenko 
fn start

The next step is to create an Fn application which is going to use our awesome function:

fn apps create adfbuilderapp

For this newly created app we have to specify a route to our function container, so that the application knows when and how to invoke it:

fn routes create --memory 1024 --timeout 3600 --type async adfbuilderapp /build efedorenko/fn_adfbuilder:latest

We have created a route saying that whenever the /build resource is requested for adfbuilderapp, the Fn platform should create a new Docker container based on the latest version of the fn_adfbuilder image from the efedorenko repository and run it with 1GB of memory, passing arguments to stdin (the default mode). Furthermore, since building is a time- and resource-consuming job, we're going to invoke the function in async mode with an hour timeout. With the route created, we are able to invoke the function with the Fn CLI:

echo -n "/opt/MySampleApp/pom.xml" | fn call adfbuilderapp /build

or over http:

curl -d "/opt/MySampleApp/pom.xml" http://localhost:8080/r/adfbuilderapp/build

In both cases the platform will put the call in a queue (since it is async) and return the call id:

{"call_id":"01C5EJSJC847WK400000000000"}
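Since the call is async, that id is what everything else hangs off. In a script, it can be pulled out of the response with sed (a sketch against the sample response above; the endpoint in the comment is the one used later in the post):

```shell
RESPONSE='{"call_id":"01C5EJSJC847WK400000000000"}'

# strip everything but the value of "call_id"
CALL_ID=$(printf '%s' "$RESPONSE" | sed 's/.*"call_id":"\([^"]*\)".*/\1/')
echo "$CALL_ID"
# then poll: curl http://localhost:8080/v1/apps/adfbuilderapp/calls/$CALL_ID
```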


The function is working now, and we can check on it in a number of ways. Since a function invocation is just creating and running a Docker container, we can see it by listing the running containers:

docker ps

CONTAINER ID        IMAGE                               CREATED             STATUS                NAMES
6e69a067b714        efedorenko/fn_adfbuilder:latest     3 seconds ago       Up 2 seconds          01C5EJSJC847WK400000000000
e957cc54b638        fnproject/ui                        21 hours ago        Up 21 hours           clever_turing
68940f3f0136        fnproject/fnserver                  27 hours ago        Up 27 hours           fnserver
Fn has created a new container and used the function call id as its name. We can attach our stdin/stdout to the container and see what is happening inside:

docker attach 01C5EJSJC847WK400000000000

Once the function has finished, we can use the Fn REST API (or the Fn CLI) to request information about the call:

http://localhost:8080/v1/apps/adfbuilderapp/calls/01C5EJSJC847WK400000000000

{"message":"Successfully loaded call","call":{"id":"01C5EJSJC847WK400000000000","status":"success","app_name":"adfbuilderapp","path":"/build","completed_at":"2018-02-03T19:52:33.204Z","created_at":"2018-02-03T19:46:56.071Z","started_at":"2018-02-03T19:46:57.050Z","stats":[{"timestamp":"2018-02-03T19:46:58.189Z","metrics":
....

http://localhost:8080/v1/apps/adfbuilderapp/calls/01C5EJSJC847WK400000000000/log


{"message":"Successfully loaded log","log":{"call_id":"01C5EKA5Y747WK600000000000","log":"[INFO] Scanning for projects...\n[INFO] ------------------------------------------------------------------------\n[INFO] Reactor Build Order:\n[INFO] \n[INFO] Model\n[INFO] ViewController\n[INFO]
....



We can also monitor function calls in a fancy way by using the Fn UI dashboard.

The result of our work is a function that builds ADF applications. The beauty of it is that the caller just uses the REST API over http to get an application built, without caring how or where the job is done, while knowing for sure that computing resources will be utilized no longer than needed to get the job done.

Next time we'll try to orchestrate the function in Fn Flow.

That's it!