31 Mar 2018

Deploying to K8s cluster with Fn Function

An essential step of any CI/CD pipeline is deployment. If the pipeline operates with Docker containers and deploys to K8s clusters, then the goal of the deployment step is to deploy a specific Docker image (stored in some container registry) to a specific K8s cluster. Let's say there is a VM where this deployment step is performed. A couple of things need to be done with that VM before it can be used as a deploying-to-Kubernetes machine:
  • install kubectl (K8s CLI) 
  • configure access to K8s clusters where we are going to deploy 
Having the VM configured, the deployment step does the following:
# kubeconfig file contains access configuration to all K8s clusters we need
# each configuration is called "context"
export KUBECONFIG=kubeconfig

# switch to "google-cloud-k8s-dev" context (K8s cluster on Google Cloud for Dev)
# so all subsequent kubectl commands are applied to that K8s cluster
kubectl config use-context google-cloud-k8s-dev

# actually deploy by applying k8s-deployment.yaml file
# containing instructions on what image should be deployed and how  
kubectl apply -f k8s-deployment.yaml

In this post I am going to show how we can create a preconfigured Docker container capable of deploying a Docker image to a K8s cluster. Basically, it is going to work as a function with two parameters: a Docker image and a K8s context. We are then going to create a function in Fn Project based on this "deployer" container and deploy to K8s just by invoking the function over HTTP.

The deployer container is going to be built from a Dockerfile with the following content:
FROM ubuntu

# install kubectl
ADD https://storage.googleapis.com/kubernetes-release/release/v1.6.4/bin/linux/amd64/kubectl /usr/local/bin/kubectl
ENV HOME=/config
RUN chmod +x /usr/local/bin/kubectl
# extend PATH via ENV ("RUN export PATH=..." would not persist between image layers)
ENV PATH=$PATH:/usr/local/bin

# install rpl
RUN apt-get update && apt-get install -y rpl

# copy into container k8s configuration file with access to all K8s clusters
COPY kubeconfig kubeconfig

# copy into container yaml file template with IMAGE_NAME placeholder
# and an instruction on how to deploy the container to K8s cluster
COPY k8s-deployment.yaml k8s-deployment.yaml

# copy into container a shell script performing the deployment
COPY deploy.sh /usr/local/bin/deploy.sh
RUN chmod +x /usr/local/bin/deploy.sh

ENTRYPOINT ["xargs","/usr/local/bin/deploy.sh"]
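The ENTRYPOINT deserves a note: Fn passes the function input to the container via stdin, and xargs turns that input into command-line arguments for deploy.sh. The behavior can be sketched locally with a stub command in place of the script (the image and context values here are just placeholders):

```shell
# xargs reads whitespace-separated tokens from stdin and appends them
# as arguments to the given command, just like the ENTRYPOINT above does
printf 'efedorenko/happyeaster:latest google-cloud-k8s-dev' \
  | xargs sh -c 'echo "image=$0 context=$1"'
# → image=efedorenko/happyeaster:latest context=google-cloud-k8s-dev
```

This is also why the function input is a single whitespace-separated string rather than, say, JSON.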

It is worth looking at the k8s-deployment.yaml file. It contains an IMAGE_NAME placeholder which is going to be replaced with the actual Docker image name during deployment:

apiVersion: extensions/v1beta1
kind: Deployment

...

    spec:
      containers:
      - image: IMAGE_NAME
        imagePullPolicy: Always
...
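For reference, the elided parts of the template might look something like this sketch (the deployment name, labels, replica count, and port here are hypothetical placeholders, not values from the original file; extensions/v1beta1 was the current Deployment API version at the time):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: happyeaster            # hypothetical deployment name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: happyeaster       # hypothetical label
    spec:
      containers:
      - name: happyeaster      # hypothetical container name
        image: IMAGE_NAME      # placeholder replaced by deploy.sh
        imagePullPolicy: Always
        ports:
        - containerPort: 80    # hypothetical port
```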

The deploy.sh script, which is invoked when the container starts, has the following content:
#!/bin/bash

# replace the IMAGE_NAME placeholder in the yaml file with the first parameter
rpl IMAGE_NAME "$1" k8s-deployment.yaml

export KUBECONFIG=kubeconfig

# switch to K8s context specified in the second shell parameter
kubectl config use-context "$2"

# deploy to K8s cluster
kubectl apply -f k8s-deployment.yaml
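As a side note, rpl is the only reason for the apt-get steps in the Dockerfile; the same in-place substitution can be done with sed, which already ships with ubuntu. A runnable sketch of the substitution deploy.sh performs (using a temporary file and a hardcoded image name in place of "$1"):

```shell
# the image name; in deploy.sh this would come in as "$1"
image='efedorenko/happyeaster:latest'

# a one-line stand-in for the real k8s-deployment.yaml template
echo '- image: IMAGE_NAME' > /tmp/k8s-deployment.yaml

# replace the placeholder in place; the | delimiter avoids
# clashing with the slashes in the image name
sed -i "s|IMAGE_NAME|$image|g" /tmp/k8s-deployment.yaml

cat /tmp/k8s-deployment.yaml
# → - image: efedorenko/happyeaster:latest
```

Note that `sed -i` with no suffix is GNU sed syntax, which is what the ubuntu base image provides; on macOS it would be `sed -i ''`.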

So, we are going to build a Docker image from the Dockerfile by invoking this command:
docker build -t efedorenko/k8sdeployer:1.0 .
Assuming there is Fn up and running somewhere (e.g. on a K8s cluster as described in this post), we can create an Fn application:
fn apps create k8sdeployerapp
Then create a route to the k8sdeployer container:
fn routes create k8sdeployerapp /deploy efedorenko/k8sdeployer:1.0
We have created a function that deploys a Docker image to a K8s cluster. It can be invoked over HTTP like this (note the argument order matches deploy.sh: image first, then context):
curl http://35.225.120.28:80/r/k8sdeployerapp/deploy -d "efedorenko/happyeaster:latest google-cloud-k8s-dev"
This call will deploy the efedorenko/happyeaster:latest Docker image to a K8s cluster on Google Cloud Platform.


That's it!



24 Mar 2018

Run Fn Functions on K8s on Google Cloud Platform

Recently, I have been playing a lot with Functions and Project Fn. Eventually, I got to the point where I had to go beyond the playground on my laptop and out into the real wild world. The idea of running Fn on a K8s cluster seemed very attractive to me, and I decided to do that somewhere on prem or in the cloud. After doing some research on how to install and configure a K8s cluster on your own on bare metal, I came to the conclusion that I was too lazy for that. So, I went (flew) to the cloud.

In this post I am going to show how to run Fn on a Kubernetes cluster hosted on the Google Cloud Platform. Why Google? There are plenty of other cloud providers with K8s services.
The thing is that Google really has a Kubernetes cluster in the cloud which is available to everyone. They give you the service right away without asking you to apply for preview-mode access (aka "we'll reach out to you once we find you good enough for that"), explaining why you need it, checking your background, credit history, etc. So, Google.

Once you get through all the formalities and finally have access to the Google Kubernetes Engine, go to the Quickstarts page and follow the instructions to install the Google Cloud SDK.

If you don't have kubectl installed on your machine you can install it with gcloud:
gcloud components install kubectl

Follow the instructions on Kubernetes Engine Quickstart to configure gcloud and create a K8s cluster by invoking the following commands:
gcloud container clusters create fncluster
gcloud container clusters get-credentials fncluster
Check the result with kubectl:
kubectl cluster-info
This will give you a list of K8s services in your cluster and their URLs.

Ok, so this is our starting point. We have a new K8s cluster in the cloud on one hand and Fn project on another hand. Let's get them married.

We need to install a tool for managing Kubernetes packages (charts), something similar to apt/yum/dnf/pkg on Linux. The tool is Helm. Since I am a happy Mac user, I just did:
brew install kubernetes-helm

The rest of Helm installation options are available here.

The next step is to install Tiller, the server part of Helm, in the K8s cluster:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade

If you don't have Fn installed locally, install it so that you have the Fn CLI on your machine (on Mac or Linux):

curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install > setup.sh
chmod u+x setup.sh
sudo ./setup.sh

Install Fn on the K8s cluster with Helm (assuming you have a git client):
git clone git@github.com:fnproject/fn-helm.git && cd fn-helm
helm dep build fn
helm install --name fn-release fn

Wait (a couple of minutes) until Google Kubernetes Engine assigns an external IP to the Fn API in the cluster. Check it with:
kubectl get svc --namespace default -w fn-release-fn-api

Configure your local Fn client with access to Fn running on the K8s cluster:
export FN_API_URL=http://$(kubectl get svc --namespace default fn-release-fn-api -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):80

Basically, it's done. Let's check it:
fn apps create adfbuilderapp
fn apps list

Now we can build ADF applications with an Fn function as described in my previous post. Only this time the function, and therefore the build job, will run somewhere high in the cloud.


That's it!