Bitnami Kubernetes Production Runtime 1.1

At Bitnami we have been working on the Bitnami Kubernetes Production Runtime (BKPR), a curated collection of services that you deploy on top of your Kubernetes cluster (currently GKE and AKS are supported) to enable logging, monitoring, and certificate and DNS management. Gus Lees wrote a great blog post explaining why we created BKPR and what problem we are trying to solve.

One of the goals when we released BKPR 1.0 was that each minor release of BKPR would support 2 versions of Kubernetes, allowing users to follow a tick-tock model for upgrades (they can upgrade BKPR first, with the same Kubernetes version, and then upgrade Kubernetes itself, without having to upgrade both at the same time).

BKPR 1.0 supported Kubernetes 1.9 and 1.10, as 1.10 was the default Kubernetes version for AKS and GKE at the time of that release. As both platforms moved to 1.11, we prepared the BKPR 1.1 release with support for 1.10 and 1.11 to keep that promise.

Aside from keeping up with Kubernetes releases, we want to continue adding components, features and documentation to BKPR. Here are some of the new things you will find:

  • Grafana support. We have added Grafana to the monitoring stack, making it the perfect companion to Prometheus.
  • We have improved our troubleshooting guide, with additions on how to debug certificate management and DNS issues.
  • Although the default use case for BKPR is to deploy it on a new clean cluster, we also want to support migrating to BKPR if you are already running some of these components in your cluster. We have published a migration guide for Prometheus, so people can migrate their Prometheus deployment to the BKPR one without losing data from the time series database. We have applied that migration guide ourselves in our production clusters.
  • To continue improving the security of BKPR we have moved the Kibana image to a non-root image.

Using external cloud services from Kubernetes using the Service Catalog

Kubernetes is a great platform for running containerized workloads in production. Developers can express their application orchestration declaratively and set up CI/CD workflows to deploy to production continuously.

However, many cloud native applications depend on managed services offered by cloud providers. Developers use databases, pub/sub queues, storage and so on from the major clouds, knowing that those services already implement many of the features they would otherwise need to manage themselves (backups, HA, etc.). Combining Kubernetes with external services available from public cloud providers can be a powerful way to deploy cloud native applications. Developers can focus on deploying their applications to their Kubernetes clusters, while delegating things like database management to a public cloud managed service.

In the past, this integration needed to happen outside Kubernetes. Developers had to provision those services and then connect them to their applications by creating secrets with the right credentials, breaking their declarative GitOps workflows. The Service Catalog bridges these two worlds by allowing developers to instantiate services outside Kubernetes directly from their Kubernetes cluster, in a fully declarative way.

Service Catalog is an extension API that enables applications running in Kubernetes clusters to easily use external managed software offerings, such as a datastore service offered by a cloud provider. These services are provided by a Service Broker, an endpoint that talks to those providers. Once the cluster administrator deploys a ClusterServiceBroker, several ClusterServiceClasses and ClusterServicePlans become available in the cluster for users to provision those services. To provision a service, users create a ServiceInstance object, and to connect it to their application they create a ServiceBinding object.
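For example, once a broker has been registered, you can use kubectl to see what it offers (the exact classes and plans depend on the broker you deploy):

$ kubectl get clusterserviceclasses
$ kubectl get clusterserviceplans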

The real power of the Service Catalog is that all of those actions, like provisioning a service instance, creating a binding and using that binding to connect to an application, can be described as declarative YAML files, like any other Kubernetes object. This means they can be part of your application description, and you can create those instances or bindings as part of your CI/CD pipeline or directly from your Helm charts.
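As a rough sketch, and assuming a hypothetical database class and plan exposed by your broker, a ServiceInstance and its ServiceBinding could look like this:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-database
spec:
  # Hypothetical class and plan names; list the real ones with kubectl
  clusterServiceClassExternalName: example-database
  clusterServicePlanExternalName: small
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-database-binding
spec:
  instanceRef:
    name: my-database
  # The credentials returned by the broker are stored in this Secret
  secretName: my-database-credentials

Your application can then consume the my-database-credentials Secret like any other Kubernetes Secret.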

I will be talking at KubeCon North America about the Service Catalog and how you can use it with your Helm charts to deploy Kubernetes applications that consume these cloud managed services. If you are planning to attend, add the talk to your KubeCon schedule. I hope to see you there!

Kubeapps for Oracle Container Engine for Kubernetes

This article originally ran on October 22nd on the Oracle Cloud Native Blog.

Kubeapps is a web-based UI for deploying and managing applications in Kubernetes clusters. It allows your cluster users to deploy applications packaged as Helm charts directly from their browsers.

Bitnami has been working on making the experience of running Kubeapps on top of an Oracle Container Engine for Kubernetes (OKE) cluster a great one, including testing and improving the Bitnami-authored Helm charts so that they work out of the box in OKE clusters.

In this blog post, we will explain how you can deploy Kubeapps into your OKE cluster and use it to deploy any of the many available Bitnami Helm charts. This post assumes that you already have an OKE cluster and that kubectl is configured to talk to it.
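As a quick sanity check (not OKE-specific), you can confirm that kubectl can reach the cluster before continuing:

$ kubectl get nodes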

Install Helm CLI locally and in your cluster

To deploy Kubeapps you will need the Helm CLI tool (helm). Follow the instructions on the Helm GitHub page to install the Helm CLI on your system. Note that Kubeapps 1.0 requires Helm 2.10 or later.
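Once the CLI is installed, you can verify that the client version meets that requirement:

$ helm version --client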

When creating an OKE cluster you have the option to have Tiller (Helm’s server component) deployed into your cluster.

You can check if Tiller is already running in your cluster:

$ kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
kube-dns-66d8df795b-j6jnb        3/3     Running   0          22h
[...]
tiller-deploy-5f547b596c-djbnb   1/1     Running   0          22h

If you have a pod called tiller-deploy-* running, then Tiller is already deployed in your cluster. In that case, you will need to upgrade it by running the following command:

helm init --upgrade --service-account tiller

If Tiller is not deployed yet in your cluster, you can deploy it easily by running:

helm init

Deploy Kubeapps in your cluster

The next step is to deploy Kubeapps in your cluster. This can be done with Helm by running:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install --namespace kubeapps -n kubeapps bitnami/kubeapps

Kubeapps requires a token to log in; that token is then used in every request to make sure that the user has enough permissions to perform the required API calls (if your cluster has RBAC enabled).

For this blog post, we will create a service account with cluster-admin permissions as explained in the Kubeapps documentation.

kubectl create serviceaccount kubeapps-operator  
kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator

The following command reveals the token that we will use to log in to the Kubeapps dashboard:

kubectl get secret $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{.secrets[].name}') -o jsonpath='{.data.token}' | base64 --decode

Accessing the Kubeapps dashboard and logging in

The default values of the Kubeapps Helm chart deploy the main Kubeapps service as a ClusterIP service, which cannot be accessed externally. We will use Kubernetes port forwarding to access it locally:

echo "Kubeapps URL: http://127.0.0.1:8080"
export POD_NAME=$(kubectl get pods --namespace kubeapps -l "app=kubeapps" -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward --namespace kubeapps $POD_NAME 8080:8080

Once the port-forward is running, you can access Kubeapps in your browser at http://localhost:8080.

You will be prompted with a login screen. To log in, paste the token you obtained in the previous section.

Once you are logged in, you can browse all the available charts via the Charts link.

Using Kubeapps to deploy Bitnami charts in your OKE cluster

Bitnami maintains a catalog of more than 50 charts, which have been fully tested and polished to work out of the box on OKE clusters. You can have a look at the Helm charts in the Bitnami repo by browsing to http://localhost:8080/charts/bitnami/.

As an example, we will deploy the Bitnami WordPress Helm chart through Kubeapps.

After selecting the WordPress chart, we will deploy it with the default values, which create a LoadBalancer service and deploy a MariaDB database in the cluster. You can check that both pods are up and running, and that PVCs, backed by OCI block volumes, have been provisioned:

$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
my-wordpress-mariadb-0                  1/1     Running   0          4m
my-wordpress-wordpress-5cfc65b9-dnblz   1/1     Running   0          4m

$ kubectl get pvc
NAME                          STATUS   VOLUME                                                                                CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-my-wordpress-mariadb-0   Bound    ocid1.volume.oc1.phx.abyhqljsslsar3caoqzxotoaci3svroeci4g2pkh7rva6gtck4tqbckslmnq    50Gi       RWO            oci            4m
my-wordpress-wordpress        Bound    ocid1.volume.oc1.phx.abyhqljsiewzlxlyuaeulf6nsm4w2wqnzwq3ho3vbisrcjumroga6l765r6q    50Gi       RWO            oci            4m

Also, as this is a LoadBalancer service, OKE will provision a load balancer with an external IP that you can use to access your new WordPress website.
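For example, assuming the release was named my-wordpress (matching the pod names above), you can retrieve the external IP of the WordPress service with kubectl, waiting for the EXTERNAL-IP column to be populated:

$ kubectl get svc my-wordpress-wordpress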

Summary

In this blog post, we explained how you can use Kubeapps in your Oracle Container Engine for Kubernetes cluster to deploy and maintain OKE-ready Kubernetes applications from Bitnami. These applications were specifically tested for the Oracle platform, and you can rest assured that they follow Bitnami’s secure and up-to-date packaging standards. You can visit the Kubeapps Hub to keep track of what Bitnami charts are available and the supported versions.

Kubeapps New Release

Kubeapps is an application dashboard for your Kubernetes cluster. As opposed to the Kubernetes dashboard, Kubeapps provides a central location for your applications and their full life cycle.

Over the past few weeks we have been working on a set of new features in Kubeapps and we are happy to announce that those new features are available in the latest release of Kubeapps.

Some of these features are not backwards compatible, so we recommend that you use the migration guide if you need to migrate from the previous Kubeapps version.

The latest features include:

Helm Chart to deploy Kubeapps

In this release, we have made the deployment of Kubeapps even easier. We are deprecating the CLI tool and giving users a Helm chart to deploy instead. The Helm chart provides much more flexibility in how you deploy Kubeapps into your cluster, and it is a packaging format that is very popular in the Kubernetes community.

Helm Proxy and Security

One of the initial goals for Kubeapps was to be able to provide a secured Tiller (the in-cluster Helm component) deployment, following the Helm security best practices. You can check out our guide on our recommended way to secure Tiller with TLS certificates and RBAC here.
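As a rough sketch of what that guide covers, a TLS-secured Tiller deployment looks something like the following (the certificate and key file names are placeholders; the guide explains how to generate them):

$ helm init \
    --tiller-tls \
    --tiller-tls-verify \
    --tiller-tls-cert ./tiller.cert.pem \
    --tiller-tls-key ./tiller.key.pem \
    --tls-ca-cert ./ca.cert.pem \
    --service-account tiller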

We wanted to go a step further and make sure that the service account interacting with Kubeapps and installing Helm charts has the permissions required to deploy the components of the chart. For this, we deploy a proxy alongside Tiller, as a sidecar container, that validates each request, checks that the user is allowed to perform the requested operation, and only then forwards it to Tiller. For example, if the chart that the user wants to deploy contains a Deployment, a ConfigMap and a Secret, the proxy will check that the service account initiating the request has RoleBindings associated with Roles that allow it to create those objects in the requested namespace before sending the request to Tiller.
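You can run the same kind of check by hand with kubectl; for example, to verify whether a given service account (the names below are just examples) may create Deployments in a namespace:

$ kubectl auth can-i create deployments \
    --namespace default \
    --as system:serviceaccount:default:kubeapps-operator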

Service Catalog integration improvements

We have made a lot of improvements to the Kubeapps Service Catalog integration in this new release. We have implemented forms that are generated from the JSON schema for a particular service. Now, when you request an instance of a particular service, you will be prompted with a form that is specific to the data required by the Service Broker for that service, instead of having to write the JSON manually yourself.

For example, Google's Cloud Pub/Sub service, available through their GCP Service Broker, only requires a topic name to provision an instance. Instead of having to submit a JSON document with the data, you can fill in that information through a form in Kubeapps.

Once you have provisioned an instance of a service, you need to request a binding, which you will be able to use with your application. When requesting a binding, you will be presented with another form to fill in, again based on the specific service.

If you need a dashboard in your cluster to manage your applications and their life cycles, deploy Kubeapps in your Kubernetes cluster today by following these simple instructions, and let us know what you think!

Deploying a Kubeless function using Serverless templates

Serverless Framework is a framework that helps you build serverless applications and deploy them in a consistent manner to different clouds (Google, AWS, Azure, etc.).

At Bitnami we have been working alongside the Serverless team to include Kubeless, the serverless framework for Kubernetes, in that list, helping people who are already familiar with the Serverless Framework deploy their applications on premises on top of Kubernetes.

In a previous blog post, we introduced the Kubeless plugin for Serverless and explained how we could deploy a function using the framework, which included having to create a serverless.yml file for your particular function.

From Serverless Framework version 1.20 onwards, building and deploying functions to Kubeless has become even easier, with the introduction of Kubeless templates. In this blog post, we will show you how to build and deploy a Python function using templates.

This post assumes you have Kubeless installed in your Kubernetes cluster. You can follow the installation process for Kubeless in its README.md file.

The first thing we will need to do is to install the Serverless CLI globally:

$ npm install serverless -g

Once installed, we will create the needed scaffolding for our Python function, using the right template and specifying an optional path for your service:

$ serverless create --template kubeless-python --path new-project

This will create the needed files in the new-project folder.

Inside the folder you will find the following files: serverless.yml, handler.py and package.json.

serverless.yml is the Serverless Framework description file for your function.

# This is a serverless framework way to group
# several functions. Not to be confused with K8s services
service: new-project

provider:
  name: kubeless
  runtime: python2.7

plugins:
  - serverless-kubeless

functions:
  # The top name will be the name of the Function object
  # and the K8s service object to get a request to call the function
  hello:
    # The function to call as a response to the HTTP event
    handler: handler.hello

handler.py is the file where you define the Python function to call as a response to the HTTP event. The template creates an example hello function for you:

import json


def hello(request):
    body = {
        "message": "Go Serverless v1.0! Your function executed successfully!",
        "input": request.json
    }

    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }

    print("hello world!")

    return response

package.json is the npm package definition for our functions with all their dependencies, including the serverless-kubeless plugin.

Let’s try deploying the example function to Kubeless and run it:

$ cd new-project  
# Install npm dependencies  
$ npm install  
# Deploy the service using serverless  
$ serverless deploy

We can see that there is a new Service in our cluster with the same name as our function, as well as a Function object:

$ kubectl get svc
NAME      CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
hello     10.0.0.91    <none>        8080/TCP   2h

$ kubectl get functions
NAME      KIND
hello     Function.v1.k8s.io

Now, let’s run the function and get the logs:

$ serverless invoke --function hello --data '{"Kubeless": "Welcome!"}' -l

Serverless: Calling function: hello...  
--------------------------------------------------------------------  
{ body: '{"input": {"Kubeless": "Welcome!"}, "message": "Go Serverless v1.0! Your function executed successfully!"}',  
 statusCode: 200 }

You can open up a separate tab in your console and stream all logs for a specific Function using this command:

$ serverless logs -f hello -t

That’s it! You can see how easy and straightforward it is to deploy a Python function in Kubeless using the Serverless Framework templates.

Also for Node.js functions

There is also a Node.js template for Kubeless. You can create the scaffolding for your Node.js application by specifying the kubeless-nodejs template:

$ serverless create --template kubeless-nodejs --path my-node-project

In version 1.20 of the Serverless Framework, there is an open bug in the Kubeless template for Node.js: the runtime name it generates is wrong. Check your serverless.yml file and make sure the runtime is set to nodejs6, or change it to that before deploying the function. This will be fixed in the next version of the Serverless Framework.
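For reference, after fixing the runtime name, the provider section of the generated serverless.yml should look roughly like this:

provider:
  name: kubeless
  runtime: nodejs6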