Behold Automation at its Finest

Today’s topic is all about Continuous Integration! Let’s imagine this scenario: you have just finished building an awesome service, one that can crawl the whole wide web or power an e-commerce store. The problem now is that you have to generate your own API Blueprint documentation, manually deploy the service to the cloud, and fiddle with Kubernetes settings. What if there were a way to automate all of this? By simply pushing to the repository, all of that manual labor would be done for you behind the scenes!

(Psst, we will also be using Kubernetes to automate deployment!)

Laying the Groundwork

Of course, laying the groundwork is no menial task. Before we begin, let’s lay down some “rules”: you should have already built at least one service, and you should not be biased against Google Cloud Platform services.

The process is as follows: Local Repository -> Google Cloud Source Repository (private) -> Build Trigger (cloudbuild.yaml)

    # Sample cloudbuild.yaml
    steps:
    - name: 'gcr.io/cloud-builders/go'
      args: ['install', '.']
      env: ['PROJECT_ROOT=$REPO_NAME']
    - name: 'gcr.io/cloud-zen/kube-doc:latest'
      args: ['go', 'run', 'main.go']
      env: ['REPO_NAME=$REPO_NAME']
    - name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$REVISION_ID', '.']
    - name: 'gcr.io/cloud-zen/kube-deploy:latest'
      args: ['set', 'image', 'deployments/$REPO_NAME', '$REPO_NAME=gcr.io/$PROJECT_ID/$REPO_NAME:$REVISION_ID']
    images:
    - 'gcr.io/$PROJECT_ID/$REPO_NAME:$REVISION_ID'

Before pushing to the Google Cloud Source Repository, we first set up a build trigger that looks out for a file named cloudbuild.yaml.
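
Triggers can be created from the Cloud Console or, with a reasonably recent Cloud SDK, from the command line. The command below is only a sketch: the repository name my-service and the branch pattern are placeholders, and the exact flags may differ between gcloud versions.

    gcloud builds triggers create cloud-source-repositories \
        --repo=my-service \
        --branch-pattern="^master$" \
        --build-config=cloudbuild.yaml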

Inside the cloudbuild.yaml sample shown above, the trigger executes a series of steps, and only if every step succeeds is the build marked as successful. In this sample, the service we have built is written in Go, so we first make use of the built-in Go builder image (gcr.io/cloud-builders/go) to run the commands:

    go install
    go test ./...   (optional)

After that, it builds the Docker image with the tag given to it ($REVISION_ID here) and pushes it to the Google Container Registry (GCR). The final step pulls in a custom Docker image that we have built, kube-deploy, to update the Kubernetes deployment.

    # Sample kube-deploy Dockerfile
    FROM ubuntu:16.04
    ENV CLOUDSDK_PYTHON "/usr/bin/python2.7"
    ENV PATH /root/google-cloud-sdk/bin:$PATH
    ENV CLOUDSDK_PYTHON_SITEPACKAGES 1
    # Install dependencies
    RUN apt-get update && apt-get install -y curl && apt-get install -y python2.7
    # Install gcloud
    RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    RUN apt-get update
    RUN curl https://sdk.cloud.google.com | bash
    # Authenticate gcloud
    COPY /configs/gcloud /root/.config/gcloud
    RUN ls -a /root/.config/gcloud
    # Install kubectl
    RUN /root/google-cloud-sdk/bin/gcloud components install kubectl
    # Set and config cluster
    RUN gcloud config set container/cluster cloudzen
    RUN gcloud container clusters get-credentials cloudzen
    # RUN gcloud auth application-default login
    RUN gcloud container clusters describe cloudzen --zone asia-east1-a
    RUN gcloud container clusters list --zone asia-east1-a
    # Updating the cluster
    RUN gcloud container get-server-config --zone=asia-east1-a
    RUN gcloud container clusters get-credentials cloudzen --zone asia-east1-a
    ENTRYPOINT ["kubectl"]

What kube-deploy does is install Python, gcloud and kubectl on an Ubuntu base image, with gcloud already authenticated. Other services can then pull this image and run kubectl to set the new image on the deployment (replica set). With that, you have successfully built an integration that builds and pushes Docker images and, at the right moment, swaps in your new image behind the proxy, so your service is updated with zero downtime and without any manual labor (apart from running git push to the Cloud Source Repository).
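
For kubectl set image deployments/$REPO_NAME to work, the cluster must already contain a Deployment whose name, and whose container name, match the repository name. The manifest below is a minimal sketch rather than part of the original setup; the names my-service and my-project are placeholders, and the apiVersion depends on your cluster version.

    # Hypothetical deployment for a repository named "my-service"
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-service                # matches deployments/$REPO_NAME
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-service
      template:
        metadata:
          labels:
            app: my-service
        spec:
          containers:
          - name: my-service          # matches $REPO_NAME in 'kubectl set image'
            image: gcr.io/my-project/my-service:latest
            ports:
            - containerPort: 80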

Appendix

    # Sample haproxy.cfg - for kubernetes
    global
        log 127.0.0.1 local0
        daemon
        # maxconn 4000
        # debug

    defaults
        log global
        mode http
        option http-server-close
        timeout connect 5s
        timeout client 30s
        timeout client-fin 30s
        timeout server 20s
        timeout tunnel 1h
        stats enable
        stats refresh 5s
        stats show-node
        stats uri /stats/haproxy

    frontend www
        bind *:80
        acl is_kube_aglio path_beg /doc
        use_backend kube-aglio-http if is_kube_aglio
        default_backend nomatch

    backend nomatch
        errorfile 503 /usr/local/etc/haproxy/errors/404.http

    backend kube-aglio-http
        balance roundrobin
        server api1 kube-aglio.default:80 check


Every developer knows that documentation is key to everything, and one nice tool I’ve chanced upon is API Blueprint. In our current repository we have a .apib file which contains our documentation written in the API Blueprint language, and we need some way to render it to HTML. One such tool is aglio, which also generates themes to go along with the HTML. This all works well, but it comes at a price: what happens when we forget to run the command before pushing to the cloud repository? Another problem is that anyone who wants to look at the documentation would have to clone the repository, and that is not something we want. So in the following steps, we will build a function in Go that takes care of passing the .apib file in the repository into a Google Cloud Storage bucket.

    // Sample kube-doc custom build step: uploads the repository's .apib file to a GCS bucket.
    package main

    import (
        "context"
        "fmt"
        "io"
        "os"

        "cloud.google.com/go/storage"
        log "github.com/sirupsen/logrus"
    )

    func main() {
        bucketName := "api-doc-build"
        repoName := os.Getenv("REPO_NAME")
        apibFilePath := os.Getenv("APIB_FILE_PATH")
        ctx := context.Background()

        // Start the GCS client (uses the credentials available in the build environment).
        client, err := storage.NewClient(ctx)
        if err != nil {
            log.WithError(err).Fatal("Authentication Error!")
        }

        if repoName == "" {
            log.Fatal("'REPO_NAME' is required")
        }

        // The object is stored under <repo name>/doc.apib in the bucket.
        object := fmt.Sprintf("%s/doc.apib", repoName)

        doc, err := os.Open(apibFilePath)
        if err != nil {
            log.Fatal("APIB file does not exist")
        }
        defer doc.Close()

        // Stream the .apib file into the bucket.
        wc := client.Bucket(bucketName).Object(object).NewWriter(ctx)
        if _, err := io.Copy(wc, doc); err != nil {
            log.WithError(err).Fatal("Error copying file")
        }
        if err := wc.Close(); err != nil {
            log.WithError(err).Fatal("Error closing writer")
        }

        // Close the client when finished.
        if err := client.Close(); err != nil {
            log.WithError(err).Fatal("Error closing client")
        }
    }

If you build a Docker image with this custom Go script, any repository that pulls the image as a build step will have its .apib file thrown into the bucket. The next step is to create a custom server that allows developers to browse our API documentation without needing to clone our repository or run the aglio command manually.

    # Sample Dockerfile for running the Node server
    FROM node:boron
    # Create app directory
    RUN mkdir -p /usr/src/app
    WORKDIR /usr/src/app
    # Install app dependencies
    COPY package.json /usr/src/app/
    RUN npm install
    # Bundle app source
    COPY . /usr/src/app
    EXPOSE 8080
    CMD [ "npm", "start" ]

    // Sample index.js
    'use strict';
    const express = require('express');
    const aglio = require('aglio');
    const gcs = require('@google-cloud/storage')({
      projectId: '$projectID'
    });
    const bucket = gcs.bucket(process.env.GCS_BUCKET || 'api-doc-build');

    // Configure express app
    const app = express();
    app.set('views', __dirname + '/views');
    app.engine('html', require('ejs').renderFile);

    // List all repositories that have documentation in the bucket
    app.get('/doc', (req, res) => {
      return bucket.getFiles((err, files) => {
        if (err !== null) {
          console.log(err);
          return res.status(500).send({error: 'error getting files'});
        }
        // files is an array of File objects; the id is the URL-encoded
        // object name, so "<repo>%2Fdoc.apib" becomes "<repo>".
        res.render('index.html', {
          docs: files.map((file) => {
            return file.id.split("%")[0];
          })
        });
      });
    });

    // Render the documentation for a single repository
    app.get('/doc/:repo', (req, res) => {
      let fileData = Buffer.alloc(0);
      const repo = req.params.repo;
      const remoteFile = bucket.file(`${repo}/${process.env.APIB_FILE_NAME || 'doc.apib'}`);
      // Validate
      if (repo === "") {
        return res.status(400).send({error: `Invalid repo name`});
      }
      // Download the file from the GCS bucket
      return remoteFile.createReadStream()
        .on('error', function(err) {
          console.log('Error:', err);
          return res.status(404).send({error: '404 File not found'});
        })
        .on('data', function(chunk) {
          fileData = Buffer.concat([fileData, chunk]);
        })
        .on('end', function() {
          // The file is fully downloaded.
          fileData = fileData.toString();
          if (fileData === undefined || fileData === "") {
            return res.status(500).send({error: `${process.env.APIB_FILE_NAME} document can not be empty`});
          }
          // Render the API Blueprint to HTML with aglio
          return aglio.render(fileData, {
            themeVariables: process.env.THEME || 'default'
          }, (err, html) => {
            if (err !== null) {
              return res.status(500).send({error: err});
            }
            return res.send(html);
          });
        });
    });

    app.listen(process.env.HTTP_PORT || 8080);
    console.log(`Running on :${process.env.HTTP_PORT || 8080}`);
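
The /doc route renders views/index.html with the list of repositories found in the bucket. That template is not shown in the article; a minimal EJS sketch (the file name and markup are assumptions) could be:

    <!-- Hypothetical views/index.html: lists the available API docs -->
    <html>
      <body>
        <h1>API Documentation</h1>
        <ul>
          <% docs.forEach(function (repo) { %>
            <li><a href="/doc/<%= repo %>"><%= repo %></a></li>
          <% }); %>
        </ul>
      </body>
    </html>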


The script above downloads the .apib file from the GCS bucket and renders it on a Node Express server. With that, you have just been through the basic fundamentals of continuous integration!