Blog

Continuous delivery in depth #1

by Javier Delgado Garrido | Aug 29, 2016 | Developer

Following on from a previous “Lunch & Learn” session about how Jenkins is used for Stratio’s Continuous Delivery jobs (available on Stratio’s YouTube channel), it seemed logical to provide more information on how we use the Jenkins Pipeline plugin.

In this first issue, we will look at how pipelines are used at Stratio Big Data to achieve full lifecycle traceability, from the development team to the final production environment.

Some pitfalls were mentioned during the “Lunch & Learn” session; a second issue will explain them in detail, so that you can fully understand the nature of the underlying bug and the solution we reached.

Pipelines are code

Each of our pipelines is kept in a private GitHub project under Stratio’s organization, where we maintain several elements:

  • l.groovy
  • libvars.groovy
  • libpipeline.groovy
  • dev-project.groovy

l.groovy is the main home for shared methods, used for parsing files, checking out code, building, running tests and building Docker images: more than 70 methods, most of them “private”. Jenkins Pipeline would allow us to auto-load it from an internal Jenkins repository, but to keep things simple we skip that functionality and keep the file in GitHub.
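
As a rough illustration (the method names, parameters and bodies below are invented for this post, not the actual contents of l.groovy), a shared-methods file of this kind looks similar to:

    // l.groovy -- hypothetical sketch of a shared-methods library.
    // Jenkins steps (git, sh, ...) are available because the file
    // is load()ed from a running pipeline.

    def checkoutRepo(String url, String branch) {
        // Clone the given repository at the given branch into the workspace
        git url: url, branch: branch
    }

    def buildMaven(String goals) {
        // Run a Maven build; -B keeps the log CI-friendly
        sh "mvn -B ${goals}"
    }

    def buildDockerImage(String registry, String image, String tag) {
        // Build and publish a Docker image to the internal registry
        sh "docker build -t ${registry}/${image}:${tag} ."
        sh "docker push ${registry}/${image}:${tag}"
    }

    // Returning 'this' lets the pipeline that load()s the file call its methods
    return this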
libvars.groovy is the home for shared variables. Groovy allows untyped variables, but some of them are typed for easier maintenance. Some of these variables are constants, such as URLs (internal Nexus, Gitolite or Docker registry), Slack channels and default versions.
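
A minimal sketch of such a file, assuming invented names and values (the real URLs and channels are internal):

    // libvars.groovy -- hypothetical sketch; names and values are examples only.
    // @Field makes the variables visible to whoever load()s this script.
    import groovy.transform.Field

    @Field String NEXUS_URL       = 'https://nexus.internal.example/repository'
    @Field String DOCKER_REGISTRY = 'registry.internal.example:5000'
    @Field String GITOLITE_URL    = 'git@gitolite.internal.example'
    @Field String SLACK_CHANNEL   = '#continuous-delivery'
    @Field String DEFAULT_MAVEN   = '3.3.9'

    // Untyped, mutable state filled in later by each project pipeline
    @Field def project
    @Field def branch

    return this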
libpipeline.groovy holds the main method. It decides which operations are performed on the current job. We will come back to this file later.
dev-project.groovy is the real pipeline: real because it loads the previous three files, sets variable values and invokes the main method above. As an example, we can look at the pipeline for one of Stratio’s open source projects (Stratio Crossdata), with comments about its objective:
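
The following is a hypothetical reconstruction of what such a project pipeline could look like; the repository paths, flags and values are illustrative, not the actual file:

    // dev-project.groovy -- hypothetical sketch of the Crossdata pipeline.

    node {
        // Fetch the shared files from the private pipelines repository
        git url: 'git@github.com:Stratio/pipelines.git', branch: 'master'  // hypothetical repo

        def l        = load 'l.groovy'           // shared methods
        def vars     = load 'libvars.groovy'     // shared variables
        def pipeline = load 'libpipeline.groovy' // main method

        // Project-specific values: what to build and which tests to run
        vars.project          = 'crossdata'                    // Stratio Crossdata
        vars.repository       = 'github.com/Stratio/crossdata' // public repo
        vars.branch           = 'master'
        vars.buildTool        = 'maven'
        vars.runUnitTests     = true
        vars.runITTests       = true   // integration tests need Docker services
        vars.runATTests       = true   // acceptance tests too
        vars.buildDockerImage = true

        // Hand control over to the shared main method
        pipeline.doJob(l, vars)
    }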

Back in our libpipeline.groovy, we can see how some of the variables set up earlier are used:
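
Again as a hedged sketch (the stage names and helper methods are the invented ones from above, not the real file), the main method dispatches on those variables:

    // libpipeline.groovy -- hypothetical sketch of the main method. It inspects
    // the variables set by the project pipeline and decides what the job does.

    def doJob(l, vars) {
        stage('Checkout') {
            l.checkoutRepo("https://${vars.repository}.git", vars.branch)
        }

        stage('Build') {
            if (vars.buildTool == 'maven') {
                l.buildMaven('clean package -DskipTests')
            } else {
                sh 'make'   // make-based projects
            }
        }

        if (vars.runUnitTests) {
            stage('Unit tests') {
                l.buildMaven('test')
            }
        }

        if (vars.runITTests) {
            stage('Integration tests') {
                l.runIntegrationTests(vars)   // see the Docker sketch below
            }
        }

        if (vars.buildDockerImage) {
            stage('Docker image') {
                l.buildDockerImage(vars.DOCKER_REGISTRY, vars.project, vars.branch)
            }
        }
    }

    return this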

Some of the functionality not shown above is the most interesting: prior to the integration and acceptance tests, several Docker images are pulled, run and configured. Once the tests have finished, the containers are destroyed. This way we always test against a clean environment.
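
A minimal sketch of that behaviour, assuming invented image names and test command (the real list of services depends on each project):

    // Hypothetical sketch: a clean Docker-based test environment.

    def runIntegrationTests(vars) {
        def containers = []
        try {
            // Pull and start the services the tests depend on
            for (image in ['zookeeper:3.4', 'cassandra:2.2']) {
                sh "docker pull ${image}"
                def id = sh(script: "docker run -d ${image}", returnStdout: true).trim()
                containers << id
            }
            sh 'mvn -B verify'   // run the integration tests against them
        } finally {
            // Destroy the containers whatever the result, keeping runs clean
            for (id in containers) {
                sh "docker rm -f ${id}"
            }
        }
    }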

As you can imagine, both private and public repositories can be checked out from different Git providers (GitHub, GitLab, Bitbucket), and we can work with both Maven and Make projects.
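
One hypothetical way such a multi-provider checkout could be dispatched (the provider map, internal GitLab host and credentials ID are invented for illustration):

    // Sketch: checking out public or private repos from different providers.

    def checkoutFrom(String provider, String org, String repo, String branch) {
        def base = [github   : 'git@github.com:',
                    gitlab   : 'git@gitlab.internal.example:',
                    bitbucket: 'git@bitbucket.org:'][provider]

        // Private repos authenticate with stored credentials
        checkout([$class: 'GitSCM',
                  branches: [[name: branch]],
                  userRemoteConfigs: [[url: "${base}${org}/${repo}.git",
                                       credentialsId: 'ci-ssh-key']]])
    }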

And since most elements can be defined by each development team, some of them can be read from each Git repository itself:
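
For instance, after checking out the project, the pipeline could look for an optional settings file committed by the team. This sketch assumes an invented file name and keys:

    // Hypothetical sketch: team-provided values override the shared defaults.

    if (fileExists('.ci-settings.groovy')) {
        def settings = load '.ci-settings.groovy'
        vars.slackChannel = settings.slackChannel ?: vars.SLACK_CHANNEL
        vars.javaVersion  = settings.javaVersion  ?: '1.8'
    }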

Javier Delgado Garrido
Automation fanatic, continuous tasks (inspection, testing, delivery) evangelist and perpetual new-knowledge addict.