Following on from a previous “Lunch & Learn” session about how Jenkins is being used for Stratio’s Continuous Delivery jobs (you can watch it on Stratio’s YouTube channel), it seemed logical to provide more information on our usage of the Jenkins Pipeline plugin.

In this first issue, we will follow how pipelines are being used at Stratio Big Data to achieve full lifecycle traceability, from the development team to a final production environment.

Some pitfalls were mentioned during the “Lunch & Learn” meeting; they will be explained in a second issue, to help you fully comprehend the nature of the underlying bug and the solution we reached.

Pipelines are code

Each of our pipelines is kept in a private GitHub project under Stratio’s organization, where we maintain several elements:

  • l.groovy
  • libvars.groovy
  • libpipeline.groovy
  • dev-project.groovy

l.groovy is the main place for shared methods, used for parsing files, checking out code, building, running tests and building Docker images. It holds over 70 methods, most of them “private”. Jenkins Pipeline would allow us to auto-load it from an internal Jenkins repo but, to make things easier, we skip that functionality and keep the file in GitHub.
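To give an idea of its shape, here is a minimal sketch of such a shared-methods file (method bodies simplified; this is not Stratio’s actual code):

import groovy.transform.Field

// l.groovy (sketch). The shared vars and the pipeline script get
// attached to these fields by the loading job
@Field def v
@Field def pipeline

def doBuild() {
    // BUILDTOOL comes from libvars.groovy; a make project would be handled analogously
    if (v.BUILDTOOL == 'maven') {
        sh 'mvn -B clean compile'
    }
}

// Most of the 70+ methods are "private" helpers along these lines
private doCleanWorkspace() {
    deleteDir()
}

return this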
libvars.groovy is the home of shared variables. Groovy allows untyped variables, but some of them are typed for better maintenance. Some of these variables are constants, such as URLs (internal Nexus, Gitolite or Docker registry), Slack channels and default versions.
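Again as an illustrative sketch (every value below is hypothetical; the real ones are internal):

import groovy.transform.Field

// libvars.groovy (sketch)
@Field String NEXUS = 'https://nexus.internal.example.com'
@Field String SLACKCHANNEL = '#ci-notifications'
@Field String DEFAULTVERSION = '0.1.0-SNAPSHOT'

// Untyped slots, to be filled in by each project's pipeline
@Field def MODULE
@Field def REPO

return this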
libpipeline.groovy holds the main method. It decides which operations will be performed in the current job. We will come back to this file later.
dev-project.groovy is the real pipeline. Real because it loads the previous three files, sets variable values and invokes the aforementioned main method. As an example, we can look at one of Stratio’s open source projects (Stratio Crossdata), with comments explaining its objectives:

import groovy.transform.Field

@Field def l // script object holding the shared methods, vars and pipeline

node('master') { //Common element load
    // The job's workspace folder name, with Jenkins' '@' suffixes stripped
    def cdpwd = pwd().split('/').reverse()[0].replaceAll('@.*', '')
    l = load "../${cdpwd}@script/l.groovy"
    l.v = load "../${cdpwd}@script/libvars.groovy"
    l.pipeline = load "../${cdpwd}@script/libpipeline.groovy"
}

// Some metadata for checking out, identifying, and messaging about warnings/errors
l.v.MODULE = 'crossdata' 
l.v.MAIL = 'crossdata@stratio.com'
l.v.REPO = 'Stratio/crossdata'
l.v.ID = 'xd'
l.v.SLACKTEAM = 'stratiocrossdata'
l.v.FAILFAST = true 

// Stratio is polyglot, and so are its developments. 
// We need to know what build tool we have to use
l.v.BUILDTOOL = 'maven' 

// Should we deploy to the Sonatype OSS repository (so Maven artifacts become public)?
l.v.FOSS = true 

// Each PR gets statuses, as soon as each run action passes or fails (see the sketch after this listing)
l.v.PRStatuses = ['Compile', 'Unit Tests', 'Integration Tests', 'Code Quality'] 

l.v.MERGETIMEOUT = 70  // Timeouts (in minutes) for each kind of operation we could perform
l.v.PRTIMEOUT = 30
l.v.RELEASETIMEOUT = 30
l.v.SQUASHTIMEOUT = 15

l.v.MERGEACTIONS = { // If our git hook sent a payload related to a PR being merged 
                  l.doBuild()

                  parallel(UT: {
                     l.doUnitTest()
                  }, IT: {
                     l.doIntegrationTest()
                  }, failFast: l.v.FAILFAST)

                  l.doPackage() //Might be a tgz, deb, jar, war
                  // java-scaladocs are published to our s3 bucket 
                  // (such as http://stratiodocs.s3-website-us-east-1.amazonaws.com/cassandra-lucene-index/3.0.6.1/)
                  l.doDoc() 

                  parallel(CQ: {
                     // Static code analysis with Sonarqube 
                     // and coveralls.io (for some FOSS projects)
                     l.doCodeQuality() 
                  }, DEPLOY: {
                     l.doDeploy()
                  }, failFast: l.v.FAILFAST)

                  // And push it to our internal docker registry
                  //, for a later usage in tests and demos
                  l.doDockerImage() 
                  // A Marathon cluster deploys the previously built image
                  l.doMarathonInstall('mc1') 
                  l.doAcceptanceTests(['basic', 'auth', 'cassandra', 'elasticsearch', 'mongodb', 'mesos', 'yarn'])
                 }

l.v.PRACTIONS = { // If our git hook sent a payload about a PR being opened or synced
               l.doBuild()

               parallel(UT: {
                  l.doUnitTest()
               }, IT: {
                  l.doIntegrationTest()
               }, failFast: l.v.FAILFAST)

               l.doCodeQuality()
               // We deploy a subset of our wannabe packages to a staging repo
               l.doStagePackage()
               // Works like Packer, building a temporary Docker image, so a container can be used for testing
               l.doPackerImage() 
               l.doAcceptanceTests(['basic', 'auth', 'cassandra', 'elasticsearch', 'mongodb', 'mesos', 'yarn'])
              }

l.v.BRANCHACTIONS = { // We could receive a hook signalling a branch to be forged
                   l.doBranch()
                  }

l.v.RELEASEACTIONS = { // So we could release a final version
                    l.doRelease()
                    l.doDoc()
                    l.prepareForNextCycle()
                    // This time the image is the real deal
                    // It will end @ Docker Hub (https://hub.docker.com/r/stratio/)
                    l.doDockerImage() 

                    // Deploying again, to a production Marathon cluster
                    l.doMarathonInstall('mc2')
                    // Let the world know a new version is released, and spread its changelog
                    l.doReleaseMail() 
                   }

l.v.SQUASHACTIONS = {
                   // Currently just checks the PR's statuses, rebases it,
                   // invokes l.v.PRACTIONS, and merges the PR
                   l.doSquash() 
                  }

l.pipeline.roll()
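Those PR statuses end up as commit statuses on the pull request itself. The real implementation lives in l.groovy; as a hedged illustration, one of them could be posted through GitHub’s status API like this (the helper name, credential id and v.* fields are assumptions, not Stratio’s actual code):

// Hypothetical helper: report one pipeline action as a GitHub commit status
def doSetPRStatus(String context, String state) {
    withCredentials([string(credentialsId: 'github-token', variable: 'GITHUB_TOKEN')]) {
        // v.REPO is assumed to hold "owner/repo", v.COMMIT the PR's head sha
        sh """
            curl -s -H "Authorization: token \$GITHUB_TOKEN" \\
                 -d '{"state": "${state}", "context": "${context}"}' \\
                 https://api.github.com/repos/${v.REPO}/statuses/${v.COMMIT}
        """
    }
}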

Going back to our libpipeline.groovy, we can see how some of the previously set variables are used:

def roll() {
    timestamps {
        try {            
            l.credentialsHandler()
            // The git hook gets parsed, so we can know the committer, 
            // pusher, ref, commitid, for a later git fetch
            l.doParseHook() 
            l.signalJobStart()

            if (l.isMerge()) {
                currentBuild.description = "${l.v.REF.replaceAll('refs/heads/', '')}"
                // Stages limit concurrency until milestone step 
                // (https://github.com/jenkinsci/pipeline-milestone-step-plugin) gets live
                stage name: "${currentBuild.description}", concurrency: 1 
                
                timeout(time: l.v.MERGETIMEOUT, unit: 'MINUTES') {
                    // Checkout some git ref, to a previously clean workspace
                    l.doFetch() 
                    // http://slack.com is one of our preferred notification systems
                    l.doSlack('started')
                    // What is the Crossdata team expecting to be run when merging a PR? See a few lines above
                    l.v.MERGEACTIONS.call() 
                    // And we love to know our builds passed
                    l.doSlack('passed') 
                }                

            } else if (l.isPR()) { //Pull-merge requests
                currentBuild.description = l.evalPRDescription()
                stage name: "PR${PR}", concurrency: 1

                timeout(time: l.v.PRTIMEOUT, unit: 'MINUTES') {
                    l.doFetch()
                    l.doSlack('started')
                    l.v.PRACTIONS.call()
                    l.doSlack('passed')
                }

            } else if (l.isTagForReleaseBranching()) {              
                currentBuild.description = "Some branch to forge"
                stage name: "${l.v.BASEREF.replaceAll('refs/heads/', '')}", concurrency: 1
                timeout(time: l.v.BRANCHTIMEOUT, unit: 'MINUTES') {
                    l.doFetch()
                    l.doSlack('started')
                    l.v.BRANCHACTIONS.call()
                }
                l.doSlack('passed')

            } else if (l.isRCorRelease()) {                
                currentBuild.description = "${l.v.REF.replaceAll('refs/tags/', '').replaceAll('-RELEASE', '')} to be released"
                stage name: "${l.v.BASEREF.replaceAll('refs/heads/', '')}", concurrency: 1
                timeout(time: l.v.RELEASETIMEOUT, unit: 'MINUTES') {
                    l.doFetch()
                    l.doSlack('started')
                    l.v.RELEASEACTIONS.call()
                }
                l.doSlack('passed')

            } else if (l.isCommentOnPR()) {
                currentBuild.description = "Squashing ${l.evalPRDescription()}"
                stage name: "PR${PR}", concurrency: 1
                timeout(time: l.v.SQUASHTIMEOUT, unit: 'MINUTES') {
                    l.doFetch()
                    l.doSlack('started')
                    l.v.SQUASHACTIONS.call()
                }
                l.doSlack('passed')

            } else {                
                currentBuild.result = 'NOT_BUILT'
                currentBuild.description = 'github operation omitted'
            }
            
        } catch (e) {
            l.doSlack('failed') // DUH!
            // mail notifications are not welcomed, but here they come
            l.doFlowExceptionCatch(e) 
        } finally {
            // Clean up resources, and get ready for a next run
            l.doFinalize() 
        }
    }
}

return this;
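The is* helpers that route each run are, in essence, checks over the hook payload parsed by l.doParseHook(). A minimal sketch, assuming hypothetical payload fields (v.EVENT, v.ACTION):

def isMerge() {
    v.EVENT == 'push' && v.REF?.startsWith('refs/heads/')
}

def isPR() {
    v.EVENT == 'pull_request' && v.ACTION in ['opened', 'synchronize']
}

def isRCorRelease() {
    v.EVENT == 'push' && v.REF?.startsWith('refs/tags/')
}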

Some of the unmentioned functionality is the brightest: prior to the integration and acceptance tests, several Docker images are pulled, run and configured. Once the tests end, the containers get destroyed. This way we can enjoy a clean environment for every test run.
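A hedged sketch of that pattern (the helper and the image names are ours, for illustration; the real wiring lives in l.groovy):

// Throwaway containers around a test phase
def withTestContainers(List images, Closure tests) {
    def ids = []
    try {
        images.each { img ->
            sh "docker pull ${img}"
            // keep each container id so it can be destroyed afterwards
            ids << sh(script: "docker run -d ${img}", returnStdout: true).trim()
        }
        tests.call()
    } finally {
        // tests done: tear everything down, ready for the next clean run
        ids.each { id -> sh "docker rm -f ${id}" }
    }
}

Something like withTestContainers(['cassandra:2.2', 'mongo:3.2']) { l.doAcceptanceTests(['basic']) } then gives every suite a fresh environment.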

As you can imagine, both private and public repos can be checked out from different git providers (GitHub, GitLab, Bitbucket), and we are able to work with both Maven and Make projects.

And since most of these elements can be defined by each development team, some of them can be read directly from each git repo.
