Jenkins

Breaking up with Bamboo was not easy. We had been going steady for a few years. But alas, as with most relationships, sometimes it is just not meant to be. When your partner keeps asking you for a licensing fee every time you go somewhere new, it starts to weigh on you. It consumed too much of my resources from time to time, although sometimes it was not all Bamboo’s fault.

I was reminded of a famous song. If you guessed I Want to Break Free by Queen, then you, my dear reader, have very good taste in music. Hats off to you!

So, as I wanted to break free from Bamboo, I made sure to look out for something new. Someone who does not consume too many of my resources and does not ask me to pay a licensing fee every time I take them somewhere new. Someone who is independent and gives you the ability to define things declaratively. This is when I fell in love with Jenkins.

Ok, so I will end my metaphorical love story here, as it would otherwise turn creepy once I start talking about pipelines and such. So yes, after looking at CircleCI, Travis CI and other hosted pipeline solutions, we decided to go with our own managed Jenkins cluster, which served us well. This is how we made the transition.

Bamboo was easy to work with. The drag-and-drop style UI it provided was intuitive, and it was easy to be productive from the get-go. The dichotomy between build plans and deployment plans was an amazing feature that kept things separate yet inter-linked.

In a start-up culture, it is always a battle between faster time to market, efficiency and, most importantly, cost, as you do not want to burn too much capital early on in your start-up life. Although Bamboo was great, in order to scale along with our rising number of micro-services, we had to keep paying for each agent we deployed on a VM. It came to a point where it was just not cost efficient to keep increasing the number of worker agents, and we had to bear the cost of increased build and deployment times for our micro-services. This was even after we had done most of what we could outside of Bamboo to make our build and deployment phases faster and more efficient.

The other issue we had with Bamboo was that it did not give us a way to define our pipelines as code. In the era of infrastructure as code, it is definitely a nice, or should I say must-have, feature to be able to define your pipeline as code. The major benefit, especially for us, was that we could version control it and make incremental updates that could then be rolled out in a timely manner across our CI/CD platform. Bamboo does have its own API that we could use to mimic something of a pipeline as code, but it was getting way too complicated to define it with the Java API it had.

Along came Jenkins. Oh dear Jenkins, where have you been all my life (sorry, the love is quite strong between us). I knew Jenkins back in the day when it was called Hudson. Remember those days, everyone? Back then it had most of what we wanted, though I lost touch with it after a while. But my goodness, Jenkins today is miles apart from what Hudson was.

Jenkins is an open source CI/CD platform that gives you the ability to define your pipeline as code, among other things of course. You can still use a freestyle project if you do not want to define your pipelines as code. But seriously, people would think you are some weirdo if you do not use pipeline as code with Jenkins.

As most of our team was accustomed to the build and deployment plan concept of Bamboo, I wanted to make the transition to Jenkins as seamless as possible. To do this, I kept the separation of build and deployment plans in Jenkins with the use of separate folders (yes, Jenkins has folders).

So this is a snippet of what it looks like;

Before we dive into what the build and deployment plans look like, I wanted to note down a few plugins that I installed and needed along the way (yes, Jenkins has plugins for almost everything; probably even a plugin that can tell your future, who knows).

This plugin gives a better UI to view your pipeline projects than the default Jenkins UI provides. Quite a nice view, and it was useful.

Bamboo had this feature of sharing a build artifact from a build plan, which could then be downloaded from within a deployment plan. As I was making the transition from Bamboo to Jenkins seamless, I wanted to keep this feature, so I used this plugin to do just that. It allowed me to archive a build artifact which could then be copied from within a deployment plan.

This gives you the ability to create GitHub issues when your build fails. You can define how much of the log you want to include in the created issue, along with other parameters. (This only worked with the declarative scripting style of Jenkins pipelines. More on the different types Jenkins provides later in this post.)

There were instances where I wanted the ability to run a certain plan on a specific node. With your builds defined as pipelines, you can use the node parameter to tell Jenkins which node to run the pipeline on.
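As a rough sketch, pinning a pipeline to a node chosen at build time might look like the following. Note that the NODE_LABEL parameter name here is hypothetical and not from our actual setup:

```groovy
// Sketch only: assumes a build parameter named NODE_LABEL
// (e.g. provided by a node/label parameter) selects the agent.
node("${params.NODE_LABEL}") {
    stage('Build') {
        // This stage now runs only on an agent matching the chosen label.
        sh 'mvn -B clean package'
    }
}
```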

These were the plugins I wanted to highlight, although there are a few more that I added which did not warrant a mention in this post.

Ok moving on, let us see what a build plan looks like right now;

So here, I first disable concurrent builds (I do not want to have to deal with those issues, at least not right now). The next part is where we make use of the Copy Artifact Plugin I mentioned above. Here we say that a project in the DeployProjects folder is allowed to copy the artifact(s) generated from this build plan.
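In scripted pipeline form, a minimal sketch of that configuration could look like the snippet below; the exact folder path is an assumption based on the folder layout described earlier:

```groovy
// Sketch: disable concurrent builds and allow jobs in the
// DeployProjects folder to copy this build's archived artifacts.
properties([
    disableConcurrentBuilds(),
    copyArtifactPermission('../DeployProjects/*')
])
```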

Next up, it’s time to parameterize everything, because what would life be without parameters? (Sad. Life would be sad without parameterized builds.)

In our case, I wanted to parameterize three things.

  • Branch: The GitHub repo branch we want to check out, as I am not using a multibranch pipeline project here.
  • The name of the service: This is related to how we define things in some of the shell scripts that run within the pipeline. (Not that important.)
  • Repo: The Git repository URL.

Note: ms-awesomness in the star-labs organization does not exist in real life. (How cool would it be if it did though, yea?)
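A sketch of how those three parameters might be declared in pipeline code; the default values shown are purely illustrative:

```groovy
// Sketch: the three build parameters described above.
properties([
    parameters([
        string(name: 'branch', defaultValue: 'master',
               description: 'The GitHub branch to check out'),
        string(name: 'SERVICE_NAME', defaultValue: 'ms-awesomness',
               description: 'Name of the micro-service being built'),
        string(name: 'repo',
               defaultValue: 'git@github.com:star-labs/ms-awesomness.git',
               description: 'The Git repository URL')
    ])
])
```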

And now, to look at what you have all been waiting for: the one and only Jenkins pipeline code. Ladies and gentlemen, let me introduce to you, pipeline as code on Jenkins… oh wait, the script is waaaaaay too long and I would not want you to keep scrolling, so you can check the code out on the gist I have shared here. Let me take some time to explain a few things from the script for the remainder of this post.

Before we dive in, I wanted to note that with Pipeline as code on Jenkins, there are mainly two ways you can go about writing your pipeline code.

  • Scripted
  • Declarative

I have used the scripted approach on the gist I have shared with you. It just gave me more flexibility compared to the declarative approach.
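For readers unfamiliar with the two styles, here is a minimal, contrived comparison (not taken from our actual pipeline):

```groovy
// Scripted: plain Groovy inside a node block; maximum flexibility.
node {
    stage('Build') {
        sh 'mvn -B clean package'
    }
}

// Declarative: a stricter, structured syntax inside a pipeline block.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
    }
}
```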

env.JAVA_HOME="${tool 'JDK12'}"
env.MAVEN_HOME="${tool 'M3'}"
env.PATH="${env.JAVA_HOME}/bin:${env.MAVEN_HOME}/bin:${env.PATH}"

The script starts with the node keyword, which is how the scripted version of a pipeline is defined.

These few lines define the environment variables I wanted to set before starting my build process. The tool step refers to the Global Tool Configuration you define in your Jenkins configuration.

Next up, we define our first stage in our pipeline;

stage('Checkout source') {
    git(
        url: "${repo}",
        credentialsId: "${GithubCredentialsId}",
        branch: "${branch}"
    )
    mvnHome = tool 'M3'
    java_path = tool 'JDK12'
}

Here we check out the GitHub repository. Note that ${repo} refers to the build parameter we defined earlier as part of the Jenkins project.

The ${GithubCredentialsId} is actually an environment variable that links to the configured credential ID storing the GitHub robot account credentials. I used an environment variable rather than defining the ID directly so that if we ever wanted to change or add a new account, we would only have to change the environment variable value.

Finally, we set the Maven home and the Java path variables (oh yes, it is a Java project. Long live Java!!)

The other stages that follow are quite similar, so I will not explain those. Let us look at the next important section of that script;

stage('Checkout dev-ops-dependencies') {
    withCredentials([[$class: 'UsernamePasswordMultiBinding',
                      credentialsId: "${GithubCredentialsId}",
                      usernameVariable: 'USERNAME',
                      passwordVariable: 'PASSWORD']]) {
        sh '''
        TAG_NAME=`echo "${branch}" | cut -d'/' -f2`
        bash dev-ops/dependency-scripts/handle-dependencies.sh $TAG_NAME ${SERVICE_NAME} $USERNAME $PASSWORD
        '''
    }
}

So this is a shell script we use internally to do some dependency checks, making sure we have defined everything we need before building our micro-service. The first part uses the withCredentials method. This is a way for us to retrieve, in a secure manner, the GitHub robot username and password we have defined. The only reason we needed this is that the shell script invoked afterwards needs it to do a git checkout of a few other repositories to do its own job. That brings us to the next important point: we can execute shell scripts within Jenkins pipeline code, as shown above.
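For what it is worth, the TAG_NAME line is just a cut on the branch name. Assuming branches are named like release/1.0.3 (an illustrative name, not from our repos), it behaves as follows:

```shell
#!/bin/sh
# Extract the part after the first '/' in a branch name,
# mirroring the TAG_NAME line in the pipeline's sh step.
branch="release/1.0.3"
TAG_NAME=$(echo "$branch" | cut -d'/' -f2)
echo "$TAG_NAME"   # prints: 1.0.3
```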

After the build completes and the tests pass (yay), it is time to archive the artifact (the jar file created) so that it can be downloaded from the deployment plan.

stage('Copy artifacts') {
    archiveArtifacts artifacts: "target/${SERVICE_NAME}*.jar", fingerprint: true
}

That is it. This will archive the artifact and make it available to the respective deployment plan.

Finally it is time to execute the deployment plan to get this micro-service deployed.

stage('Invoke deployment') {
    build job: "../DeployProjects/${SERVICE_NAME}", wait: false, parameters: [
        [$class: 'StringParameterValue', name: 'BUILD_NAME_TO_DEPLOY', value: "${env.JOB_NAME}"],
        [$class: 'StringParameterValue', name: 'BUILD_NUMBER_TO_DEPLOY', value: "${env.BUILD_NUMBER}"],
        [$class: 'StringParameterValue', name: 'BUILD_BRANCH_NAME', value: "${params.branch}"]
    ]
}

I use the build job method, passing in the name of the deployment project I want to execute. The important bits are the wait attribute, which is set to false, meaning I do not want my build plan to wait until the deployment finishes, and the parameters attribute, which passes the build parameters needed for the deployment plan to execute.

The deployment plan then takes over. Copying the artifact that was shared by the build plan is done with the following snippet;

stage('Copy artifact') {
    sh 'rm -rf target/'
    copyArtifacts filter: 'target/*.jar', fingerprintArtifacts: true,
        projectName: '${BUILD_NAME_TO_DEPLOY}',
        selector: specific('${BUILD_NUMBER_TO_DEPLOY}')
}

All the parameters needed for the copy artifact method were passed down from the build plan. The rest of our process usually revolves around taking this artifact and deploying it to Kubernetes.
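Our actual deployment scripts are internal, but a heavily simplified, entirely hypothetical sketch of such a stage could look like this (the registry, image name and deployment name are all made up for illustration):

```groovy
// Hypothetical sketch only: package the copied jar into an image
// and roll it out to Kubernetes. None of these names are real.
stage('Deploy to Kubernetes') {
    sh '''
    docker build -t registry.example.com/${SERVICE_NAME}:${BUILD_NUMBER_TO_DEPLOY} .
    docker push registry.example.com/${SERVICE_NAME}:${BUILD_NUMBER_TO_DEPLOY}
    kubectl set image deployment/${SERVICE_NAME} \
        ${SERVICE_NAME}=registry.example.com/${SERVICE_NAME}:${BUILD_NUMBER_TO_DEPLOY}
    '''
}
```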

And that my dear readers is why and how I broke up with Bamboo and hooked up with Jenkins. Thank you for reading and have an amazing day ahead.
