
Advanced Jenkins setup: Creating Jenkins configuration as code and setting up Kubernetes plugin

This blog post demonstrates how anything in Jenkins can be configured as code through its Java API using Groovy, and how changes can be applied right inside a Jenkins job. In particular, I will demo how to configure the Kubernetes plugin and credentials, but the same concept can later be used to configure any Jenkins plugin you are interested in. We will also look at how to create custom config that can be used either for all
or only specific Jenkins instances, so you can set up different instances differently based on security policy or any other criteria.

The Why…

Recently I have been working on a task to improve the deployment of our master Jenkins instances on Kubernetes.
One of the requirements was to improve speed: we have more than 40 Jenkins masters running in different
environments like test, dev, pre-prod, perf, prod etc., deployed on a Kubernetes cluster running on AWS. The deployment job took around an hour, involved downtime and required multiple steps.

The process of delivering a new version of Jenkins was as below:

  1. Raising a PR with the various plugin config changes baked into the Docker image: adding a new slave to the Kubernetes cluster, updating a slave image version, increasing the container cap, adding a new environment variable or a new Vault secret,
    adding new credentials, updating script approval with a new method, etc.
  2. After a thorough code review by fellow DevOps engineers, the PR would be merged and a nightly deployment scheduled.
  3. Finally, the deployment job would deploy all masters included in the config file with the latest image.

Issues with this approach.

One of the issues was that active jobs could still be running on an instance at the time it was being updated. Since an update meant deleting the Kubernetes deployment of the master and deploying a new image, all jobs on that instance were simply lost. That was particularly disruptive for the performance environment and for some long-running end-to-end test jobs, as they would normally run overnight. And daily deployment aside, deploying more often would simply be impossible given hundreds of slaves running builds all day long.

The deployment job also ran sequentially, so if a deployment failed you would only find out at the end of the build from the generated report, and would then have to rerun
the deployment for that particular failed instance.

But even after the deployment job had been improved (deploying instances in parallel, adding a check to wait until no active jobs were running on an instance, running health checks, retrying, etc.), the main issue was still there: every subtle change required a new image and a long process, downtime was inevitable, and you couldn't update Jenkins promptly.

This, in turn, ended up with urgent changes being applied manually through the UI, which meant the config in the Docker image and on the active instance could easily diverge or conflict.

The What…

So it seemed a different approach was required, one that would bring the following features:

  1. Being able to update any Jenkins master or slave immediately: no new image, no redeploy, no downtime
  2. No manual changes through the UI: everything is kept as code
  3. Jenkins' current state and the state of the image + config are kept in sync
  4. Any change can be tested immediately, without the vicious cycle of create a new image, deploy, test, and if it fails, repeat!
  5. A configuration that can be applied to specific environments only (prod vs test/dev Jenkins), with inheritance of common config and custom per-instance config

If you have never Dockerised or run Jenkins as a container before, or never administered Jenkins in general, it is worth reading my blog posts dedicated to Jenkins:

Dockerizing Jenkins, Part 1: Declarative Build Pipeline With SonarQube Analysis
Dockerizing Jenkins, part 2: Deployment with maven and JFrog Artifactory
Dockerizing Jenkins, part 3: Securing password with docker-compose, docker-secret and jenkins credentials plugin
Part 4: Putting Jenkins Build Logs Into Dockerized ELK Stack

Part 1 shows how to Dockerise Jenkins and covers its administration and configuration in detail, so you will be better prepared for this post, as it requires some familiarity with running Jenkins inside a container and administering it.

The How: Enter Config As Code.

Jenkins and its plugins are written in Java, meaning any public API is accessible for modification. But because Jenkins is a decade-old product, its architecture was designed for times when applications were configured mainly through the UI, with every modification reflected in internal XML config for persistence. So when you Dockerise Jenkins, you normally create config through reverse engineering: add the change through the UI, then bake the generated XML into your Docker image. Even though it may seem easy, for the reasons given above it is not the best practice, and it requires many steps and downtime.

So let's see what steps we need to take to achieve our goal of fully automated, programmatic, no-downtime updates:

  1. Familiarise ourselves with the Java API of the plugin we are interested in.
  2. Write Groovy code to interact with the Java APIs of the ScriptApproval, Kubernetes and Credentials plugins.
  3. Create flexible common config that can be applied to all or specific instances with minimum duplication.
  4. Create a way to check out externalised source code and config and apply it during container instantiation.
  5. Apply source code or config changes to Jenkins when required, with a Jenkins job.

I will start with a very simple plugin called ScriptApproval, just so you see how the concept of configuring plugins works. Then we will look at the more advanced Kubernetes config, and finally, to see how to segregate config per Jenkins instance, we will look at how to configure the Credentials plugin.

Configuring ScriptApproval plugin

So given I have the config:

scriptApproval{
    approvedSignatures=[
    'method groovy.util.ConfigSlurper parse java.lang.String',
    'staticMethod java.lang.System getenv',
    'method org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval approveSignature java.lang.String',
    'staticMethod org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval get'
    ]
}

I can run the code in Script Console:

import org.jenkinsci.plugins.scriptsecurity.scripts.*

ScriptApproval script = ScriptApproval.get()

ConfigObject conf = new ConfigSlurper().parse(new File(System.getenv("JENKINS_HOME") + '/jenkins_config/scriptApproval.txt').text)

conf.scriptApproval.approvedSignatures.each{ approvedSignature ->
  println("checking for new signature ${approvedSignature}")

  def found = script.approvedSignatures.find { it == approvedSignature }

  if (!found){
    println("Approving signature ${approvedSignature}")
    script.approveSignature(approvedSignature)
  }

}

Running it, you should see output showing each new signature being checked and approved.

We use Groovy's ConfigSlurper, which reads the config file from the given path and returns a map we can navigate to retrieve what we need.
Then we make API calls to set up the ScriptApproval plugin; its API can be found here:
https://github.com/jenkinsci/script-security-plugin/blob/master/src/main/java/org/jenkinsci/plugins/scriptsecurity/scripts/ScriptApproval.java
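To see ConfigSlurper in isolation, here is a minimal, self-contained sketch you can paste into the Script Console (the keys are illustrative):

```
// Minimal ConfigSlurper demo: parse a config string and navigate the resulting map
def text = '''
scriptApproval {
    approvedSignatures = [
        'staticMethod java.lang.System getenv'
    ]
}
'''
ConfigObject conf = new ConfigSlurper().parse(text)

assert conf.scriptApproval.approvedSignatures.size() == 1
println conf.scriptApproval.approvedSignatures[0]
```

The same parse call accepts a file's text, which is exactly what the script above does with scriptApproval.txt.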

This way you can configure any plugin you want. Now let's move on to a more complex plugin configuration.

1. Familiarise ourselves with the Java API of the plugin we are interested in.

The first thing is to make sure we pick the right branch, as different versions may not be backward compatible.

So let's say I am running some instance of Jenkins and want to find the version of the plugin:

➜  ~ curl -s  -u kayan:kayan "http://localhost:8080/pluginManager/api/json?depth=1" |\
	 jq -r '.plugins[] | select (.shortName == "kubernetes") | .version'

0.11

Now we need to clone the code from GitHub to study its API and how it works:

git clone https://github.com/jenkinsci/kubernetes-plugin

As our currently installed version of the plugin is 0.11, we need to check out that very version so our interaction is in line with the installed plugin.
But first I need to find it:

git tag -l | grep 11

kubernetes-0.11

And then:

git checkout kubernetes-0.11

Note: checking out 'kubernetes-0.11'.

If I do some stats on the code:

➜  kubernetes-plugin git:(c8e3642) stats src java
calculating...
total lines: 5707

it may at first seem crazy to go through more than five thousand lines of Java code, but in fact all we need is to find how the main components of the plugin are created,
so we can mimic that in our Groovy script.

Now, if you look at the plugin config page in Jenkins,

you can see that all we actually need is the constructors and getters/setters of three components:
KubernetesCloud, PodTemplate and ContainerTemplate, plus some of their properties.

Once we check the API, we can set any object and its properties, then run our Groovy script and check that the result looks the same as the original config set up from the UI.

2. Write some Groovy code to interact with that Java API

So let's see what our kubernetes.groovy will look like:

import hudson.model.*
import jenkins.model.*
import org.csanchez.jenkins.plugins.kubernetes.*
import org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume
import org.csanchez.jenkins.plugins.kubernetes.volumes.HostPathVolume

//since kubernetes-1.0
//import org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar
import org.csanchez.jenkins.plugins.kubernetes.PodEnvVar

//change after testing
ConfigObject conf = new ConfigSlurper().parse(new File(System.getenv("JENKINS_HOME") + '/jenkins_config/kubernetes.txt').text)

def kc
try {
    println("Configuring k8s")


    if (Jenkins.instance.clouds) {
        kc = Jenkins.instance.clouds.get(0)
        println "cloud found: ${Jenkins.instance.clouds}"
    } else {
        kc = new KubernetesCloud(conf.kubernetes.name)
        Jenkins.instance.clouds.add(kc)
        println "cloud added: ${Jenkins.instance.clouds}"
    }

    kc.setContainerCapStr(conf.kubernetes.containerCapStr)
    kc.setServerUrl(conf.kubernetes.serverUrl)
    kc.setSkipTlsVerify(conf.kubernetes.skipTlsVerify)
    kc.setNamespace(conf.kubernetes.namespace)
    kc.setJenkinsUrl(conf.kubernetes.jenkinsUrl)
    kc.setCredentialsId(conf.kubernetes.credentialsId)
    kc.setRetentionTimeout(conf.kubernetes.retentionTimeout)
    //since kubernetes-1.0
//    kc.setConnectTimeout(conf.kubernetes.connectTimeout)
    kc.setReadTimeout(conf.kubernetes.readTimeout)
    //since kubernetes-1.0
//    kc.setMaxRequestsPerHostStr(conf.kubernetes.maxRequestsPerHostStr)

    println "set templates"
    kc.templates.clear()

    conf.kubernetes.podTemplates.each { podTemplateConfig ->

        def podTemplate = new PodTemplate()
        podTemplate.setLabel(podTemplateConfig.label)
        podTemplate.setName(podTemplateConfig.name)

        if (podTemplateConfig.inheritFrom) podTemplate.setInheritFrom(podTemplateConfig.inheritFrom)
        if (podTemplateConfig.slaveConnectTimeout) podTemplate.setSlaveConnectTimeout(podTemplateConfig.slaveConnectTimeout)
        if (podTemplateConfig.idleMinutes) podTemplate.setIdleMinutes(podTemplateConfig.idleMinutes)
        if (podTemplateConfig.nodeSelector) podTemplate.setNodeSelector(podTemplateConfig.nodeSelector)
        //
        //since kubernetes-1.0
//        if (podTemplateConfig.nodeUsageMode) podTemplate.setNodeUsageMode(podTemplateConfig.nodeUsageMode)
        if (podTemplateConfig.customWorkspaceVolumeEnabled) podTemplate.setCustomWorkspaceVolumeEnabled(podTemplateConfig.customWorkspaceVolumeEnabled)

        if (podTemplateConfig.workspaceVolume) {
            if (podTemplateConfig.workspaceVolume.type == 'EmptyDirWorkspaceVolume') {
                podTemplate.setWorkspaceVolume(new EmptyDirWorkspaceVolume(podTemplateConfig.workspaceVolume.memory))
            }
        }

        if (podTemplateConfig.volumes) {
            def volumes = []
            podTemplateConfig.volumes.each { volume ->
                if (volume.type == 'HostPathVolume') {
                    volumes << new HostPathVolume(volume.hostPath, volume.mountPath)
                }
            }
            podTemplate.setVolumes(volumes)
        }

        if (podTemplateConfig.keyValueEnvVar) {
            def envVars = []
            podTemplateConfig.keyValueEnvVar.each { keyValueEnvVar ->

                //since kubernetes-1.0
//                envVars << new KeyValueEnvVar(keyValueEnvVar.key, keyValueEnvVar.value)
                envVars << new PodEnvVar(keyValueEnvVar.key, keyValueEnvVar.value)
            }
            podTemplate.setEnvVars(envVars)
        }


        if (podTemplateConfig.containerTemplate) {
            println "containerTemplate: ${podTemplateConfig.containerTemplate}"

            ContainerTemplate ct = new ContainerTemplate(
                    podTemplateConfig.containerTemplate.name ?: conf.kubernetes.containerTemplateDefaults.name,
                    podTemplateConfig.containerTemplate.image)

            ct.setAlwaysPullImage(podTemplateConfig.containerTemplate.alwaysPullImage ?: conf.kubernetes.containerTemplateDefaults.alwaysPullImage)
            ct.setPrivileged(podTemplateConfig.containerTemplate.privileged ?: conf.kubernetes.containerTemplateDefaults.privileged)
            ct.setTtyEnabled(podTemplateConfig.containerTemplate.ttyEnabled ?: conf.kubernetes.containerTemplateDefaults.ttyEnabled)
            ct.setWorkingDir(podTemplateConfig.containerTemplate.workingDir ?: conf.kubernetes.containerTemplateDefaults.workingDir)
            ct.setArgs(podTemplateConfig.containerTemplate.args ?: conf.kubernetes.containerTemplateDefaults.args)
            ct.setResourceRequestCpu(podTemplateConfig.containerTemplate.resourceRequestCpu ?: conf.kubernetes.containerTemplateDefaults.resourceRequestCpu)
            ct.setResourceLimitCpu(podTemplateConfig.containerTemplate.resourceLimitCpu ?: conf.kubernetes.containerTemplateDefaults.resourceLimitCpu)
            ct.setResourceRequestMemory(podTemplateConfig.containerTemplate.resourceRequestMemory ?: conf.kubernetes.containerTemplateDefaults.resourceRequestMemory)
            ct.setResourceLimitMemory(podTemplateConfig.containerTemplate.resourceLimitMemory ?: conf.kubernetes.containerTemplateDefaults.resourceLimitMemory)
            ct.setCommand(podTemplateConfig.containerTemplate.command ?: conf.kubernetes.containerTemplateDefaults.command)
            podTemplate.setContainers([ct])
        }

        println "adding ${podTemplateConfig.name}"
        kc.templates << podTemplate

    }

    kc = null
    println("Configuring k8s completed")
}
finally {
    //if we don't null kc, jenkins will try to serialise k8s objects and that will fail, so we won't see actual error
    kc = null
}

And here is the Kubernetes config file for the script:


kubernetes {
    name = 'Kubernetes'
    serverUrl = 'https://kingslanding.westeros.co.uk'
    skipTlsVerify = true
    namespace = 'kingslanding'
    jenkinsUrl = 'http://kingslanding-dev-jenkins.kingslanding.svc.cluster.local'
    credentialsId = 'VALYRIAN_STEEL_SECRET'
    containerCapStr = '500'
    retentionTimeout = 5
    connectTimeout = 0
    readTimeout = 0
    podTemplatesDefaults {
        instanceCap = 2147483647
    }
    containerTemplateDefaults {
        name = 'jnlp'
        alwaysPullImage= false
        ttyEnabled= true
        privileged= true
        workingDir= '/var/jenkins_home'
        args= '${computer.jnlpmac} ${computer.name} -jar-cache /var/jenkins_home/jars'
        resourceRequestCpu = '1000m'
        resourceLimitCpu = '2000m'
        resourceRequestMemory = '1Gi'
        resourceLimitMemory = '2Gi'
        command = ''
    }
    podTemplates = [
        [
            name: 'PARENT',
            idleMinutes: 0,
            nodeSelector: 'role=jenkins',
            nodeUsageMode: 'NORMAL',
            customWorkspaceVolumeEnabled: false,
            workspaceVolume: [
                type: 'EmptyDirWorkspaceVolume',
                memory: false,
            ],
            volumes: [
                [
                    type: 'HostPathVolume',
                    mountPath: '/jenkins/.m2/repository',
                    hostPath: '/jenkins/m2'
                ]
            ],
            keyValueEnvVar: [
                [
                    key: 'VAULT_TOKEN_ARTIFACTORY',
                    value: '{{with $secret := secret "secret/jenkins/artifactory" }}{{ $secret.Data.value }}{{end}}'
                ],
                [   key: 'VAULT_ADDR',
                    value: '{{env "VAULT_ADDR"}}'
                ],
                [   key: 'CONSUL_ADDR',
                    value: '{{env "CONSUL_ADDR"}}'
                ]
            ],
            podImagePullSecret: 'my-secret'
        ],
        [
            name: 'Java',
            label: 'java_slave',
            inheritFrom: 'PARENT',
            containerTemplate : [
                image: 'registry.host.domain/jenkins-slave-java',
                alwaysPullImage: true,
                resourceRequestCpu: '1000m',
                resourceRequestMemory: '1Gi',
                resourceLimitCpu: '2000m',
                resourceLimitMemory: '2Gi'
            ]

        ],
        [
            name: 'Go',
            label: 'go_slave',
            inheritFrom: 'PARENT',
            containerTemplate : [
                image: 'registry.host.domain/jenkins-slave-go',
                alwaysPullImage: true,
                resourceRequestCpu: '1000m',
                resourceRequestMemory: '1Gi',
                resourceLimitCpu: '2000m',
                resourceLimitMemory: '8Gi'
            ],
            volumes: [
                [
                    type: 'HostPathVolume',
                    mountPath: '/var/go/go_stuff',
                    hostPath: '/go/go_stuff'
                ],
                [
                    type: 'HostPathVolume',
                    mountPath: '/var/go/some_other_go_stuff',
                    hostPath: '/go/some_other_go_stuff'
                ]
             ]

        ]

    ]
}

Kubernetes plugin analysis.
Now, if you already have the Kubernetes plugin set up, you can create your config by reverse engineering: go through the xml file and copy the properties into the config.

One thing worth mentioning: the script uses default values where possible, reducing the number of lines in the config.
For example, if you always use privileged containers, you can set that property once in 'conf.kubernetes.containerTemplateDefaults.privileged' and it will be used whenever the property is not explicitly configured.
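The fallback relies on Groovy's Elvis operator ?:, which returns the right-hand side when the left-hand side is null or otherwise "falsy" (an unset ConfigSlurper property evaluates to an empty, falsy ConfigObject). A small self-contained sketch with hypothetical values:

```
// Elvis-operator fallback: use the per-container value when set, otherwise the shared default
def containerTemplateDefaults = [privileged: true, workingDir: '/var/jenkins_home']
def containerTemplate = [workingDir: '/home/jenkins']   // 'privileged' not set explicitly

def privileged = containerTemplate.privileged ?: containerTemplateDefaults.privileged
def workingDir = containerTemplate.workingDir ?: containerTemplateDefaults.workingDir

assert privileged == true              // fell back to the default
assert workingDir == '/home/jenkins'   // explicit value wins
```

One caveat of this pattern: an explicit false or empty string also triggers the fallback, so defaults should be chosen with that in mind.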

You may notice I have commented out some lines in kubernetes.groovy, as the currently installed version of the plugin is not compatible with version 1.0, which in our example is the next version we will update to (the actual latest version is much higher; this is just for reference). For example, PodEnvVar in v0.11 was replaced by KeyValueEnvVar in version 1.0. But from the config's perspective they are the same, as the config refers to the abstracted 'keyValueEnvVar' property. When you switch to the new version your config doesn't need to change: all you do is change a couple of lines in your code and then test your config.

Another important point: if there is a bug in the actual plugin code, you can 'fix' it by changing some logic in your config Groovy, which saves you from downgrading or waiting until it is fixed (which happens a lot with Jenkins plugins, so I personally always stay a couple of versions behind).

For example, if you check the changelog, you can see there was a bug in version 0.10, 'Fix workingDir inheritance error #136', which was fixed in 0.11 in this PR.

But the point is, with our flexible config we could have fixed it ourselves by just applying the following code:

String workingDir = Strings.isNullOrEmpty(template.getWorkingDir()) ?
 (Strings.isNullOrEmpty(parent.getWorkingDir()) ? DEFAULT_WORKING_DIR : parent.getWorkingDir()) :
 template.getWorkingDir();

right inside our config, as we have full control over how the plugin is configured!

We can also always be ahead of actual enhancements. For example, the biggest enhancement in 0.10 was allowing a podTemplate to inherit from a base/parent podTemplate:
'Allow nesting of templates for inheritance. #94'

But with our config we could have done that even before this enhancement, as we have our own containerTemplateDefaults to refer to when config is common to multiple podTemplates.

3. Create flexible common config which could be applied for all or specific instance with minimum duplication.

I have currently implemented the Credentials plugin config to support only the simple UsernamePasswordCredentialsImpl type of credentials:

import hudson.model.*
import jenkins.model.*
import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.domains.Domain
import com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl

def domain = Domain.global()
def store = Jenkins.instance.getExtensionList('com.cloudbees.plugins.credentials.SystemCredentialsProvider')[0].getStore()

def instance = System.getenv("JENKINS_INSTANCE_NAME").replaceAll('-','_')

ConfigObject conf = new ConfigSlurper().parse(new File(System.getenv("JENKINS_HOME")+'/jenkins_config/credentials.txt').text)

conf.common_credentials.each { key, credentials ->
    println("Adding common credential ${key}")
    store.addCredentials(domain, new UsernamePasswordCredentialsImpl(CredentialsScope.GLOBAL, key, credentials.description, credentials.username, credentials.password))
}


conf."${instance}_credentials".each { key, credentials ->
    println("Adding ${instance} credential ${key}")
    store.addCredentials(domain, new UsernamePasswordCredentialsImpl(CredentialsScope.GLOBAL, key, credentials.description, credentials.username, credentials.password))
}

println("Successfully configured credentials")

The config looks like this:

common_credentials {
    jenkins_service_user = [
        username: 'jenkins_service_user',
        password: '{{with $secret := secret "secret/jenkins/jenkins_service_user" }}{{ $secret.Data.value }}{{end}}',
        description :'for automated jenkins jobs'
    ]

    slack = [
        username: '{{with $secret := secret "secret/slack/user" }}{{ $secret.Data.value }}{{end}}',
        password: '{{with $secret := secret "secret/slack/pass" }}{{ $secret.Data.value }}{{end}}',
        description: 'slack credentials'
    ]
}

kayan_jenkins_credentials  {
    artifactory = [
            username: 'arti',
            password: '{{with $secret := secret "secret/jenkins/artifactory" }}{{ $secret.Data.artifactory_password }}{{end}}',
            description: 'Artifactory credentials'
    ]
}

As you can see, common_credentials is used for all Jenkins instances, but 'kayan_jenkins', on top of the common ones, additionally gets kayan_jenkins_credentials,
which is achieved thanks to dynamically resolved properties in Groovy.

Let's say you have an environment variable in your Jenkins called JENKINS_INSTANCE_NAME.
Now if you do

def instance = System.getenv("JENKINS_INSTANCE_NAME").replaceAll('-','_')
conf."${instance}_credentials".each { key, credentials ->

it will only resolve the conf.kayan_jenkins_credentials property when running inside the 'kayan_jenkins' instance.
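Outside Jenkins, the mechanism can be demonstrated with a plain map (the instance name is hypothetical):

```
// Dynamic property resolution: the property name is built at runtime from the instance name
def conf = [
    common_credentials       : [slack: [username: 'bot']],
    kayan_jenkins_credentials: [artifactory: [username: 'arti']]
]

def instance = 'kayan-jenkins'.replaceAll('-', '_')

conf."${instance}_credentials".each { key, credentials ->
    println "would add credential ${key}"   // prints: would add credential artifactory
}
```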

Improved common config with exclusions and inclusions
This could be improved further, as described in this example config:

common_credentials {
    exclude{
        tyrion-jenkins
    }
    data{
        jenkins_service_user = [
             username: 'jenkins_service_user',
             password: '{{with $secret := secret "secret/jenkins/jenkins_service_user" }}{{ $secret.Data.value }}{{end}}',
             description :'for automated jenkins jobs'
         ]
         slack = [
             username: '{{with $secret := secret "secret/slack/user" }}{{ $secret.Data.value }}{{end}}',
             password: '{{with $secret := secret "secret/slack/pass" }}{{ $secret.Data.value }}{{end}}',
             description: 'slack credentials'
         ]
    }
}

custom_credentials  {
    include{
        john-snow-jenkins
        arya-jenkins
        sansa-jenkins
    }
    data{
        artifactory = [
                username: 'arti',
                password: '{{with $secret := secret "secret/jenkins/artifactory" }}{{ $secret.Data.artifactory_password }}{{end}}',
                description: 'Artifactory credentials'
        ]
    }
}

tyrion-jenkins_credentials  {
    data{
       nexus=[
                'username':'deployment',
                'password':'{{with $secret := secret "secret/jenkins/nexus" }}{{ $secret.Data.nexus_password }}{{end}}',
                'description':'Nexus credentials'
        ]

    }
}

We have common_credentials excluding any instances that are not interested in some config, and custom_credentials sharing the same config but including only the instances we specify. Note that this eliminates the duplication required in the current version of the config, where for every custom Jenkins instance you would need to copy-paste the same config.
Yet you can still have config specific to only one instance. This code was not implemented as of the time of writing this blog, but it should be fairly simple.
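As a hypothetical sketch of that not-yet-implemented logic, the filtering could look something like this (the section and instance names are illustrative):

```
// Hypothetical include/exclude filter: decide whether a credentials section
// applies to the current instance before adding anything to the store
def instance = 'arya-jenkins'

def appliesTo = { section, name ->
    if (section.include) return name in section.include
    if (section.exclude) return !(name in section.exclude)
    true   // no include/exclude section: applies everywhere
}

def config = [
    common_credentials: [exclude: ['tyrion-jenkins'], data: [slack: [:]]],
    custom_credentials: [include: ['john-snow-jenkins', 'arya-jenkins'], data: [artifactory: [:]]]
]

config.each { name, section ->
    if (appliesTo(section, instance)) {
        section.data.each { key, credentials ->
            println "adding ${key} for ${instance}"
            // store.addCredentials(domain, new UsernamePasswordCredentialsImpl(...))
        }
    }
}
```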

4. Create a way to checkout externalised source code and config and apply it during container instantiation

So how is our config actually applied to Jenkins? Jenkins loads Groovy scripts from init.groovy.d/, meaning we need our scripts to be there before Jenkins starts.
So somewhere in your entrypoint.sh you should have these lines:


#!/usr/bin/env bash

git clone ssh://git@your_scm_here/jenkins_config_as_code.git ${JENKINS_HOME}/jenkins_config
mv ${JENKINS_HOME}/jenkins_config/*.groovy ${JENKINS_HOME}/init.groovy.d/

consul-template \
  -consul-addr "$CONSUL_ADDR" \
  -vault-addr "$VAULT_ADDR" \
  -config "jenkins_config.hcl" \
  -once

We check out the code and config, then call consul-template to populate the secrets, as we can't keep them in the repo. When Jenkins starts, it will load every script, and each script will load the config and make the necessary API calls. The checkout obviously requires the .ssh/id_rsa file to exist before the call, so you need to ensure it is either
set up by another consul-template call preceding the checkout, or use Kubernetes secrets and mount a volume with that file.

5. Apply source code or config changes when required with a Jenkins job

Our final goal is to be able to update Jenkins at any time, on the fly, as we want no downtime.

node {
    stage('checkout') {

        sh '''

		    git clone ssh://git@your_scm_here/jenkins_config_as_code.git ${JENKINS_HOME}/jenkins_config
			mv ${JENKINS_HOME}/jenkins_config/*.groovy ${JENKINS_HOME}/init.groovy.d/

		'''
    }

    stage('run consul template'){
        sh '''
			consul-template \
			  -consul-addr "$CONSUL_ADDR" \
			  -vault-addr "$VAULT_ADDR" \
			  -config "jenkins_config.hcl" \
			  -once        
        '''
    }

    stage('update credentials') {
        load("/var/jenkins_home/init.groovy.d/credentials.groovy")
    }

    stage('update k8s') {
        load("/var/jenkins_home/init.groovy.d/kubernetes.groovy")
    }

}

Once again we check out the updated code/config, run consul-template, and in the final step call load so the Groovy code is executed. With this model in place, we could easily run additional test steps before applying config to Jenkins. For example, we could spin up a test Jenkins container right from the job, apply the config to it, run some tests to check that it is configured as expected, and only after that apply the changes to our actual Jenkins. But even if we don't, should something go wrong (for example in consul-template), the job will fail and we will see immediately what needs to be changed, as opposed to building a new image, deploying, finding out that it is actually broken, fixing the issue, rebuilding the image, deploying again... the vicious cycle!
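A pre-apply validation stage might look something like the sketch below (the image name, port and helper script are hypothetical):

```
// Hypothetical extra stage: apply the new config to a throwaway Jenkins container
// and smoke-test it before touching the live instance
stage('validate config') {
    sh '''
        docker run -d --name jenkins-test -p 8081:8080 my-registry/jenkins:latest
        # wait for the test instance to come up, apply the same init scripts,
        # then run smoke checks against it (helper script is illustrative)
        ./apply_and_verify.sh http://localhost:8081
        docker rm -f jenkins-test
    '''
}
```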

Now all we need to do is run this job every time the config has changed, or even trigger the job automatically when a PR is merged into the master branch of the config repo:

And Jenkins will get updated!

Hardening Security
Hardening Security
One thing worth mentioning about the last step: because this job exposes some internals of Jenkins, it is a bit risky from a security perspective, so I would recommend configuring it with project-based security:

That will ensure only admins can see and run this job.

I hope you enjoyed reading this and found something that you were looking for.

The git repo with all the examples is available on my GitHub.

Happy coding!

Inspired by:
Machine Head – Catharsis