Kubernetes and CI/CD — How to integrate in your development process


Target of this story

The main goal of this article is to give a rough overview of what Kubernetes is, how Kubernetes can help development and DevOps teams, what things to look out for and tips on how to deal with such a cluster.

It also gives an introduction on how to configure and set up Jenkins for Kubernetes with support for multiple build tools. Things like deploying applications are part of the concept, but not every detail (like writing Helm Charts) is covered here. Otherwise, reading the article would take hours, and it’s already too long anyway ;).

There are basically many different views and models on this, all of them making sense for their use case. Whether there is a real right or wrong can therefore not really be answered.

The story therefore represents only the experience and opinion of the author and does not claim to fit every scenario.

What the hell is Kubernetes (for beginners)?

All those who already know this can skip this section.

Basic overview

Kubernetes (K8S) is a platform for orchestrating multiple hardware nodes into one (or more) large cluster(s). This is nothing new or fancy, as IT has been doing this for years with virtualization or with clustering capabilities of applications.

So why should I use Kubernetes and what are the benefits?

The biggest difference with classic clusters and the applications managed on them is that Kubernetes is a platform and the nodes are managed by Kubernetes. This means that an OP’s team does not have to decide which application to launch on which node. Kubernetes deploys it on the node with enough free resources. If a node crashes, Kubernetes automatically redeploys the application to another node. It also “monitors” the applications and if one becomes unhealthy, it tries to restart it (self-healing).

This also means that deployments are much easier than classic deployments, especially if you have a set of services. You describe what should be deployed, with which configuration, and how many instances; Kubernetes decides on which nodes each application is started, takes care of scaling, and restarts anything that crashes.

Resources are also used much more efficiently than on classic servers or virtual machines. If there is free space (memory/CPU) on a node, Kubernetes deploys applications to that node. Unlike a classic server/VM infrastructure, you don’t permanently pay for machines with 8 CPUs and 32 GB of RAM whose application uses less than half of those resources, or isn’t even deployed 80% of the month.

These resources can be used by other services as spike resources or shut down with the ability to scale with the number of nodes until they are needed to save costs.

Applications and Container

Applications are started in containers, based on OCI images (built with e.g. Docker or Podman). Those containers are deployed inside “Pods”, the smallest deployable unit in Kubernetes.

File system

Kubernetes also supports shared filesystems for applications. To access the real storage, Kubernetes needs Storage Classes as plugins.

Access for a Pod can be requested with a “Persistent Volume Claim” (PVC). When a pod starts with a PVC, it claims the allocated volume via the Storage Class and can read/write data on it. These volumes have a lifecycle independent of the pod.
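A minimal PVC sketch could look like this (the storage class name is cluster-specific and assumed here):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-service-data        # assumed name
spec:
  accessModes:
    - ReadWriteOnce            # one node may mount the volume read/write
  storageClassName: standard   # must match a Storage Class available in the cluster
  resources:
    requests:
      storage: 10Gi
```

A pod then references the claim by name in its `volumes` section.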

Namespaces and Networking

Kubernetes also has a concept for distinguishing multiple environments on a cluster, called “Namespaces”. Namespaces offer a kind of isolation between environments, with their own access roles. It is, for example, possible to create one cluster and offer different deployment stages via Namespaces without the need to create a new cluster. This gives a lot of flexibility.

In summary, namespaces are a type of virtual isolation that contain applications and allow different stages or business domains to be supported on one cluster. Kubernetes manages the nodes, so there is no need to define on which nodes a namespace is available.

DNS / Named Service Discovery

From a networking perspective, Kubernetes spans all nodes with its own virtual network and provides an internal DNS resolver for easy service discovery. Each service in each namespace has its own DNS name inside the cluster with the schema
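In a standard cluster configuration, this is:

```
<service-name>.<namespace>.svc.cluster.local
```

For example, a service `my-service` in the namespace `payments-dev` is reachable as `my-service.payments-dev.svc.cluster.local` (`cluster.local` is the default cluster domain).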


This allows services to connect to each other and gives Kubernetes the freedom to deploy an application with any free IP address on any node. The controller knows which services are available on which IP and can delegate requests to the correct service.

Ingress, Load balancing and Routes

To expose a service “to the outer world”, an Ingress Controller can be created to manage the routing between the IT network and the Pods. Usually, an ingress controller is used together with a load balancer. The deployment of those components is done with a simple YAML file via a Kubernetes command (nothing to fear ;)).

To define the routes like “access my-service via /myservice and delegate to the service my-service”, an Ingress route must be defined. This happens also via a simple YAML file.
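Such a route definition might look like the following sketch (service name and port are assumptions; the exact schema depends on your Kubernetes version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-ingress
spec:
  rules:
    - http:
        paths:
          - path: /myservice        # external path
            pathType: Prefix
            backend:
              service:
                name: my-service    # internal service to delegate to
                port:
                  number: 8080
```

Applied with `kubectl apply -f`, the Ingress Controller picks up this route automatically.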

Rough Overview of previous topics

The following picture should help to get a rough idea of a cluster. It contains the control plane for cluster management and a number of worker nodes. Various namespaces, in which applications have been deployed, are spanned across the workers. The deployments are distributed over the nodes. A filesystem is connected via Storage Classes to provide persistent volumes.

schematic representation of a Kubernetes cluster

Setup hints for Kubernetes (common overview)

A new Kubernetes instance can be created very easily with MicroK8S, K3S or K3D, or simply by enabling Kubernetes in the Docker Desktop preferences.

But these tools are more for developers, as a kind of playground, or for small IoT environments. They are mostly deployed on one machine and are not made to deal with many nodes or to handle crashes of the required Kubernetes components themselves.

For real environments (test environments, production…) it is recommended to use a full Kubernetes deployment.

Kubernetes Management

For cluster management support, there are also a lot of tools available:

If you want to run Kubernetes in your own data center, there are a few things you should keep in mind.

The basic components of Kubernetes are explained at the Kubernetes components overview page.

Hints about sizing and scaling

To run Kubernetes as a stable platform, you generally need an odd number (greater than one, so at least three) of controller and etcd instances, and at least two (again, an odd number is recommended) worker nodes.

The reason for the odd number is that etcd needs a quorum, i.e. a majority of its members, to stay consistent: with three members, one can fail and the remaining two still form a majority, so the cluster keeps working safely without a split-brain. Ultimately, however, how you scale the cluster depends on factors such as the overall size of the cluster, the resilience you need to ensure, performance requirements, and so on.

It is also recommended to keep at least 40% of the resources free, as there may be spikes or short-notice requirements.

If you’re using a cloud-based Kubernetes cluster from Google, AWS, or Azure, you don’t have to worry so much about these things.

Additional links and resources

Here are some interesting links to tools and components for Kubernetes:

Welcome to CI/CD

In the world of CI/CD, the main goal is to push new code and have some systems to build, test, and deploy it without manual interaction.

To achieve this, everything from build to deployment/operate should be automated!

But CI/CD has some questions:

  • How can I deploy my feature or bugfix branches?

Let’s start with the issues in classic environments

In a classic environment (including VMs), you need to ask your OP’s team to set up additional environments. Your manager will ask about the costs, and it can take days before you can access everything. It’s also not easy to present to interested customers, because you have to tell colleagues that there must be no deployment between 3 and 4 o’clock and that nobody should run tests that could break the system.

And if you need to deliver a critical hotfix, or present the latest development features to someone, you have to disturb other team members: tell them not to merge their branches, ask someone from the OP’s team to reconfigure the environment, or simply block the environment for the next 1–2 days until the problem is fixed.

Another issue is that if you need changes on your Jenkins, somebody has to have the permissions for that. Somebody needs to update the instance, maintain the supported build tools (e.g. Gradle, Maven, NPM…) and so on.

I think a lot of people know these things.

So switching back to the good old days doesn’t seem to be an option either.

Kubernetes and CI/CD

With the help of Kubernetes, we can solve a lot of the problems mentioned above. Not all of them in every case, but many of them will be much easier.

Let’s pick some low hanging fruits:

  • With the concept of namespaces, it is possible to create an almost unlimited number of independent environments in seconds. No waiting for OP’s, no waiting for the one guy who can set something up, no unwanted interference from others during a presentation or dedicated test. The only limitation is the resources inside the cluster.
    And if developers have no access to create new namespaces, they can request a new one; this takes only seconds, and for the Kubernetes admins it does not matter how and which applications will be deployed inside.

Namespaces? Deployments with YAML files? CI/CD? What are you talking about? What kind of problem is this supposed to solve?

Ok, let’s bring it together…

Basic concept / idea

Well, we now know that we can create multiple environments with namespaces, developers can define deployments themselves, and Kubernetes can scale very well.

Let’s think about a concept of how to implement all these things to best benefit from Kubernetes.

In general you don’t want to allow developers to deploy applications directly on the cluster. This should be done by a tool like Jenkins automatically.

The reason for this is that developers tend to “quickly deploy something here to test something” and “quickly change a configuration there to fix something". That may be fine on their machines, but it’s very critical if you want reliable deployments.

A reliable deployment means that it is reproducible and that you don’t have to talk to the one person who knows what needs to be changed to make it work and who is currently on vacation for 4 weeks. It means that the deployment description works from the test stages up to production without manual interaction and manual deployments.

If it is forbidden to deploy something by hand, developers must define these things in the deployment descriptors. They must commit the required changes to a repository, and changes become traceable: no more “I didn’t change anything, but now it’s broken” excuses!

As a basis, let’s say we have 3 teams. Each team has to maintain different services, and for each team we need separate environments. Some services are used by other teams, and each team wants to have these things:

  • Building the application (Build-Stage)

Phew…a lot of environments, right? Nobody wants to maintain that on classic environments and not every stage is needed all the time.

But let’s break it down to the real requirements and put it on a timeline for a Sprint:

Timeline for required stages

For sure, sometimes some environments are used more often and depending on the team structure, this plan can look totally different.

But if you try to run all tests during feature/bugfix development and automate as much as possible so that your develop branch is always release ready, you don’t need a separate QA department to test your applications all the time.
This should be a team effort, with the help of some QA engineers to improve test quality and make sure everyone is thinking about edge cases.

Most QA engineers are also very good at developing features, and a mixed team leads to a better understanding of the whole system and stabilizes the application. It also helps the developers to understand testing better and to think twice about the implementation, and the QA engineers get a better understanding of the application and (if required/wanted) can improve their coding skills.

Ok…back to the topic. As you can see you don’t need all environments all the time. So we can “share” the resources between the environments instead of having them all together all the time. And if we don’t use all resources we can stop the nodes and save money while they are not available.

Splitting the stages into Namespaces

It is a very good approach to deploy one Jenkins instance for each team. This avoids configuration issues (“Please do not upgrade plugin XY, we can’t use the new one currently”) and it avoids build congestion when, for example, Team A’s applications build longer or more often than Team B’s applications.

Teams can work independently with their requirements for their applications for which they are responsible.

To get our stages ready, we can split them into namespaces. In general, I would suggest defining some naming conventions for namespaces.

For example you can “prefix” namespaces with the team name and then define the name of the stage. Let’s say we have a “payments” team, an “advisory” team and an “automotive” team.

The default namespaces can look like that (example payments):

  • payments-build
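Creating such a namespace is a one-liner (the name follows the convention above):

```shell
kubectl create namespace payments-build
```

The same command, run by an admin or an automated job, covers any additional stage namespace a team requests.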

Such a “namespace group” or “team group” may look like the following:

schematic overview

Now we have solved the general availability of multiple stages and are able to use them. We are also able to add a new namespace very quickly for special requirements (e.g. payments-ceo-presentation).

If the cluster is well configured and there are no notorious resource bottlenecks, we can scale environments by creating namespaces and scaling nodes according to the applications they need.

Setting up the build system with Jenkins

Well, after solving the issue with the environments, we need to fill the cluster with life.

First, the build system must be created. You can use the CI/CD tool of your choice for this, as long as it can be deployed as a container.

In this story, I’m going to use Jenkins because it is the most popular option and offers features such as Groovy support for extremely flexible pipeline definitions. It also allows you to define all the build tools you need in small containers that can be orchestrated per team requirements, rather than maintaining them all together on one large build system.
This approach also allows older builds to be reproduced with the build tools in use at the time.

Another point for me is that I can use my Kubernetes JCasC Management tool, which has already automated most of the steps and works strictly with JCasC (Jenkins Configuration as Code). This means that if my system crashes, I’m able to recreate all instances/namespaces/jobs in a very short time. If you are interested in more information, you can also read the story “Jenkins — Jenkins Configuration as Code (JCasC) together with JobDSL on Kubernetes”.

Finally, what we want to achieve with the build system is the following:

At the operate level, we want to have a system which runs inside our “<team>-build” namespace and which is completely defined as code in a version control system (VCS) like Git, to have “infrastructure as code” (Store level).
If we need to change something, we want to check out this repository, change the settings, push it back and start the deployment (Manage level).

The Jenkins instance should be predefined from the configuration of this repository and fetch its complete configuration from a versioned repository.

This approach makes the whole system reproducible and traceable. The started Jenkins container is always the default container without special configuration files here and copied plugins there.

Without these adjustments to the Jenkins container and due to the fact that the configuration is available as YAML files in a repository, any update of Jenkins is as simple as can be. And if you want to switch your Kubernetes cluster (e.g. from self-hosted to a cloud-hosted solution) the process is the same and it takes minutes (or maybe hours if you have a lot of namespaces) instead of days or months.

After the first Jenkins instance is ready to deploy and Kubernetes is prepared so that Jenkins is accessible through its Ingress routing, we can start thinking about the build tools we need for our applications.

Setting up the build tools for Jenkins on Kubernetes

I think everyone knows the situation when a build tool needs to be upgraded. One of the best examples is a NodeJS upgrade. Let’s say team A is using NodeJS 16, team B is using NodeJS 10, but they want to upgrade to NodeJS 14.

They also have some Java backend services and for that Team A needs Maven 3 and Team B needs Gradle 6, but Team B also wants to upgrade to Gradle 7.

How do you manage this zoo of tools? “Install them all, Jenkins offers quite a few for that”, may be your first thought. — Wrong!

Over time, it becomes increasingly difficult to manage all these tools, especially if you have additional historical systems that cannot be updated.

It also doesn’t help if you want reproducible build systems, because Jenkins UI support for older build tools is also limited.

Creating a base image for all build tools

It is recommended to create a base image for all build tools that already contains a Jenkins user and group. In addition, an entrypoint should be included to prevent the container from being stopped immediately:

Example Dockerfile of a base image for Jenkins Worker
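Such a base image might look like the following sketch (the base distribution and the UID/GID are assumptions; 1000 is the usual Jenkins default):

```dockerfile
# Minimal base image for Jenkins workers (illustrative sketch)
FROM ubuntu:22.04

# Jenkins expects a user/group with a known UID/GID
RUN groupadd -g 1000 jenkins && \
    useradd -m -u 1000 -g jenkins jenkins

USER jenkins

# Keep the container alive so Jenkins can exec build steps into it;
# without this, the container would stop immediately after start
ENTRYPOINT ["sleep", "infinity"]
```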

This image should be pushed to your container registry (and don’t forget to do that via a build pipeline ;)).

Let’s say we pushed the image with the name “jenkins-worker-base”.

If you are using Java, it also makes sense to have a second “jenkins-worker-base-java” image, which uses

FROM adoptopenjdk/openjdk11:ubi-minimal

instead of the plain base image. Those containers already have Java inside, which saves time ;).

It is also possible to parameterize the FROM instruction with an ARG and define the base image with build args from the pipeline.
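Docker supports an ARG before the first FROM, so the base image can be chosen at build time:

```dockerfile
# Base image selectable via: docker build --build-arg BASE_IMAGE=jenkins-worker-base-java .
ARG BASE_IMAGE=jenkins-worker-base
FROM ${BASE_IMAGE}
```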

Now we can create new images based on the “jenkins-worker-base” for each build tool and version (preferably with the build tool version tag).

So, let’s create a simple nodejs image, based on our new jenkins-worker-base:

Example Dockerfile for simple NodeJS image
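A simple version with a fixed NodeJS release could look like this sketch (it assumes a Debian/Ubuntu-based “jenkins-worker-base” and uses the NodeSource repository):

```dockerfile
FROM jenkins-worker-base

USER root
# Install a fixed NodeJS major version from the NodeSource repository
RUN apt-get update && \
    apt-get install -y curl && \
    curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && \
    apt-get install -y nodejs && \
    rm -rf /var/lib/apt/lists/*

USER jenkins
```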

If special versions are required, the Dockerfile can look like that:

Example Dockerfile for extended flexible NodeJS image
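A flexible variant takes the version as a build argument instead (sketch; assumes curl is available in the base image):

```dockerfile
FROM jenkins-worker-base

# Passed from the pipeline, e.g. --build-arg NODE_JS_VERSION=16.14.0
ARG NODE_JS_VERSION

USER root
# Download the official NodeJS binary distribution for the requested version
RUN curl -fsSL "https://nodejs.org/dist/v${NODE_JS_VERSION}/node-v${NODE_JS_VERSION}-linux-x64.tar.gz" \
      | tar -xz -C /usr/local --strip-components=1

USER jenkins
```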

This Dockerfile allows the pipeline to define the node version with the build argument “NODE_JS_VERSION”. The image can be tagged with the same version and as a result you have reliable images in your registry.

With the same procedure we can create an image for Gradle:

Example Dockerfile for extended, flexible Gradle worker
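Analogous to NodeJS, a flexible Gradle worker might look like this (sketch; assumes the Java base image and curl/unzip being available):

```dockerfile
FROM jenkins-worker-base-java

# Passed from the pipeline, e.g. --build-arg GRADLE_VERSION=7.4
ARG GRADLE_VERSION

USER root
# Official Gradle distributions are published on services.gradle.org
RUN curl -fsSL -o /tmp/gradle.zip \
      "https://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip" && \
    unzip -q /tmp/gradle.zip -d /opt && \
    ln -s /opt/gradle-${GRADLE_VERSION}/bin/gradle /usr/local/bin/gradle && \
    rm /tmp/gradle.zip

USER jenkins
```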

Now we have completed all the infrastructure setup around Jenkins. We have namespaces for our build system and deployment stages, we have prepared Jenkins images to support multiple build tools, and we are almost ready to deploy Jenkins.

Prepare the final configuration and deployment of Jenkins

Now it is time to create a Jenkins Configuration as Code YAML file for Jenkins. With the help of JCasC it is possible to configure Jenkins with a single file instead of the old config.xml and other configuration files.

Another advantage of JCasC is that it offers the possibility to reload the configuration on the fly. This means you can tinker a bit with the Jenkins configuration, and if nothing works, press “Update configuration” and everything is as before.

It also allows deploying Jenkins on a new, plain cluster or namespace with everything configured and ready to use.

As mentioned earlier, I refer to my K8S JCasC Management tool in this story because it allows skipping a lot of manual steps and a lot of searching on the Internet on how to do x and how to configure y. You are free to read the documentation and compare the templates etc. to do it another way, but that would go beyond the scope of this story and distract from the essentials.

The K8S JCasC Management tool comes with pre-configured Jenkins configurations and is prepared to fetch the configuration from a remote file. This also makes it possible to update the configuration in an external Git repository, press the “Update configuration” button in Jenkins, and everything is reconfigured.

If Jenkins is completely broken, just reinstall it and the job is done. The same is true for a Jenkins update.

For such configurations it is necessary to put every configuration change into this configuration repository. Every manual configuration is lost after Jenkins is re-installed or after the “Update configuration” button is pressed. As a small gift, you get a versioned configuration of Jenkins, and every change is traceable and reproducible. Others may call it a “backup” ;).

An example of such a repository can be found at https://github.com/Ragin-LundF/k8s-jcasc-mgmt-example. Under “projects/example-project/jcasc_config.yaml” you can find a JcasC configuration.

The interesting part is the “jenkins.clouds.kubernetes” section in this file.

This array defines a configuration for a cluster and supports some templates.
Jenkins allows to define here which containers it should start when a build is started.
It is also possible to inherit from other containers, which is used here with the

"inheritFrom": "pipeline-base-container"

part. Jenkins requires a JNLP container for each worker; it communicates with the controller and is able to delegate between the other containers. We also always want to create Docker images in our build pipeline, so both are added to the “pipeline-base-container” template and all others inherit from it, without redefining them again and again in the container section.

The following example tells Jenkins that when a build is started, all containers defined in a templates.container section must be started. Each Jenkins instance can have its own set of templates, depending on the requirements of the project.

This is also where it becomes important for each team to have their own Jenkins instance, so as not to run too many build tools in parallel. Unused ones should also be removed from the container list.

An extract of a JCasC configuration for Kubernetes templates
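To give an impression, a stripped-down version of such an extract could look like this (image names and the exact template set are assumptions; see the linked example repository for the real file):

```yaml
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        namespace: "payments-build"
        templates:
          - name: "pipeline-base-container"
            label: "pipeline-base-container"
            containers:
              - name: "jnlp"
                image: "jenkins/inbound-agent:latest"
              - name: "docker"
                image: "docker:latest"
                command: "cat"
                ttyEnabled: true
          - name: "gradle_java"
            label: "gradle_java"
            inheritFrom: "pipeline-base-container"
            containers:
              - name: "gradle"
                image: "registry.example.com/jenkins-worker-gradle:7.4"
                command: "cat"
                ttyEnabled: true
              - name: "nodejs"
                image: "registry.example.com/jenkins-worker-nodejs:16.14.0"
                command: "cat"
                ttyEnabled: true
```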

Generally, idle containers do not consume much CPU power, but in our example they have defined resource requests that are allocated by Kubernetes while the containers are running, to ensure that each container can fully utilize the resources it has requested.

In addition to this compromise, it has the advantage that it is possible to use different, independently maintained build tools in different versions. You can add several versions of one tool to a build pipeline without any conflict, by adding them to the container section under the templates with a new unique name.

The example above configures a kubernetes template for a Jenkins worker agent, which contains the following containers:

  • JNLP (required by Jenkins)

And a second agent definition, which contains:

  • JNLP (required by Jenkins)

Depending on the application we want to build (NodeJS only or Gradle/NodeJS configuration) we can choose the agent with our Jenkinsfile:

Example pipeline (not working) with multiple containers involved
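A sketch of such a pipeline could look like this (shortened and not a working build; stage names and shell commands are assumptions):

```groovy
pipeline {
    // Select the Kubernetes template "gradle_java" via its label
    agent { label 'gradle_java' }

    stages {
        stage('Build UI') {
            steps {
                // Switch into the NodeJS container of the worker pod
                container('nodejs') {
                    sh 'npm ci && npm run build'
                }
            }
        }
        stage('Build Backend') {
            steps {
                container('gradle') {
                    sh 'gradle build'
                }
            }
        }
        stage('Build Image') {
            steps {
                container('docker') {
                    sh 'docker build -t my-service:latest .'
                }
            }
        }
    }
}
```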

As you can see, the “agent” selects the Kubernetes template “gradle_java” from the Jenkins configuration.

Inside the definition “stages” -> “stage” -> “steps” you will see the definition “container(name: ‘xy’)”. When Jenkins runs the pipeline, it switches between the containers deployed for this worker agent here.

All the containers share a volume under the hood, which means that every change inside container A is directly visible on the filesystem of container B, and so on.

This makes it possible, for example, to build a web application in a NodeJS container, copy the data to a static resource directory in the Java service, build the Java application inside the Gradle container, and finally create a container image that contains the UI and backend inside the Docker container.

After the configuration of Jenkins is finished (and pushed to a repository) and Jenkins is deployed (and working) on Kubernetes, builds should work.

We now have a flexible CI pipeline for each team. We are able to maintain build tools independently of Jenkins via images, and we can assign them to Jenkins via JCasC depending on the team requirements.

Allow Jenkins to deploy into other Namespaces

The final step for our configuration is to add the necessary part to start the CI process and deploy to the namespaces. As a result we have a CI/CD pipeline on Kubernetes.

Kubernetes has the concept of Role Based Access Control (RBAC). You should always try to manage the RBAC namespace by namespace and try to avoid global roles as much as possible.

If you want to use Jenkins in a namespace, most roles are bound to that namespace and other namespaces are not usable by Jenkins.

We need to create some roles that enable Jenkins to deploy applications in the defined namespaces documented above.
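At its core, this boils down to a Role and a RoleBinding per target namespace. A simplified sketch (names and the resource list are assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-deploy
  namespace: payments-dev          # target namespace Jenkins may deploy into
rules:
  - apiGroups: ["", "apps", "networking.k8s.io"]
    resources: ["pods", "services", "configmaps", "secrets", "deployments", "ingresses"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-deploy
  namespace: payments-dev
subjects:
  - kind: ServiceAccount
    name: jenkins                  # the Jenkins service account...
    namespace: payments-build      # ...living in the build namespace
roleRef:
  kind: Role
  name: jenkins-deploy
  apiGroup: rbac.authorization.k8s.io
```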

The K8S JCasC Management tool has some additional definitions for this. The descriptors can be found under “charts/jenkins-controller/templates” within the files with the prefix “k8s-mgmt-jenkins-agent-deploy-*.yaml”.

But we don’t want to dive too deep here. The tool is prepared to support additional namespaces, which can be configured in the jenkins_helm_values.yaml file.

If we configure the following within this file, the previously mentioned templates will prepare the roles on the cluster:

Jenkins_helm_values.yaml definition for additional namespace roles

Now Jenkins is fully prepared to build in its own namespace and deploy to other namespaces.
Congratulations, the CI/CD goal has been achieved from a Jenkins perspective.
Stages are prepared and waiting for upcoming deployments of Jenkins.

Deployments with Kubernetes

In the previous steps, we have already deployed Jenkins with so-called “Charts” and maintained some configuration values in values.yaml files for these Charts. But what are Charts and how do they work together with the values.yaml files?


The secret behind these Charts is Helm. Helm is an alternative, template-based deployment tool for Kubernetes. It supports its own Chart registries (also offered by Artifactory, for example) to centrally manage these Charts, similar to Maven repositories, and to make them available to interested users, OP’s teams and customers.

If you want to use Helm, it is strongly recommended to use Helm ≥ v3, as Helm v2 requires massive permissions on Kubernetes due to the use of Tiller.

For OpenShift users, it is necessary to upgrade to OpenShift v4 to use Helm v3.

There are also other tools like Kustomize that offer similar functionality, but we want to concentrate on Helm in this story.

Helm allows us to define some kind of Kubernetes YAML deployment descriptor as a template and externalize the configuration into a values.yaml file. Helm merges the two and generates the Kubernetes YAML file, which is then deployed. There are more features, but this is what we want for now.
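For example, a deployment template may reference `{{ .Values.image.repository }}:{{ .Values.image.tag }}`, while the defaults live in the chart’s values.yaml (names and registry are assumptions):

```yaml
# values.yaml (defaults, can be overwritten per stage)
image:
  repository: registry.example.com/my-service   # assumed registry/name
  tag: "1.2.0"
replicaCount: 2
```

Helm merges these values into the template and applies the resulting Kubernetes YAML.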

For each service we need an image in a container repository and ideally a Charts repository to make the Charts available to others.

Documentation on how to write Helm Charts can be found on their website. There is also a command available to create the boilerplate definition for a service.
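The boilerplate can be generated with:

```shell
helm create my-service
```

This creates a chart skeleton with templates/ and a default values.yaml to start from.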

If the Charts are written and the necessary configuration is defined in a standard values.yaml file, users can overwrite these values.yaml files completely or only in parts. For example, Jenkins has many options to configure in its values.yaml file. Most of the default values are OK, and for our deployment we only need to overwrite some values, as you can see in the k8s-jcasc-mgmt examples repository.

It also allows defining the default settings once and changing only stage-specific settings in additional values.yaml files.
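Helm merges value files from left to right, so a stage-specific file only needs the overrides (file names are assumptions):

```shell
# chart defaults are applied automatically; later -f files win
helm upgrade --install my-service ./charts/my-service \
  -f config/values-base.yaml \
  -f config/values-dev.yaml
```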

This means that the deployment definition is available via the Charts, a base configuration is available via the default values.yaml and stage specific values can be overwritten by OP’s and/or customers in additional files. Everything is merged together and the result is a predefined deployment with flexible configuration and an overview of the settings only for each stage.

Helm charts should be part of the application and maintained by the developers. For modern microservice (oriented) services it makes the most sense because OP’s teams and customers in most cases have no knowledge about how an application should be deployed. The developers should know best. Microservices have also shifted a lot of complexity from developers to OP’s, which often leads to problems. When developers maintain these Charts, the complexity comes back a bit to the developers, or better: to the people who created and are responsible for the services.

It reduces clarifications with OP’s and/or customers and ensures that applications are deployed as the developers intended.

There is also the question of versioning Helm Charts in the Helm Chart Repository. Theoretically, you can create the Helm Charts once and publish a new version only if something has been changed in the charts and make the real application version configurable via values.yaml.

If they are part of the application repository, you can also push new Helm Charts for every application version.

My personal preference is to push the Charts with every new version of the application. The simple reason is that modern services bring enough complexity with their own version for each service. The requirement to have one overarching version for the entire product can add an extra layer. If the charts now have yet another version, you end up in a cascade of version definitions when a customer has problems:

  • “I have deployed version 1.2.0 of your product”


  • “If you used version 1.2.0, did you use Helm Chart version 1.0.1 or 1.0.0? And was the service you used version 1.3.0 as mentioned in the documentation or did you re-install the old application version 1.2.0?”

As you can see, nothing you want to deal with. If the Chart version is synchronized with the application version, you can reduce one error layer and one less version to talk about.

Helm helps to define how the deployment of a service works and gives the ability to store this deployment definition as Charts in a repository and make it available to others.

Good job, all done, sit back and relax? No! We want to define deployments for a complete application landscape, which means we need to use not just one Helm Chart, but many of them.


That’s where Helmfile enters the game.

Helmfile is designed to add a new layer to Helm Charts and to orchestrate and deploy multiple Helm Charts with one command. It also adds another layer for configuration to be able to override the values in the Helm Chart values.yaml files depending on the environment.
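A minimal helmfile.yaml sketch (release names, chart repository and versions are assumptions):

```yaml
environments:
  dev: {}
  qa: {}

releases:
  - name: payment-service
    namespace: payments-dev
    chart: team-charts/payment-service
    version: 1.2.0
    values:
      - values/payment-service-dev.yaml   # stage-specific overrides
  - name: payment-backoffice
    namespace: payments-dev
    chart: team-charts/payment-backoffice
    version: 2.0.1
```

A single `helmfile -e dev apply` then deploys (or updates) all releases for that environment.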

With the full toolset, we are able to define:

  • Helm Charts for each application to describe the deployment as code.

To manage Helmfiles, they should have their own repository, because they are responsible for deploying all applications of a team, not just one specific application like a Helm Chart.

All internal stages should be preconfigured in this repository. If required, for example, the Jenkins deployment job can set the missing secrets for database access with environment variables.

Only the production environment should have its own repository, which can be synced with the dev repository via Pull Requests or as an external upstream.

Helmfiles should also be managed by developers, for the same reason mentioned in the Helm section.

Here you can find a very good video on how to work with Helmfile:

Bring everything together and finish this odyssey

Back to the original concept with namespaces, deployments on them, and (if desired) the ability to free up resources that are not in use.

Let’s say we have separated the E2E test stage from the developer stage so as not to disturb developers, and we want to deploy the application to the development stage only if all tests are successful.

A Jenkins pipeline can now be defined as follows:

CI/CD with separate E2E test stage with stable dev-stage

The E2E test stage is only used for the E2E tests in this process. This deployment contains all dependent modules in order to have reliable data. The Helmfile for this must be configured to use the current version of the module under test and only stable versions of the other modules. This can be achieved with additional Helmfiles within the modules (similar to docker-compose files that let devs deploy everything locally) or with an additional branch/repository for E2E tests.
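The version pinning described above could look like this in the E2E Helmfile (service names and version numbers are illustrative):

```yaml
# helmfile.e2e.yaml — the module under test uses the freshly built
# version, all dependent modules are pinned to known stable releases.
releases:
  - name: my-service            # the module this pipeline builds
    chart: myrepo/my-service
    version: {{ requiredEnv "BUILD_VERSION" }}
  - name: billing-service       # dependency: stable version only
    chart: myrepo/billing-service
    version: 1.4.2
  - name: user-service          # dependency: stable version only
    chart: myrepo/user-service
    version: 2.0.1
```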

After the tests are completed, the stage is uninstalled and does not require any further resources, which means the Kubernetes nodes can be scaled down again once the tests have passed.
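A sketch of such a pipeline as a declarative Jenkinsfile (stage names, scripts and Helmfile paths are assumptions, not taken from the example repositories):

```groovy
// Jenkinsfile sketch: build, run E2E tests on a throwaway stage,
// tear the stage down again, then deploy to dev only on success.
pipeline {
    agent any
    stages {
        stage('Build & Push') {
            steps { sh './build-and-push.sh' } // build image + chart
        }
        stage('Deploy E2E stage') {
            steps { sh 'helmfile -f helmfile.e2e.yaml -e e2e apply' }
        }
        stage('Run E2E tests') {
            steps { sh './run-e2e-tests.sh' }
        }
        stage('Deploy to dev') {
            steps { sh 'helmfile -e dev apply' }
        }
    }
    post {
        // uninstall the E2E stage whether the tests passed or not,
        // so the nodes can be scaled down again
        always { sh 'helmfile -f helmfile.e2e.yaml -e e2e destroy' }
    }
}
```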

If you don’t need a permanently deployed dev stage as defined in the environment timeline plan above, the process can look like this:

CI/CD with minimum resources without permanent available stage

The other stages can be deployed with their own deploy and uninstall jobs in Jenkins, so that they are only available while they are being used.

To define the Helmfile configuration for all stages, the repository of the Helmfiles can have branches for each environment, with an initial “develop” branch in which the Helmfiles are updated.

To bring the changes to a stage, a Pull Request from develop to the “dev-stage” branch is required, for example. Once it has been reviewed and approved, the changes can be merged to “dev-stage” (and may trigger a deployment).
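Under the hood, this promotion flow boils down to ordinary git operations; sketched here in a throwaway repository (branch names from the article, commit messages made up):

```shell
set -e
# Throwaway repository standing in for the Helmfile repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b develop
git -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "initial Helmfiles"
git branch dev-stage                     # one branch per stage
# A change lands on develop (e.g. a bumped chart version) ...
git -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "bump backend chart to 1.3.1"
# ... and is promoted to the stage branch once the PR is approved.
git checkout -q dev-stage
git merge -q --ff-only develop           # dev-stage now has the bump
```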

If we now create branches for all stages and merge changes to all of them, it is possible, for example, to change version numbers in the QA branch to be able to test special versions independently of the other branches.

You can also use multiple repositories for each stage and merge changes via upstream or pull requests. For example, if developers have added a new configuration value, they can create a PR to the other branches, making the change visible to the other stages.

Links to an example

To see how it can work, have a look at these repositories, which provide a nearly complete working example for everyone. The only thing that does not work for everybody out of the box is the docker push command in the Jenkins pipeline, for the simple reason that I use my own repositories there and don’t want to commit my tokens ;).

For lack of an Artifactory server or something similar, I decided to put as much as possible into GitHub (such as the Helm repository) to make it transparent how everything looks in the end.

Last words

If you’ve read the whole story, I want to thank you for your patience and hope some things were helpful.

It may be impossible for you to build a similar environment due to business organization, other requirements, or some missing pieces.

However, it may already help to think about ways to use Kubernetes, about how configuration-as-code and infrastructure-as-code can help you secure the systems you need, and about how a very high degree of automation together with namespaces can open up possibilities that would be unthinkable in classic environments.

Alternative CI/CD tools for Kubernetes

This section is a short add-on with some links to other Kubernetes native CI/CD tools.

Depending on your requirements, it may make more sense to look at such specialized tools. They all differ in some respects and work more like Bitbucket Pipelines or GitHub Actions.

The following tools are not cloud tools and can be used with your own infrastructure:
