By now you probably know I love anything Kubernetes and so does our company Track.health. As part of our continuous improvements to our overall CI/CD process, we strive to make it more streamlined and self-correcting.

One missing piece of the puzzle was the ability to validate Kubernetes files. YAML is tricky (at least it is to me), especially when you want to make any sort of update to it. Add the Kubernetes schema on top and it gets even trickier, with a plethora of valid elements and configuration options. We had issues with certain deployments failing only at deployment time because one of us developers had edited a file and got the indentation wrong somewhere.

It was quite frustrating to have it fail at the end of the deployment pipeline, and this is when we started exploring options for validating Kubernetes files so that we could catch failures proactively, well ahead of deployment time.

Option 1: Using the --dry-run flag

kubectl has a --dry-run flag that can be passed when you apply resources. This does everything except actually persist the resource on the server. Therefore, if there is an error in our Kubernetes definition files, it spits out an error which we can then handle.
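For reference, on recent kubectl versions the flag takes a value (`client` or `server`); the manifest path below is just a placeholder:

```shell
# Server-side dry run: the manifest is sent to the API server and fully
# validated and admitted, but the resource is never persisted.
kubectl apply --dry-run=server -f deployment.yaml

# Client-side dry run: the manifest is validated locally and only printed,
# without the server-side admission checks.
kubectl apply --dry-run=client -f deployment.yaml
```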

This option works well, but the one downside we saw is that it always needs a connection to the cluster in order to work. We therefore continued our search, which led us to the next option, and that is what we went with in the end.

Option 2: Use kubeval

Kubeval worked well for our needs because it does not need a connection to the cluster. It validates your Kubernetes files against schemas generated from the Kubernetes OpenAPI specification.
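Invocation is a single binary call; the file names and the version pin below are placeholders for your own setup:

```shell
# Validate one or more manifests against the Kubernetes schemas.
kubeval deployment.yaml service.yaml

# Pin the schema version to what the cluster actually runs, and treat
# properties the schema doesn't know about as errors.
kubeval --kubernetes-version 1.18.0 --strict deployment.yaml
```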

We use Helm to parameterize our Kubernetes files, and the fact that kubeval worked seamlessly with Helm was a win for us.
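The combination boils down to rendering the chart locally and piping the output into kubeval. The chart path and values file here are placeholders, not our actual layout:

```shell
# Render the chart's templates locally (no cluster needed) and validate
# the resulting manifests. --ignore-missing-schemas is useful if the chart
# emits CRDs that the upstream schema repository doesn't cover.
helm template charts/myapp -f values-prod.yaml | kubeval --strict --ignore-missing-schemas
```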

As part of integrating kubeval into our CI/CD pipeline, we did have to handle certain error scenarios that were not actually issues with our Kubernetes files: either the connection was reset or a connection was refused. We made sure our script caught and handled these scenarios gracefully rather than flagging them as actual failures.
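A minimal sketch of such a wrapper (the function name and error messages are illustrative, not kubeval's exact output): hard-fail on schema violations, but let transient network errors through without failing the build.

```shell
#!/usr/bin/env bash
# Run a validation command, treating transient network errors as soft
# failures so they don't break the pipeline.
validate_manifests() {
  local output status
  output=$("$@" 2>&1)
  status=$?
  if [ $status -eq 0 ]; then
    echo "validation passed"
    return 0
  fi
  # Connection resets/refusals are not problems with our manifests.
  if echo "$output" | grep -qiE 'connection (reset|refused)'; then
    echo "transient network error, not failing the build: $output"
    return 0
  fi
  echo "validation failed: $output"
  return 1
}

# Usage (kubeval must be on PATH):
#   validate_manifests kubeval --strict deployments/*.yaml
```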

A few caveats with kubeval

Since kubeval only validates against the schema, certain failures will not be caught and will only be discovered when the resource is created on the cluster. For example, we had some issues with our network policies, which had an incorrect structure that kubeval did not catch because, as far as the schema was concerned, it was still valid.

This, IMO, is fine. Even when you develop an application, you make sure you do both client-side and server-side validation. This is no different: kubeval covers you in terms of the client-side validation, but the server will still do its own, and you need to make sure both sides of the coin are taken care of in your pipeline.

Ending thoughts

Our CI/CD pipeline triggers kubeval validation on each commit to the repository that holds all our Kubernetes files, giving us proactive monitoring of our Kubernetes resources well before we deploy them to the cluster.

It has worked well for us so far. 

I would like to hear your thoughts and approaches on how you went about handling such scenarios in your own workplace.

Follow me on:

twitter: https://twitter.com/dinukadev

blog: https://dimashup.com/

GitHub: https://github.com/dinukadev

