Creating a Kubernetes Controller to Get GHS Token for a GitHub Application
In my case, I’ve created a special Kubernetes controller that works with a GitHub App to exchange the App JWT for an installation-specific, short-lived GHS token. The controller refreshes the token before it expires and updates some third-party integrations, such as ArgoCD OCI repository credentials and Dockerconfig JSON secrets. Unfortunately, this controller is currently useless due to GitHub limitations: only personal access tokens (classic) can access private registries (see the GitHub discussion for more details).
TL;DR
References to create your own Kubernetes controller:
- Sample controller GitHub repository: sample-controller
- Generators for kube-like API types GitHub repository: code-generator
My implementation:
- GitHub App jwt2token Kubernetes controller: github-app-jwt2token-controller
- Helm chart source code: github-app-jwt2token-controller
- Helm chart public repository: helm
GitHub discussion:
- Using GitHub Apps to access a private registry: here
Basic Understanding of Kubernetes Controllers
Kubernetes controllers are control loops that monitor the state of your cluster resources and make necessary adjustments. Kubernetes provides several built-in controllers to manage various resources such as pods, deployments, services, and replica sets. By design, controllers continuously reconcile the current state of the cluster resources with the desired state defined in the Kubernetes resource manifests.
To understand what is under the hood of Kubernetes controllers, we can refer to the official controller example, sample-controller. This repository provides a comprehensive diagram of the underlying components of a typical Kubernetes controller.
As the diagram shows, the client-go library covers a lot of the interaction and many of the components by itself.
From the custom Kubernetes controller implementation perspective, we can identify two main tasks in the controller workflow:
- Use informers to watch add/update/delete events for the Kubernetes resources we want to know about.
- Consume items from the workqueue and process them.
client-go provides informers for standard resources; for deployments, for example, there is k8s.io/client-go/informers/apps/v1/DeploymentInformer. These informers contain everything at the top of the diagram: they provide the reflector, the indexer, and the local storage.
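To make these two tasks concrete, here is a minimal sketch of the pattern using client-go. This is generic boilerplate rather than the controller from this post: a deployment informer pushes keys into a rate-limited workqueue, and a loop consumes them.

```go
package main

import (
	"os"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Build a clientset from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The shared informer factory provides the reflector, indexer,
	// and local cache shown at the top of the diagram.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	informer := factory.Apps().V1().Deployments().Informer()

	// Task 1: watch add/update events and enqueue "namespace/name" keys.
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
				queue.Add(key)
			}
		},
		UpdateFunc: func(_, newObj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(newObj); err == nil {
				queue.Add(key)
			}
		},
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, informer.HasSynced)

	// Task 2: consume items from the workqueue and process them.
	for {
		key, shutdown := queue.Get()
		if shutdown {
			return
		}
		// Real reconciliation logic for the object named by key goes here.
		queue.Done(key)
	}
}
```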
But we are not here just to watch standard resources; we need to define and process our own.
How I Created a Kubernetes Controller
Here is a high-level plan to achieve my goals:
- Create Custom Resource Definition (CRD) for ArgoCD repositories.
- Create CRD for Dockerconfig JSON.
- Create CRD for GHS tokens.
- Implement logic to manage ArgoCD repositories.
- Implement logic to manage Docker configurations.
- Implement logic to remove expired GHS tokens.
Preparation
Before we begin, we need to get code-generator. It automatically generates a large amount of code that we would otherwise have to write by hand. Then we need to update the hack/update-codegen.sh script.
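A minimal sketch of such a script, following the older sample-controller convention (newer code-generator releases use kube_codegen.sh instead); the module path github.com/example/github-app-jwt2token-controller is a placeholder:

```bash
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail

SCRIPT_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
# Locate the code-generator module in the Go module cache.
CODEGEN_PKG=${CODEGEN_PKG:-$(go list -m -f '{{.Dir}}' k8s.io/code-generator)}

# Generate deepcopy functions, clientset, informers, and listers
# for the githubappjwt2token/v1 API group.
bash "${CODEGEN_PKG}/generate-groups.sh" "deepcopy,client,informer,lister" \
  github.com/example/github-app-jwt2token-controller/pkg/generated \
  github.com/example/github-app-jwt2token-controller/pkg/apis \
  githubappjwt2token:v1 \
  --go-header-file "${SCRIPT_ROOT}/hack/boilerplate.go.txt"
```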
Custom Resource Definition
Here, we need to create YAML definitions for all our custom resources:
CRD definition for the resource responsible for updating the password field of an OCI repository with the GHS token generated by the controller.
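A sketch of what this CRD might look like; the group and kind come from the post, while the schema fields are my assumptions (see the repository for the authoritative definition):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: argocdrepos.githubapp.technicaldomain.xyz
spec:
  group: githubapp.technicaldomain.xyz
  scope: Namespaced # namespaced so that access can be restricted later
  names:
    kind: ArgoCDRepo
    plural: argocdrepos
    singular: argocdrepo
  versions:
    - name: v1
      served: true
      storage: true
      subresources:
        status: {} # the status records the md5 of the backing GHS resource
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                secretName: # name of the secret with the GitHub App credentials (assumed)
                  type: string
            status:
              type: object
              x-kubernetes-preserve-unknown-fields: true
```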
CRD definition for the resource responsible for generating/updating the docker config JSON with the current GHS token.
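A sketch that mirrors the ArgoCDRepo CRD above; only the names differ, and the schema is elided here:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: dockerconfigjsons.githubapp.technicaldomain.xyz
spec:
  group: githubapp.technicaldomain.xyz
  scope: Namespaced
  names:
    kind: DockerConfigJson
    plural: dockerconfigjsons
    singular: dockerconfigjson
  versions:
    - name: v1
      served: true
      storage: true
      subresources:
        status: {}
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```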
CRD definition to store all generated GHS tokens.
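A sketch; the plural name and the spec fields are assumptions:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: ghs.githubapp.technicaldomain.xyz
spec:
  group: githubapp.technicaldomain.xyz
  scope: Namespaced
  names:
    kind: GHS
    plural: ghs
    singular: ghs
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                token: # the short-lived GHS token (assumed field name)
                  type: string
                expiresAt: # token expiration timestamp (assumed field name)
                  type: string
```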
Please pay attention: we deliberately make these custom resources namespaced so that access to them can be restricted in the future if needed.
Then, we need to define types in pkg/apis/githubappjwt2token/v1/types.go to be able to work with the defined CRDs. We also need to add special annotations to the code to tell code-generator what should be generated for each type.
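For example, here is a sketch of the general shape of such a type; the spec and status fields are my assumptions, and the authoritative definition is in types.go, linked below:

```go
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// ArgoCDRepo tells the controller which ArgoCD repositories to update.
type ArgoCDRepo struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ArgoCDRepoSpec   `json:"spec"`
	Status ArgoCDRepoStatus `json:"status,omitempty"`
}

// ArgoCDRepoSpec holds user-provided configuration (field names assumed).
type ArgoCDRepoSpec struct {
	SecretName string `json:"secretName"`
}

// ArgoCDRepoStatus records the md5 of the GHS resource backing this repo.
type ArgoCDRepoStatus struct {
	GHS string `json:"ghs,omitempty"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// ArgoCDRepoList is a list of ArgoCDRepo resources.
type ArgoCDRepoList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata"`

	Items []ArgoCDRepo `json:"items"`
}
```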
You can find the full implementation here: types.go
Now we can run ./hack/update-codegen.sh to generate the clientset, informers, and listers. All the generated code is located under the pkg/generated path of the repo. With that in place, we can move on to the implementation logic of our controllers.
Controller Logic
The logic for controller_argocdrepo.go and controller_dockerconfigjson.go is pretty similar and is described in the diagram below:
Every resource of kind ArgoCDRepo or DockerConfigJson in the githubapp.technicaldomain.xyz/v1 group is processed by the appropriate controller. The controller checks the resource status (a subresource), where the md5 of the corresponding GHS custom resource is recorded. It then checks whether that GHS resource exists; if it does, nothing more needs to be done. Otherwise, the controller calls a special function that retrieves the GitHub App private key, GitHub App ID, and GitHub App Installation ID from a secret (the secret name is defined in a field of the custom resource).
Next, the controller creates and signs a JWT using the GitHub App private key and App ID, then calls the GitHub API endpoint for the given Installation ID to exchange this JWT for a GHS token. It calculates the md5 of this token, creates a new GHS resource, and stores the token and its expiration information inside. The md5 serves as the name of the GHS resource, and this name is also recorded in the ArgoCDRepo or DockerConfigJson status field.
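For illustration, here is a minimal sketch of that JWT-to-GHS exchange. It assumes the github.com/golang-jwt/jwt/v5 library and a PKCS#1 private key (which is what GitHub issues for Apps); the controller's actual code may differ:

```go
package ghtoken

import (
	"crypto/x509"
	"encoding/json"
	"encoding/pem"
	"fmt"
	"net/http"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// mintGHSToken signs an App JWT and exchanges it for an installation token.
func mintGHSToken(privateKeyPEM []byte, appID, installationID string) (string, time.Time, error) {
	block, _ := pem.Decode(privateKeyPEM)
	if block == nil {
		return "", time.Time{}, fmt.Errorf("invalid PEM data")
	}
	key, err := x509.ParsePKCS1PrivateKey(block.Bytes)
	if err != nil {
		return "", time.Time{}, err
	}

	// The App JWT carries the App ID as issuer and may live at most 10 minutes.
	now := time.Now()
	token := jwt.NewWithClaims(jwt.SigningMethodRS256, jwt.RegisteredClaims{
		Issuer:    appID,
		IssuedAt:  jwt.NewNumericDate(now.Add(-30 * time.Second)), // allow clock drift
		ExpiresAt: jwt.NewNumericDate(now.Add(9 * time.Minute)),
	})
	signed, err := token.SignedString(key)
	if err != nil {
		return "", time.Time{}, err
	}

	// Exchange the JWT for an installation-scoped GHS token.
	url := fmt.Sprintf("https://api.github.com/app/installations/%s/access_tokens", installationID)
	req, err := http.NewRequest(http.MethodPost, url, nil)
	if err != nil {
		return "", time.Time{}, err
	}
	req.Header.Set("Authorization", "Bearer "+signed)
	req.Header.Set("Accept", "application/vnd.github+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", time.Time{}, err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return "", time.Time{}, fmt.Errorf("token exchange failed: %s", resp.Status)
	}

	var out struct {
		Token     string    `json:"token"`
		ExpiresAt time.Time `json:"expires_at"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", time.Time{}, err
	}
	return out.Token, out.ExpiresAt, nil
}
```

The md5 of the returned token (for example, fmt.Sprintf("%x", md5.Sum([]byte(token)))) would then serve as the name of the GHS resource.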
After that, based on the type of the resource, the controller looks up all ArgoCD repositories defined in the ArgoCDRepo custom resource and updates their password field to the new token. For DockerConfigJson the story is much the same, but in that case the controller generates a docker config instead.
The logic implemented in controller_ghs.go is much more straightforward. Here the Kubernetes controller just checks the resource's expiration time, and if less than 15 minutes remain, it simply deletes the resource. That's it.
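The check itself boils down to something like this (a sketch; the field holding the expiration time is an assumption):

```go
package ghtoken

import "time"

// isExpiringSoon reports whether a GHS resource should be deleted:
// true when less than 15 minutes remain before the token expires.
func isExpiringSoon(expiresAt time.Time) bool {
	return time.Until(expiresAt) < 15*time.Minute
}
```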
Local Run and Debug
To debug this controller, you should have a deployed and accessible Kubernetes cluster (k3s works, I’ve checked).
To run the controller locally, please ensure that your kubeconfig is up to date and that the selected context points to the cluster you want to use for debugging. Then simply run go run . -kubeconfig=$HOME/.kube/config. The controller will be compiled and will try to connect to your Kubernetes cluster.
To work with your favorite debugger and set breakpoints, please refer to your IDE's guidelines and best practices.
Installation
This controller can be easily installed using the github-app-jwt2token-controller Helm chart.
Here is an example of an ArgoCD application:
Folder structure:
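A plausible minimal layout for an umbrella chart (file names are illustrative):

```
.
├── Chart.yaml
└── templates
    └── argocdrepo.yaml
```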
And the contents of these files:
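First, a sketch of Chart.yaml that pulls the controller chart in as a dependency; the chart version and repository URL are placeholders, so use the Helm repository from the TL;DR section:

```yaml
apiVersion: v2
name: github-app-jwt2token
description: Umbrella chart that deploys the jwt2token controller
version: 0.1.0
dependencies:
  - name: github-app-jwt2token-controller
    version: "0.1.0"                      # placeholder chart version
    repository: https://example.com/helm  # placeholder repository URL
```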
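Then templates/argocdrepo.yaml with the custom resource itself; the spec field names are my assumptions based on the controller description above:

```yaml
apiVersion: githubapp.technicaldomain.xyz/v1
kind: ArgoCDRepo
metadata:
  name: technicaldomain-argo-repos # illustrative name
spec:
  # Secret holding the GitHub App credentials (see the example below).
  secretName: technicaldomain-gha-argo-app
```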
A secret named technicaldomain-gha-argo-app should be created in advance in the same namespace where the controller is deployed.
Here is an example of such a secret.
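A sketch; the key names inside the secret are my assumptions, so align them with what the controller actually expects:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: technicaldomain-gha-argo-app
type: Opaque
stringData:
  appId: "123456"            # GitHub App ID (placeholder)
  installationId: "1234567"  # GitHub App Installation ID (placeholder)
  privateKey: |              # GitHub App private key in PEM format
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
```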
Conclusion
Creating a Kubernetes controller is not a complicated process. The official documentation and examples give you a quick understanding of what you need to do and how to do it. Creating your own controller and custom resources unlocks the real power of Kubernetes and brings genuine flexibility in extending its functionality.
Unfortunately, my controller is currently limited due to the absence of necessary functionality from GitHub’s side. However, I believe that it will be implemented someday, allowing me to enjoy all the features provided by my controller.