
6 posts tagged with "gardener"


· 2 min read
Jens Schneider

TLDR;

From May 22nd until 26th, colleagues from SAP, StackIT, x-cellent, and 23 Technologies met for another Gardener hackathon. One outcome is a new repository collecting the hackathon results. Go ahead and check out the repo for a concise summary of all past hackathons and information on future hackathons.

Another great experience with great achievements

When we met on Monday, May 22nd, we first synchronized our expectations for the week. Almost everyone agreed that having a good time together was among the main expectations. A few days after the hackathon, I can definitely state that we had a good and productive time together. From the social perspective, we enjoyed the fruitful discussions during lunch or dinner. From the hacking perspective, we were really fascinated by the progress made on some topics which had made it onto the agenda several times before:

  • Supporting pure IPv6 shoot clusters and
  • Replacing the bash scripts for node provisioning with a Go-based approach.

Moreover, we were working on a more research-oriented topic dealing with the deployment of "masterful clusters" aka "autonomous shoots". Even though the final concept for "gardener-like initial clusters" was not developed during the hackathon, the experience collected on this challenging task is crucial for further steps. Besides the bigger topics mentioned above, we brought the following tasks close to (or even into) production:

  • We moved the machine-controller-manager deployment responsibility to the gardenlet
  • We introduced an InternalSecret resource in the Gardener API
  • We replaced the ShootStates with data in backup buckets
  • We found a concept for Garden cluster access for extensions in Seed clusters.

Of course, there are still open questions, and not every issue was solved during this short week. Therefore, we are happy that the colleagues from x-cellent volunteered to organize the next Gardener hackathon in November/December 2023.

Conclusion

Once again, the Gardener hackathon was a great experience with great achievements for the overall project. The community work towards a "managed Kubernetes done right" service is still gathering pace, which forms a great basis for all future development.

· 7 min read
Marius Wernicke

TLDR;

We recently built new Kubernetes clusters on Hetzner Cloud and faced several challenges getting the cluster up and running.

This started with the selection of the correct Kubernetes version, the CNI solution, and the actual deployment of 23KE. Spoiler: we had to change the CNI solution and reset the containerd configuration.

If you follow the instructions in this blog post, you can build a working Gardener cluster.


Introduction

In times of rising costs and efforts to reduce the CO2 footprint in all areas of life, we also wanted to cut costs and emissions sustainably in our day-to-day operations.

We had been running okeanos.dev on a managed Kubernetes cluster on Azure. This is very expensive, so we wanted to minimize these costs. In this case, the European cloud from Hetzner was the obvious choice. The next question was how we could best build a k8s cluster there.

After some research, ClusterAPI provider for Hetzner (CAPH) from Syself turned out to be the optimal solution.

When testing the provider, we had to overcome a few challenges, which we would like to discuss in the following.

Requirements

You need to install some basic tools to work with CAPH and Gardener. It makes sense to set up a management VM (on Hetzner) running a kind cluster, which then hosts the management cluster.
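
For reference, here is a minimal sketch of how such a toolchain could be installed on the management VM (Linux amd64; the version numbers and download paths below are examples, not pinned requirements):

# kubectl
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl
# kind, which runs the management cluster
curl -Lo kind https://kind.sigs.k8s.io/dl/v0.18.0/kind-linux-amd64
sudo install -m 0755 kind /usr/local/bin/kind
# clusterctl, the Cluster API CLI
curl -Lo clusterctl https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.4.2/clusterctl-linux-amd64
sudo install -m 0755 clusterctl /usr/local/bin/clusterctl
# helm, used below for the CNI, CCM, and CSI deployments
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash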

Setup the management cluster

To start, let's create a kind cluster with a customized Kubernetes version (currently it is important that the Kubernetes version is below 1.26; we use v1.25.9 here):

kind create cluster -n my-cluster --image=kindest/node:v1.25.9

When the cluster is up, initialize the management cluster with CAPH

clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure hetzner

and export your environment variables

export HCLOUD_SSH_KEY="MY_SSH_KEY"
export CLUSTER_NAME="my-cluster"
export HCLOUD_REGION="nbg1"
export CONTROL_PLANE_MACHINE_COUNT=3
export WORKER_MACHINE_COUNT=3
export KUBERNETES_VERSION=1.25.9
export HCLOUD_CONTROL_PLANE_MACHINE_TYPE=cpx31
export HCLOUD_WORKER_MACHINE_TYPE=cpx41
export HCLOUD_TOKEN="YOUR_HCLOUD_TOKEN_HERE"

Of course, the region can also be any other Hetzner location, e.g. hel1.

To be able to build the machines, a secret must be created:

kubectl create secret generic hetzner --from-literal=hcloud=$HCLOUD_TOKEN

You could also build yourself a customized node image, but we didn't do that. We used the Ubuntu 22.04 image, which is available from Hetzner.

Now let's create the my-cluster.yaml with the private network flavor as in the quickstart-guide:

clusterctl generate cluster my-cluster --kubernetes-version v1.25.9 --control-plane-machine-count=3 --worker-machine-count=3 --flavor hcloud-network > my-cluster.yaml

Modifications

After creation you need to modify the my-cluster.yaml. Remove the following blocks (there are two of them), because we need to reset the containerd config

-  - content: |
-      version = 2
-      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
-        runtime_type = "io.containerd.runc.v2"
-      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
-        SystemdCgroup = true
-      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun]
-        runtime_type = "io.containerd.runc.v2"
-      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun.options]
-        BinaryName = "crun"
-        Root = "/usr/local/sbin"
-        SystemdCgroup = true
-      [plugins."io.containerd.grpc.v1.cri".containerd]
-        default_runtime_name = "crun"
-      [plugins."io.containerd.runtime.v1.linux"]
-        runtime = "crun"
-        runtime_root = "/usr/local/sbin"
-    owner: root:root
-    path: /etc/containerd/config.toml
-    permissions: "0744"

and add

+  - mkdir /etc/containerd
+  - containerd config default > /etc/containerd/config.toml
+  - sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
   - systemctl daemon-reload && systemctl enable containerd && systemctl start containerd
+  - sysctl fs.inotify.max_user_instances=8192
+  - sysctl fs.inotify.max_user_watches=524288

The containerd config from the template is missing some options. In addition, SystemdCgroup must be set to true, and the inotify limits need to be increased. Only then will the vpn-seed-server start, which is created when a shoot is created.
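
To double-check that these modifications actually took effect, you can SSH into one of the nodes and run a quick sanity check (not part of the official guide):

grep SystemdCgroup /etc/containerd/config.toml    # should print: SystemdCgroup = true
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
systemctl is-active containerd                    # should print: active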

When you have finished the modifications, you can start building the worker cluster by applying the modified file on the management cluster:

kubectl apply -f my-cluster.yaml

With watch clusterctl describe cluster my-cluster you can watch one control plane and three workers being built.
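
For example:

watch clusterctl describe cluster my-cluster   # overall topology and conditions
kubectl get machines -A                        # the individual machines and their phases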

Setup the worker cluster

NOTE: From now on you should make sure that you are working on the newly built cluster. Make sure that you have exported the Kubeconfig for the worker cluster!

Get the kubeconfig of the worker cluster

clusterctl get kubeconfig my-cluster > /path/to/my/worker-cluster.kc

and export it to work on the cluster (you can do this on your local machine; you no longer need to work on the management VM):

export KUBECONFIG=/path/to/my/worker-cluster.kc

At the "Deploy a CNI solution" step, keep in mind to install Calico instead of Cilium as suggested in the guide. We had a lot of trouble with Cilium.

helm repo add projectcalico https://docs.tigera.io/calico/charts
kubectl create namespace tigera-operator
helm install calico projectcalico/tigera-operator --version v3.25.1 --namespace tigera-operator

Wait until Calico is ready.
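
One way to check this, assuming the tigera-operator installation from above:

kubectl get tigerastatus              # all components should eventually report AVAILABLE=True
kubectl get pods -n calico-system -w  # the calico-node pods should become Running on every node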

Now deploy the CCM for hcloud only. Set privateNetwork.enabled=true, because we need a private network for the cluster to function properly:

helm repo add syself https://charts.syself.com
helm upgrade --install ccm syself/ccm-hcloud --version 1.0.11 \
--namespace kube-system \
--set secret.name=hetzner \
--set secret.tokenKeyName=hcloud \
--set privateNetwork.enabled=true

At the end we need a CSI to build volumes on hcloud:

cat << EOF > csi-values.yaml
storageClasses:
- name: hcloud-volumes
  defaultStorageClass: true
  reclaimPolicy: Retain
EOF

helm upgrade --install csi syself/csi-hcloud --version 0.2.0 \
--namespace kube-system -f csi-values.yaml

The remaining control planes should now be built and added to the cluster. Wait until the entire cluster has Ready status before proceeding.
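
A simple way to watch for this state:

kubectl get nodes -o wide -w   # wait until all control planes and workers report STATUS=Ready
kubectl get pods -A            # the CCM, CSI, and Calico pods should all be Running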

NOTE: If you want to transform the workload cluster into a management cluster, you need to follow the "move" steps in the guide. However, it is recommended to keep a management VM with kind installed and host the management cluster there.

Installing 23KE

If you plan to install 23KE, you need to keep in mind that there are a few customizations that need to be made for hcloud.

If you have installed 23KE and the data from the installation is in a repository, the following file needs to be adjusted.

Add the following lines to gardenlet-values.yaml:

settings:
+  loadBalancerServices:
+    annotations:
+      load-balancer.hetzner.cloud/location: nbg1
+      load-balancer.hetzner.cloud/ipv6-disabled: "true"
+      load-balancer.hetzner.cloud/disable-private-ingress: "true"

Commit and push your changes. You can execute flux reconcile source git 23ke-config to speed things up.
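
For example (the file path and commit message are just placeholders for your 23KE config repository):

git add gardenlet-values.yaml
git commit -m "Add Hetzner load balancer annotations"
git push
flux reconcile source git 23ke-config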

If an ingress doesn't get a public IP, you can add the annotations manually, e.g. for the nginx-ingress-controller:

kubectl annotate svc -n garden nginx-ingress-controller load-balancer.hetzner.cloud/location=nbg1 \
load-balancer.hetzner.cloud/ipv6-disabled=true \
load-balancer.hetzner.cloud/disable-private-ingress=true

Now everything should come up, and e.g. the dashboard should be accessible. If something is not running smoothly, the cluster can be inspected with k9s, which gives you an easy overview of what the problem is.

In the next steps, secrets can be added to connect to a public or private cloud. With these and a CloudProfile, it is then possible to build and run shoots (k8s clusters).
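
As a rough sketch of what such a secret and its SecretBinding can look like (all names, the namespace, the provider type, and the credential key are placeholders; the exact fields depend on the provider extension you use):

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-cloud-secret
  namespace: garden-myproject
type: Opaque
stringData:
  token: YOUR_CLOUD_TOKEN_HERE   # key name depends on the provider extension
---
apiVersion: core.gardener.cloud/v1beta1
kind: SecretBinding
metadata:
  name: my-cloud-secret
  namespace: garden-myproject
provider:
  type: hcloud
secretRef:
  name: my-cloud-secret
  namespace: garden-myproject
EOF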

Summary

With all these steps, you can build a functional Gardener cluster on Hetzner Cloud. This gives you a low-cost Kubernetes platform to run whatever you want on, and 23KE on top of it is already very cool.

This setup has allowed us to drastically reduce the cost of a fully functional Gardener. As a pleasant side effect, we are no longer dependent on a cloud in the US, but now run Gardener in a data center in Germany with a German operator that is GDPR compliant and uses green electricity (good for our CO2 footprint).

· 2 min read
Jens Schneider

Thanks to everyone

In the last week of September 2022, I participated in the Gardener Hackathon. Unfortunately, I needed to attend the event remotely, as I caught Covid the week before. Therefore, I want to thank everyone who made this experience possible for me anyway.

Even remote hacking together is great

As Tim also participated remotely, we formed a remote hacking team contributing to the development of the Gardener extension-registry-cache. Of course, we were in touch with the on-site contributors via video calls, which laid the foundation for three highly productive hacking days. Right after nailing down a common todo list, we distributed the workload to the on-site and remote teams and started hacking. From time to time, we held synchronization meetings, so that everyone was up to date and the current state of work was not only reflected in commits, branches, and pull requests but also communicated to everyone in the team.

Beyond the internal communication within the team, Tim and Rafael organized a demo session on day 2. It was really amazing to see the progress made with respect to the various topics covered by the hackathon.

Conclusion

Clearly, the hackathon was a great event, and even hacking together remotely was a great experience. Of course, the social aspect of working together on-site cannot be mimicked. Therefore, I am already looking forward to the next Gardener hackathon which I hopefully can attend on-site.

· 6 min read

TLDR;

Recently, we consolidated Gardener-related Helm charts in a Helm repository hosted on GitHub Pages of the gardener-charts repository. For this purpose, we implemented a custom chart release bot, the gardener-chart-releaser. Keep on reading to find out more.


Introduction

Installing Gardener is a complicated process, even though the garden-setup installer is provided in the same GitHub organization space. One of the reasons is that the Gardener-related Helm charts are spread over multiple repositories. Consequently, other Helm chart-based installers like Schrodit's gardener-installation popped up. Moreover, we consolidated Gardener-related Helm charts in the 23ke-charts repository using a simple Python script and developed a very basic installation approach based on these charts. The chart collection was released as a Helm repository hosted on GitHub Pages by helm's chart-releaser. With the consolidation of the Gardener charts, we faced the issue that the collected charts needed to be kept up to date somehow. There our journey began, and we started to keep track of the upstream charts with the help of renovate. However, renovate is designed to keep dependencies up to date and reaches its limits when it comes to tracking multiple versions of the same piece of software. First, we tried to work around this by tracking three branches for the last three minor versions and shifting the latest branch as soon as a new minor release appeared. Even though this approach could potentially succeed, we ran into issues from time to time due to missing automatic merges or failures in the branch-shift routine. As a consequence, we decided to build our own tracking tool, the gardener-chart-releaser.

The Gardener Chart Releaser

As already stated above, we wanted to keep track of the last three minor versions of all Gardener-related Helm charts and release these charts in a single Helm repository. In order to achieve this, we needed to make some decisions. First, we needed to drop our old Python-based Helm chart import script, as working with Helm charts in code is way easier when using the Go-based Helm packages directly. Further, helm's chart-releaser is written in Go, and there are solid implementations of Git and GitHub modules in Go. So, we reimplemented our chart import and release functionality in Go with a view to tracking the last three minor releases. Another design goal was to keep the tool simple, especially from the user's point of view. As of now, the user only needs to worry about a configuration file in YAML format. Consider the following example:

# contents of config.yaml
destination:
  owner: gardener-community
  repo: gardener-charts
sources:
  - name: gardener-controlplane
    version: v1.53.0
    repo: gardener/gardener
    charts:
      - charts/gardener/controlplane
      - ...

The destination map defines the GitHub owner and repository where the collected charts are released. Under sources, a list of source Helm charts is provided, each with an owner/repo entry and a list of paths pointing to the charts to be released. With a valid config.yaml, a user can simply run

export GITHUB_TOKEN=....
gardener-chart-releaser update

and the configured charts will be collected and released. Note that the version field is ignored for the actual release process, as we want to track several versions. However, the version field has its own purpose. Keep on reading ;-)

Export the charts to a local directory

Just by collecting and releasing charts to a GitHub repository, you won't get to see the charts' contents at all. But what if you want to work with the charts themselves in a local development scenario? For this purpose, you can export the charts to a local directory instead of releasing them to a remote repo. Just call

gardener-chart-releaser export

and the charts will be exported to a local ./charts directory. In this case, the version field in config.yaml defines the version to be exported.

Update all version fields to the latest versions

As the entire Gardener ecosystem is moving quickly, your config.yaml will soon be outdated. In order to avoid manual updates of the version fields, we introduced another command called fetchLatestVersions. If you run

gardener-chart-releaser fetchLatestVersions

your config.yaml will be updated, so that you will find the versions of the latest upstream releases in the file. Of course, it only makes sense to run this before a local export to make sure that the most recent versions of charts are exported.

Handling Gardener extensions

You might be wondering how Gardener extensions are managed in this approach, as these are not provided as Helm charts upstream. Remember that we wanted to build a single point of truth for a Gardener provisioning, and consequently the gardener-chart-releaser also packages Gardener extensions as charts. For each entry like e.g.

sources:
  - name: runtime-gvisor
    version: v0.5.1
    repo: gardener/gardener-extension-runtime-gvisor
    charts:
      - controller-registration

in the configuration file, it will generate a Helm chart for the specified extension and release it the same way as the Gardener core charts. Furthermore, this approach provides the opportunity to release charts for the extension itself (i.e. controllerRegistration and controllerDeployment) and the charts for the admission controllers as sub-charts. For instance

sources:
  - name: provider-azure
    version: v1.29.0
    repo: gardener/gardener-extension-provider-azure
    charts:
      - controller-registration
      - charts/gardener-extension-admission-azure

will package a top-level chart called provider-azure with sub-charts for the extension controller and admission controller, respectively.
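
Once released, consuming the charts is a standard Helm workflow. A sketch, assuming the charts are published via GitHub Pages of the gardener-charts repository configured above (the exact repository URL may differ):

helm repo add gardener-charts https://gardener-community.github.io/gardener-charts
helm repo update
helm search repo gardener-charts/provider-azure --versions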

Running the release process nightly

As we want to be as transparent as possible, we set up a GitHub action, so that the chart-releaser is run nightly. This will ensure that we do not miss any important upstream change and the Helm repository is always up to date.
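
A minimal sketch of what such a nightly workflow can look like (the file name, the cron schedule, and the way the releaser is invoked are assumptions, not the actual workflow of the repository):

cat << 'EOF' > .github/workflows/nightly-chart-release.yaml
name: nightly-chart-release
on:
  schedule:
    - cron: "0 2 * * *"   # run once per night
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
      # path to the main package is an assumption
      - run: go run ./cmd/gardener-chart-releaser update
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
EOF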

Summary

The gardener-chart-releaser enables a single point of truth for Gardener related Helm charts and could be a good starting point for custom Gardener installation routines.

· 12 min read
Jens Schneider

TLDR;

Recently, we developed the gardener-extension-mwe, which serves as a minimal working example for Gardener extensions. If you are only interested in the code, go and check out the repository on GitHub. If you want to learn more, keep on reading.


Introduction

Starting the development of new Gardener extensions can be challenging. As the Gardener documentation is fairly complex and driven by the history of the project, getting into the overall concepts is not easy. Moreover, code examples for Gardener extensions reside in separate Git repositories lacking documentation. However, early in March 2022 the gardener-extension-shoot-networking-filter was published, which is more beginner-friendly than e.g. the cloud-provider extensions. In particular, it extends Shoot clusters by the use of managed resources, which might be more straightforward than the interaction with a cloud service provider as performed by e.g. the gardener-extension-provider-aws. Thus, gardener-extension-shoot-networking-filter provides a reasonable starting point for new developments which target automated deployments into Shoot clusters.

However, going beyond the identification of a starting point, it makes sense to take a closer look at the concepts for extension development. In the extension directory of the Gardener Git repository, we find several Go packages defining interfaces which can be implemented by extension controllers. Put simply, we can search for files matching pkg/controller/*/actuator.go in order to find interfaces for controllers acting on the corresponding resources. E.g., if our controller defines a Go type, called actuator, implementing the interface defined in pkg/controller/extension/actuator.go, the controller will reconcile resources of type *extensionsv1alpha1.Extension. Consequently, the controller will take care of all the steps we define in the Reconcile method of the actuator whenever Extension resources are updated. Of course, there might be more complex scenarios where reconciling Extension resources would be insufficient. In these cases, other interfaces, such as the one defined in pkg/controller/infrastructure/actuator.go, would need to be implemented. However, these cases lie beyond the scope of this blog post.

Next, we will dive into some basic workflows for Gardener extension development.

Basic workflows

In software engineering, it is reasonable to develop on a local machine with a controllable toolchain. As already mentioned above, Gardener extensions are implemented in Go. Therefore, let's identify a few requirements for the development:

  • An installation of Go
  • A text editor, which (optionally) supports gopls
  • A Go debugger, which is most likely to be Delve
  • A Gardener development environment. This can be set up by
    • Running Gardener locally (also check out #5548 if you are running on Linux)
    • Setting up a development Gardener on some cloud infrastructure. This definitely comes closer to the real-world scenario your extension will eventually live in. The block diagram below depicts the overall setup, including the requirements from above.
         ┌──────────────────────────────────────────────┐
         │             Development Computer             │
         ├──────────────────────────────────────────────┤
         │                                              │
         │ ┌──────────────────────────────────────────┐ │
         │ │ - Your toolchain                         │ │
         │ └──────────────────────────────────────────┘ │
         │                                              │
         │ ┌────────────┐          ┌──────────────┐     │
         │ │Kubeconfigs │          │Your code     │     │
         │ └──┬──────┬──┘          └────────┬─────┘     │
         │    │      │                      │           │
         └────┼──────┼──────────────────────┼───────────┘
              │      │                      │
              │      │apply                 │
         apply│      │resources             │reconcile
     resources│      │                      │resources
              │      └───────────────────┐  │
              │                          │  │
              ▼                          ▼  ▼
     ┌─────────────────┐           ┌─────────────────┐
     │ Garden Cluster  │           │  Seed Cluster   │
     ├─────────────────┤           ├─────────────────┤
     │- Project        │           │- Extension      │
     │- Seed           │           │- Controller     │
     │- Shoot          │           │- ...            │
     │- ...            │           │                 │
     └─────────────────┘           └─────────────────┘

As you can see, the code for the extension controller runs on your local development machine and reconciles resources, such as Extension resources, in the Seed cluster. Of course, you will not have dedicated clusters for the Garden cluster and Seed cluster when running Gardener locally. However, the overall building blocks stay conceptually the same. Once these requirements are met, you are good to go for your first steps with Gardener extensions. Wait! I have to set up an entire Gardener if I want to rapidly prototype an extension controller? Yes and no. Depending on your development/test case, it might be reasonable to "fake" a Gardener environment on a vanilla Kubernetes cluster. We will get to this development case below. For rock-solid testing, however, you most probably need a real-world Gardener environment.

The minimal working example

As of May 2022, we provide a Minimal Working Example (MWE) for Gardener extensions. Of course, this example did not come out of nowhere. Therefore, we review the development process and break the example down into its components in the following. Taking a look at other extensions, we observe that we need some boilerplate code for running our controller, so that it works together with all the other Gardener components. For the MWE, we collected the relevant files and adjusted them to our needs. Thus, we can have a look at the cmd directory of the gardener-extension-mwe and find a simple structure with 3 files, which are responsible for starting our controller and ensuring that it acts on the defined resources.

cmd
└── gardener-extension-mwe
├── app
│   ├── app.go
│   └── options.go
└── main.go

If you want to start the development of a new extension, you do not need to worry too much about these files. Most probably, you can simply copy them over and adjust some variables to your needs. Actually, we also copied these files from the gardener-extension-shoot-networking-filter and adjusted them to the needs of the MWE. Given that we now have the boilerplate code in the cmd directory available, we can go ahead and define a type which implements an actuator interface. For this, we need the files in the pkg directory. Let's take a look at the structure:

pkg
└── controller
└── lifecycle
├── actuator.go
└── add.go

Here, too, we find only two files, and the implementation of the interface is located in actuator.go. This is the place where most of the magic of your new controller happens. In the case of the MWE, the actuator will only output logs when Extension resources are reconciled. Obviously, all code is written in Go, and consequently we will also need to pull in some dependencies. For this, we need the files go.mod and go.sum. Typically, the source code of the dependencies is also committed to the repository, which has advantages and downsides. The main advantage is that all code needed for building the application is available in the repository. On the other hand, committing several thousand lines of code during vendoring clutters the commit history of the repository. Therefore, we only provide the files mentioned above in the initial commit of the MWE and perform the vendoring (by running go mod vendor) in another commit. In this state of the repository, we can already take our first steps with the new controller in a vanilla Kubernetes cluster.

Rapid prototyping on a Kubernetes cluster (tested with version 1.22.6)

Assuming you have read the basic workflows section, we are ready to dive into the exemplary development techniques. So let's fetch the code and setup the repository:

git clone https://github.com/23technologies/gardener-extension-mwe.git
cd gardener-extension-mwe
git checkout 3c238bd # checkout the commit containing first vendoring
mkdir dev
cp PATH-TO/KUBECONFIG.yaml dev/kubeconfig.yaml

Now, we can already start our controller and should get some output showing that it was started:

go run ./cmd/gardener-extension-mwe --kubeconfig=dev/kubeconfig.yaml  --ignore-operation-annotation=true --leader-election=false

However, we will not observe any other output, since the controller is still freewheeling. Remember, reconciliation is triggered by Extension resources. As our vanilla Kubernetes cluster does not know anything about Extension resources yet, we will have to "fake" the Gardener environment. In other Gardener extensions, we find resources for a "fake" Gardener setup in the example directory. Therefore, we prepared the example directory in another commit. Let's check it out: open a new terminal pane, navigate to your repository, and go for

git checkout 50f7136 # this commit entails the example directory
export KUBECONFIG=dev/kubeconfig.yaml
kubectl apply -f example/10-fake-shoot-controlplane.yaml
kubectl apply -f example/20-crd-cluster.yaml
kubectl apply -f example/20-crd-extension.yaml
kubectl apply -f example/30-cluster.yaml

Now, the cluster simulates a Gardener environment and we can apply an Extension resource:

kubectl apply -f example/40-extension.yaml

Take another look at the terminal running our controller now. It should have logged a "Hello World" message. Of course, we can also delete the Extension resource again and the controller will tell us that the Delete method was called.

kubectl delete -f example/40-extension.yaml

As we now have the code and a way to trigger its execution, we can move on to a more interactive approach based on the Delve debugger. Let's start all over again and run our controller using Delve

dlv debug ./cmd/gardener-extension-mwe -- --kubeconfig=dev/kubeconfig.yaml  --ignore-operation-annotation=true --leader-election=false

and we will end up in a command line with a (dlv) prompt. Next, we ask dlv to break in the Reconcile method

(dlv) b github.com/23technologies/gardener-extension-mwe/pkg/controller/lifecycle.(*actuator).Reconcile

and continue the execution of the controller

(dlv) c

Afterwards, you should observe some output of the controller, again. However, Delve will not break the execution until the Reconcile method is called. Thus, we apply the Extension resource once again

kubectl apply -f example/40-extension.yaml

and Delve will stop in the Reconcile method. Now, you can step through the code, see where it enters code paths pointing into the vendor directory, and inspect the values of certain variables. Obviously, the number of variables you can inspect is limited in the MWE, but e.g. we can have a look at the *extensionsv1alpha1.Extension passed to the Reconcile method

(dlv) p ex.ObjectMeta.Name

which should print "mwe". Generally, this is a great way to approach unknown software, since you will quickly get a feeling for the different components. Thus, we expect that you can benefit from this workflow when developing your own extensions. Even though this approach offers capabilities for rapid prototyping, it is still limited, since we cannot act e.g. on Shoot clusters as available in a real-world Gardener. Therefore, we step into the development in a Gardener environment in the next section.

Development in a real Gardener environment

Developing and testing our extension in a real-world Gardener requires a ControllerRegistration resource in the Garden cluster, which causes the installation of the controller in Seed clusters. Generally, the installation is performed via Helm charts, and consequently we need to provide these charts in the repository. Also for the MWE, we prepared the charts directory containing all Helm charts relevant for the deployment of the controller. Note that this set of charts is very limited, and in production scenarios you might want to add something like a VerticalPodAutoscaler, as done e.g. in the gardener-extension-shoot-networking-filter. However, implementing production-ready charts goes beyond the scope of this post, and consequently the MWE charts were added in another commit. These charts target running the controller in Seed clusters. Thus, in charts/gardener-extension-mwe/values.yaml, the image for the deployment is defined. However, we do not want to push that image to a public container registry for each and every change we make to our code. Moreover, we want to run the controller on our local machine for development purposes. Therefore, we need to tweak the values before generating the controller-registration.yaml. Let's go through it step by step:

git clone https://github.com/23technologies/gardener-extension-mwe.git
cd gardener-extension-mwe
mkdir dev
cp PATH-TO/KUBECONFIG-FOR-SEED.yaml dev/kubeconfig.yaml

Next, we generate the controller-registration.yaml such that the controller is not deployed to the Seed cluster and we can hook in the controller running locally. In particular, we set replicaCount=0 and ignoreResources=true in ./charts/gardener-extension-mwe/values.yaml before generating the controller-registration.yaml:

yq eval -i '.replicaCount=0 | .ignoreResources=true' charts/gardener-extension-mwe/values.yaml
./vendor/github.com/gardener/gardener/hack/generate-controller-registration.sh mwe charts/gardener-extension-mwe v0.0.1 example/controller-registration.yaml Extension:mwe

Now, let's deploy the generated controller-registration.yaml into the Garden cluster:

export KUBECONFIG=PATH-TO/GARDEN-CLUSTER-KUBECONFIG.yaml
kubectl apply -f example/controller-registration.yaml

From now on, Extension resources of the type mwe will be deployed to Seed clusters when new Shoot clusters with

---
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: bar
  namespace: garden-foo
spec:
  extensions:
    - type: mwe
...

are created. In our controller, the Reconcile method will be triggered, when these Extension resources are reconciled. Therefore, we can run the extension controller with Delve now

dlv debug ./cmd/gardener-extension-mwe -- --kubeconfig=dev/kubeconfig.yaml  --ignore-operation-annotation=true --leader-election=false --gardener-version="v1.44.4"

and we can perform debugging operations as explained above. Remember, Delve will not break the execution until the Reconcile method is called. Now, Gardener will create Extension resources for Shoots, which will trigger the Reconcile method of our controller. Consequently, we open a new terminal pane in the repository root and execute

export KUBECONFIG=PATH-TO/GARDEN-CLUSTER-KUBECONFIG.yaml
kubectl apply -f example/50-shoot.yaml

Note that it will take some time until the corresponding Extension resource is created in the Seed cluster. Hang on tight and wait for the Reconcile method to be executed. You can start investigating where your code goes using Delve now. Happy hacking!
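
While waiting, you can watch the Extension resources appear in the Seed cluster from yet another terminal (the kubeconfig path matches the setup above):

KUBECONFIG=dev/kubeconfig.yaml kubectl get extensions.extensions.gardener.cloud -A -w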

Last words

This blog post shares our experience getting started with Gardener extension development. We hope that this contribution helps you get started more quickly than we did. If you have any comments or ideas for improvement, do not hesitate to contact us. We are always willing to improve our work.

· 7 min read
Jens Schneider

TLDR;

Recently, we developed the gardener-extension-shoot-flux, which enables preconfiguring Shoot clusters. If you want to give it a try, go and check out the repository on GitHub. If you want to learn more, keep on reading.


Introduction

Flux offers a set of controllers allowing for reconciling a Kubernetes cluster with a declarative state defined in e.g. a Git repository. Thus, it enables GitOps workflows for Kubernetes clusters. Moreover, it provides a general approach to deploying software components into Kubernetes clusters. Gardener is a multi-cloud managed Kubernetes service allowing end users to create clusters with a few clicks in its dashboard. However, the user will obtain a vanilla Kubernetes cluster and has to take care of all the components to be deployed into it. Of course, the deployment can be performed manually by applying Kubernetes manifests to the cluster. On the other hand, tools like Flux can help to keep track of the deployments and automate the overall process. Thus, the combination of Gardener and Flux offers the potential of creating new Kubernetes clusters in a pre-defined state. For the end users, this results in the seamless creation of clusters with all components on their wish list installed. The gardener-extension-shoot-flux bridges the gap between Gardener and Flux and allows for reconciliation of Shoot clusters to resources defined in a Git repository. By concept, the extension operates on a per-project basis, so that clusters in different projects can be reconciled to different repositories.

The rest of this post is organized as follows: First, we will review a few use cases for this extension. Further, the general concept of the extension is outlined, and finally we provide an example on how to use the extension.

Example use cases

Development

Imagine you are developing software which will eventually run on a Kubernetes cluster in the public cloud. Moreover, you and your colleagues want to be able to perform some end-to-end tests besides running your local test suite. For these end-to-end tests, an environment mimicking the final production environment is required. Therefore, you might need tools like cert-manager or MinIO. However, you do not want to keep several testing clusters in the public cloud available for economic reasons, and, in consequence, you need to create new clusters on demand. In this case, the gardener-extension-shoot-flux comes in handy, since it allows configuring the cluster asynchronously. Put simply, you can define the desired state of your cluster in a Git repository, and new clusters will be reconciled to this state automatically. Eventually, this saves the effort of configuring the clusters manually each and every time. Of course, you could achieve something similar by hibernating the development clusters. However, in that case you are less flexible, since throwing away the cluster in case you lost track of its state comes at the price of reconfiguring the entire cluster.

CI/CD

Similar to the development use case above, you might want to run your CI/CD pipeline in Kubernetes clusters coming with a few components already installed. As your pipeline runs frequently, you want to create clusters on the fly or maybe pre-spawn just a few of them. In order to keep your pipeline simple, you can use the gardener-extension-shoot-flux for the configuration of your CI/CD clusters. This way your pipeline can focus on the actual action and does not have to perform the cluster configuration beforehand. This most probably results in cleaner and more stable CI/CD pipelines.

General concept

The general concept of this extension is visualized in the block diagram below.

┌─────────────────────────────────────────────────┐
│                Gardener operator                │
├─────────────────────────────────────────────────┤
│ - A human being                                 │──────────────┐
└────────┬───────────────────▲────────────────────┘              │
         │                   │                                   │configures
         │deploys            │read SSH-key                       │SSH-key
         │Configmap          │                                   │
         ▼                   │                                   │
┌────────────────────────────┴───────────────────────────┐       │
│                      Garden cluster                     │       │
├────────────────────────┬─────────────────────────┬─────┤       │
│ Project 1              │ Project 2               │ ... │       ▼
├────────────────────────┼─────────────────────────┼─────┤    ┌─────────────────────┐
│- Configmap containing  │- Configmap containing   │     │    │    Git repository   │
│  flux configuration    │  flux configuration     │     │    ├─────────────────────┤
│                        │                         │     │    │ - Configuration for │
│- ControllerRegistration│- ControllerRegistration │ ... │    │   shoot clusters    │
│                        │                         │     │    └─────────────────────┘
│- Shoot with extension  │- Shoot with extension   │     │               ▲
│  enabled               │  enabled                │     │               │
└────────────────────────┴─────────────────────────┴─────┘               │reconcile
         ▲                                                                │
         │read config                                                     │
         │and generate                                                    │
         │SSH-keys                                                        │
         │                                                                │
┌────────┴───────────────┐    ┌────────────────────────┐                 │
│      Seed cluster      │    │     Shoot cluster      │                 │
├────────────────────────┤    ├────────────────────────┤                 │
│- Controller watching   │    │                        │                 │
│  extension resource    │    │- Flux controllers ─────┼─────────────────┘
│        │               │    │                        │
│        │deploys        │    │- GitRepository resource│
│        ▼               │    │                        │
│- Managed resources     │    │- A main kustomization  │
│  for flux controllers  │    │                        │
│  and flux config       │    │                        │
└────────────────────────┘    └────────────────────────┘

As depicted, the Gardener operator needs to deploy a ConfigMap into the Garden cluster. This ConfigMap holds some configuration parameters for the extension controller. Moreover, the Gardener operator needs to configure an SSH key for the Git repository in case of a private repository. This key can be read from the Secret called flux-source in the Garden cluster, which is created by the extension controller. Of course, the process of adding the SSH key to the repository depends on the repository host. E.g. for repositories hosted on GitHub, the key can simply be added as a "Deploy key" in the web interface.
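
To retrieve the generated public key, you can read it from that Secret, e.g. (the namespace and the exact data key are assumptions and may differ in your setup):

kubectl -n YOUR-PROJECT-NAMESPACE get secret flux-source -o yaml
# or, if you know the data key holding the public key:
kubectl -n YOUR-PROJECT-NAMESPACE get secret flux-source -o jsonpath='{.data.identity\.pub}' | base64 -d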

The extension controller runs in the Seed clusters. Besides generating Secrets containing SSH keys, it reads the configuration from the Garden cluster and creates ManagedResources to be processed by the Gardener Resource Manager. These ManagedResources entail the resources for the Flux controllers, a GitRepository resource matching the configuration, and a main Kustomization resource. Once the Gardener Resource Manager has deployed these resources to the Shoot cluster, the Flux controllers will reconcile the cluster to the state defined in the Git repository.

You might wonder how the communication between the Seed clusters and the Garden cluster is established. This is achieved by making use of the Secret containing the gardenlet-kubeconfig, which should be available when the gardenlet runs inside the Seed cluster. Most probably, this is not the most elegant solution, but it resulted in a quick first working solution.

Example Usage

Of course, you need to install the extension before you can use it. You can find ControllerRegistrations on our GitHub release page. So, you can simply go for

export KUBECONFIG=KUBECONFIG-FOR-GARDEN-CLUSTER
kubectl apply -f https://github.com/23technologies/gardener-extension-shoot-flux/releases/download/v0.1.2/controller-registration.yaml

in order to install the extension.

For an exemplary use of the extension, we prepared a public repository containing manifests for the installation of podinfo. As a Gardener operator, you can apply the following ConfigMap to your Garden cluster:

apiVersion: v1
kind: ConfigMap
metadata:
  name: flux-config
  namespace: YOUR-PROJECT-NAMESPACE
data:
  fluxVersion: v0.29.5 # optional, if not defined the latest release will be used
  repositoryUrl: https://github.com/23technologies/shootflux.git
  repositoryBranch: main
  repositoryType: public

As the repository is public, you can create a new Shoot now and enable the extension for this Shoot. Take the snippet below as an example.

apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: bar
  namespace: garden-foo
spec:
  extensions:
    - type: shoot-flux
...

Gardener will take care of the Shoot creation process. As soon as it is available, you can fetch the kubeconfig.yaml for your new Shoot from e.g. the Gardener dashboard. Now, you can watch this cluster by

export KUBECONFIG=KUBECONFIG-FOR-SHOOT
k9s

and you should see a podinfo deployment come up. Great! You have successfully created a Shoot with the gardener-extension-shoot-flux.
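
If you prefer plain kubectl over k9s, a few quick checks along these lines should show the same picture (the namespace names are assumptions based on Flux defaults):

kubectl get pods -n flux-system                           # the Flux controllers deployed by the extension
kubectl get gitrepositories.source.toolkit.fluxcd.io -A   # should point to the configured repository
kubectl get kustomizations.kustomize.toolkit.fluxcd.io -A # the main kustomization should become Ready
kubectl get deployments -A | grep podinfo                 # the example workload from the repository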