History
According to the git log, I started this website in 2008. Since then it has been based on ikiwiki, a simple wiki compiler that generates static HTML pages.
This website has been hosted on many different physical and virtual servers since then. And now it is ...
Moving into kubernetes
It is time for its next step. When you are reading this, the website is likely already being served by a tiny container in a larger kubernetes cluster.
But why move it in the first place? Isn't a static web server running nginx good enough?
The ungleich infrastructure
Some of you know that I work for ungleich, a Swiss Open Source company with a focus on sustainability. The infrastructure at ungleich has always been evolving, and one of our earliest credos was to run anything that is potentially offered as a product ourselves. Thus any service you can get from ungleich is also run internally - anything from Matrix to Nextcloud to Mattermost to Mastodon to Netbox ..., you name it.
VM workloads are getting old
While there is still a significant number of virtual machines running at ungleich, most of the internal workload (more than 80%) was migrated to kubernetes a long time ago. The main advantage of kubernetes for ungleich is being able to run many similar services (again, such as Matrix) and deploy them using argocd.
While we are still using cdist for configuration management and for configuring servers (both bare metal and VMs), deploying applications via kubernetes is now a well-known pattern and effectively reduces the maintenance effort, as many apps can be updated with one git commit.
This particular website was running on a virtual machine we internally call "staticweb", as it only hosts statically generated websites, without any dynamic content at all.
And that VM has been on our "to migrate" list for about 1.5 years. So it's time to move on...
How to run a website in kubernetes
There are so many different ways to run applications in kubernetes, and a lot depends on your environment and your workflows.
Today I want to show you a rather simple approach. As I mentioned, this website is built using ikiwiki and backed by git. It has actually used a Makefile for a long time and, since today, also a Dockerfile to generate its own container.
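I won't reproduce the actual Dockerfile here, but a minimal sketch could look roughly like this; the setup file name, the destdir and the base images are assumptions for illustration, not the real configuration:

    # Sketch only: build the site with ikiwiki, then serve the result with nginx
    FROM debian:bookworm-slim AS builder
    RUN apt-get update && \
        apt-get install -y --no-install-recommends ikiwiki && \
        rm -rf /var/lib/apt/lists/*
    COPY . /src
    WORKDIR /src
    # "ikiwiki.setup" and the destdir "/src/html" are assumed names
    RUN ikiwiki --setup ikiwiki.setup

    FROM nginx:alpine
    COPY --from=builder /src/html /usr/share/nginx/html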
Makefiles are not always nice, but they have one very nice property: if one command fails, the make run aborts. So we can essentially use it to:
- build the container
- upload the container
- update the argocd manifest to refer to the latest container
And each step is executed only if the previous one was successful.
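To give you an idea, a stripped-down Makefile implementing these three steps might look like the following; the registry URL, image name and manifest path are placeholders, not our actual setup:

    # Sketch only: build, push and update the argocd-managed manifest
    REGISTRY = harbor.example.com/staticweb
    TAG      = $(shell git rev-parse --short HEAD)

    all: build push deploy

    build:
    	docker build -t $(REGISTRY)/website:$(TAG) .

    push: build
    	docker push $(REGISTRY)/website:$(TAG)

    deploy: push
    	# point the deployment at the freshly pushed image and let argocd sync it
    	sed -i "s|image: .*|image: $(REGISTRY)/website:$(TAG)|" deploy/deployment.yaml
    	git commit -m "deploy website $(TAG)" deploy/deployment.yaml
    	git push

Because make stops at the first failing command, a broken ikiwiki build or a failed push never results in a half-updated manifest.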
Instead of using an overly fancy build pipeline that runs asynchronously in some amazing build cluster, I am just executing
make
on my notebook, and everything else is built, triggered and uploaded.
If you can read this, my build was successful and this website is now running in kubernetes.
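In case you are wondering what argocd actually applies in the end: it is not much more than a standard Deployment (plus a Service and Ingress in front). A simplified sketch, with made-up names and image, could look like this:

    # Sketch only: a minimal Deployment serving the static site via nginx
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: website
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: website
      template:
        metadata:
          labels:
            app: website
        spec:
          containers:
          - name: website
            image: harbor.example.com/staticweb/website:abc1234
            ports:
            - containerPort: 80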
Garbage collection & improvements
One of the issues with building images over and over again for a website is that there can be a lot of cruft. As we are using an internal, IPv6-only harbor instance to host our images, at some point the storage would run out if ... we did not specify a policy for automatic image deletion. In the case of this website, harbor checks once per week whether there are more than 5 images and, if so, removes the oldest ones.
One drawback of the current build is that the ikiwiki run takes about 2 minutes, the image push, depending on my connection, might take another 2 minutes, and argocd then waits maybe 5 minutes until it updates the app itself, resulting in roughly 10 minutes of delay between the start of a build and the new version being online.
As this website is not updated that frequently, this does not pose a real problem, but maybe you will read about some improvements here in the future.
That said - happy hacking and enjoy your day.