These are the manifests and IaC files for my homelab.
I've migrated the cluster from single-node k3s to a three-node talos setup. The talos config I've used to install talos onto the nodes is in ./talos-config.
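Applying those configs and bootstrapping the control plane roughly looks like this (the node IP and file name are placeholders; the real configs live in ./talos-config):

```sh
# Push a machine config to a fresh node, then bootstrap etcd on the
# first control-plane node. 10.0.0.10 is a placeholder IP.
talosctl apply-config --insecure --nodes 10.0.0.10 --file ./talos-config/controlplane.yaml
talosctl --talosconfig ./talosconfig bootstrap --nodes 10.0.0.10 --endpoints 10.0.0.10
```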
Important caveat: Because I wanted a dual-stack network setup, I had to disable the default CNI, flannel. After applying the machine configs and bootstrapping the kubernetes cluster, but BEFORE bootstrapping flux, cilium needs to be installed manually, preferably via helm. See ./infrastructure/cilium/README.md for details.
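For reference, a minimal sketch of that manual install, with illustrative values only (the values I actually use are documented in the linked README):

```sh
# Illustrative only -- see ./infrastructure/cilium/README.md for the
# exact values. ipv6.enabled covers the dual-stack requirement.
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set ipam.mode=kubernetes \
  --set ipv6.enabled=true
```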
Apps in my cluster were migrated with a lift-and-shift approach from systemd processes provisioned via ansible.
Jellyfin is an open-source media server and, thus, a replacement for the (arguably) more popular Plex.
Paperless is an open-source document management system.
Pihole is a network service for blocking ads. It also comes equipped with a simple DNS server - and that's what I mostly use it for.
Scrapy is my favorite Python framework for web crawling tasks. It's mature, has sensible modules, and offers a lot of quality-of-life features. Scrapy's main concept is the Spider class that contains the crawl logic. One can use Scrapy's CLI to start such a Spider.
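For example (project and spider names are placeholders):

```sh
# Scaffold a project plus a Spider subclass, then run the spider.
scrapy startproject myproject
cd myproject
scrapy genspider example example.com
scrapy crawl example
```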
For deploying spiders, there's a daemon service with a JSON API called scrapyd. It can schedule and spawn crawling processes based on spiders.
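Scheduling a crawl then boils down to one API call (assuming scrapyd's default port 6800, the placeholder names from above, and a project already deployed to scrapyd):

```sh
# schedule.json spawns a crawl process for the given project/spider.
curl http://localhost:6800/schedule.json -d project=myproject -d spider=example
```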
It's tempting to improve scrapyd to start Kubernetes jobs instead and, indeed, there's some effort in implementing a kubernetes-native scrapyd version: scrapyd-k8s. I'm not convinced of its maturity yet, but I'll definitely spend some time trying this one out.
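Conceptually, each scheduled spider then becomes a one-off Kubernetes Job, roughly like this sketch (the image name is made up, and scrapyd-k8s's actual mechanics differ):

```sh
# The rough idea: one crawl, one Job. The image is assumed to contain
# the scrapy project; the registry path is a placeholder.
kubectl create job example-crawl \
  --image=registry.example.com/myproject:latest \
  -- scrapy crawl example
```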
Finally, there's also scrapydweb, which provides a nice frontend for scrapyd.