Differently complicated hosting – The Before

After a few months of getting comfortable with NixOS as my desktop operating system, I decided it was time to try it out for servers. But first I wanted to write about the setup I had before.

Disclaimer: This post is likely full of bad ideas – you probably shouldn't set up anything you care about like this. In my experience, the most valuable learning happens when I'm finding out what not to do, and I know there are some gems lurking in here.

How it started

At home, I was running an 8th-gen i5 NUC doing double duty as my home router and server, called homegw. It was running Ubuntu 22.04 with LXD. Router stuff happened on the base OS, with dnsmasq, AdGuard Home and a bunch of ufw rules. It also ran xl2tpd to bring up my Andrews and Arnold L2TP connection so I could have real IPv6 here, which my cable ISP doesn't provide.1
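For flavour, the L2TP side of that looks roughly like the sketch below. The section name, login and option file path are placeholders, not my real config – treat it as an illustration of xl2tpd's config format rather than a working AAISP setup:

```shell
# /etc/xl2tpd/xl2tpd.conf – placeholder values throughout
# [lac aaisp]
# lns = l2tp.aa.net.uk
# require chap = yes
# name = username@a.1
# pppoptfile = /etc/ppp/options.l2tp

# Start the daemon, then ask it to connect the "aaisp" LAC
systemctl start xl2tpd
echo "c aaisp" > /var/run/xl2tpd/l2tp-control
```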

There was an LXD VM for Home Assistant, and LXD containers for a backup server (restic and Arq over SSH), for Plex, and one running various tools for... ahem downloading Linux ISOs.

There was a WireGuard tunnel from this machine out to Mullvad, with a static route and NAT over this link. Mullvad operates a SOCKS5 proxy on an address reachable only through the tunnel, so I configured all my Linux ISO downloading tools to use this proxy.
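A rough sketch of that arrangement, assuming a wg-quick style config – the interface name, keys and the proxy address shown are placeholders, not Mullvad's real details:

```shell
# /etc/wireguard/mullvad.conf – placeholder keys and addresses
# [Interface]
# PrivateKey = <homegw private key>
# Address =
#
# [Peer]
# PublicKey = <Mullvad server public key>
# Endpoint = <mullvad-server>:51820
# AllowedIPs =    # hypothetical proxy address; routes only it via the tunnel

# Bring the tunnel up; wg-quick installs the AllowedIPs route
wg-quick up mullvad

# NAT LAN clients that reach the proxy address over the tunnel
iptables -t nat -A POSTROUTING -o mullvad -j MASQUERADE

# Then point the download tools at the proxy, e.g.:
# curl --proxy socks5h:// https://example.com/
```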

Another WireGuard tunnel connected to a dedicated server, hetty, running in Hetzner's Falkenstein data centre. It was nabbed from their server auction a little over a year ago. It was an 8-core 9th-gen i9 with 128GB of RAM and 2×1TB NVMe drives, for the bargain price of 54€ per month. In retrospect, way more computer than I needed, and more money than I should be spending.

This too ran Ubuntu 22.04, with LXD. In this case, a single LXD container called kubeservices running microk8s. I had Kubernetes set up with Flux, a GitOps tool that keeps your cluster in sync with a bunch of YAML defined in Git. Hosted here were:

Kubernetes PersistentVolumeClaims provided all the stateful storage, using the microk8s host path provisioner, all ultimately ending up in a single directory on the host: an LXD custom volume with daily snapshots, backed up to OneDrive2 and to homegw with restic.
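That backup job would look something along these lines – the rclone remote name, repository paths and password file are purely illustrative:

```shell
# Avoid interactive password prompts (hypothetical path)
export RESTIC_PASSWORD_FILE=/root/.restic-password

# Back the LXD custom volume's directory up to OneDrive,
# via restic's rclone backend
restic -r rclone:onedrive:backups/kubeservices backup /srv/kubeservices

# Same data to homegw over SSH, via restic's SFTP backend
restic -r sftp:homegw:/backups/kubeservices backup /srv/kubeservices
```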

I'd chosen to use LVM on top of a LUKS device made from an mdadm-initialised RAID1 mirror. ZFS was tempting, but whilst it was easy to set up with Ubuntu Desktop, it was less straightforward with Ubuntu Server. So I took the easy road. Or so I thought.
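The stack can be sketched roughly like this, bottom to top – device and volume group names are assumptions, not my actual layout:

```shell
# RAID1 mirror across the two NVMe drives
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# LUKS encryption on top of the mirror
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptmd0

# LVM on top of LUKS
pvcreate /dev/mapper/cryptmd0
vgcreate vg0 /dev/mapper/cryptmd0
```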

LXD creates a thin pool for its storage when using LVM, which eventually bit me when its space reserved for metadata filled up. Despite having plenty of free data space, I couldn't figure out how to allocate more space for metadata (I don't think you can). So I ended up ejecting the second SSD from the RAID1 mirror and adding it as a new disk to the volume group, expanding the thin pool enough to recover.
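For the record, the data-versus-metadata fill of a thin pool is visible with lvs, and the recovery dance went something like this. The VG name is an assumption, and LXDThinPool is LXD's default thin pool name, if memory serves:

```shell
# Show how full the thin pool's data and metadata are
lvs -a -o lv_name,data_percent,metadata_percent vg0

# Eject the second SSD from the mirror and clear its md metadata
mdadm /dev/md0 --fail /dev/nvme1n1 --remove /dev/nvme1n1
mdadm --zero-superblock /dev/nvme1n1

# Grow the volume group with the freed disk...
vgextend vg0 /dev/nvme1n1

# ...then extend the thin pool into the new space
lvextend -l +100%FREE vg0/LXDThinPool
```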

That was my first big “this was a mistake” moment: letting LXD allocate all the remaining volume group space for a single thin pool was a bad idea. Ultimately I think LVM was not a good choice for this kind of setup, and I wouldn't make it again.

Oh, and I'd made the same choice of LVM with LXD on homegw. Multiple times I filled the disk and had a bit of a nightmare recovering from it. One does not simply.

Another “this was a mistake” moment came from filling OneDrive with restic backups. It turned out the Postgres Operator by default creates a single initial backup using pgBackRest, and then captures the write-ahead log forever. Eventually my daily restic snapshots, even with regular pruning, filled the dedicated OneDrive account I'd set up for the job. At that point, restic simply could not recover: any pruning operation needed some amount of storage space in OneDrive that was impossible to provide. You can pay for extra storage, but only on the primary Microsoft 365 account, so I couldn't buy my way out of it. In the end I trashed the entire restic repository and started again.
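The pruning in question was along these lines – retention values and the repository path are illustrative:

```shell
# Drop old snapshots per the retention policy, then compact the repo.
# The prune step rewrites pack files, which needs temporary space
# in the repository itself – the part that became impossible here.
restic -r rclone:onedrive:backups forget --keep-daily 7 --keep-weekly 4 --prune
```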

A simple change to the PostgresCluster YAML switched PGO to taking daily backups, storing up to 3 in the cluster. My disk usage went down from 600GB to 150GB after 3 days.

      global:
        repo1-retention-full: "3"
        repo1-retention-full-type: count
      repos:
      - name: repo1
        schedules:
          full: "0 4 * * *"

Another consequence of filling the thin pool's metadata was that the disk switched to read-only. Upon recovery I had some corrupted Postgres DBs and needed to restore from backup. The Postgres Operator makes that possible by poking at different parts of the YAML. It's an incredible tool, but not knowing it inside out, I spent a lot of time feeling frustrated that I couldn't just jump on the server and fix things. Instead, everything is orchestrated, and it feels a bit like operating a light switch with a broom stick.
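The “poking at the YAML” goes roughly like this, as I remember Crunchy PGO's in-place restore flow – the cluster name and target time are made up:

```shell
# 1. Enable the restore section in the PostgresCluster spec
kubectl patch postgrescluster mycluster --type merge -p '
spec:
  backups:
    pgbackrest:
      restore:
        enabled: true
        repoName: repo1
        options:
        - --type=time
        - --target="2023-06-01 12:00:00+00"
'

# 2. Trigger the restore by bumping the annotation PGO watches
kubectl annotate postgrescluster mycluster --overwrite \
  postgres-operator.crunchydata.com/pgbackrest-restore="$(date)"
```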

For my use, I don't need anything like the capabilities PGO offers, and I should KISS.

The big takeaway

Understand your tools. Read the docs. Be curious about what can go wrong. Test those scenarios at your leisure, not in production.

Next time

All of that is gone. I'll be back to describe what replaced it.

1 Notably, I've found IPv4 performance is often better over this link too, with lower ping times to many sites with AA compared to Virgin Media, despite having to transit VM to get to AA first.

2 You can pick up Microsoft 365 Family for under £50 per year if you watch out for offers on the “gift card” version of it. This gives you 6×1TB OneDrive accounts, which is some of the cheapest cloud storage out there. Encrypting what you put there is a good idea, so tools like restic with rclone are your friends.

#hosting #kubernetes #ubuntu #linux