Setting up an OVH “kimsufi” server with encrypted filesystem running Debian 11.
In this post I’ll give an overview of my findings on how to install Debian 11 on a “Kimsufi” OVH dedicated server.
One crucial thing I’ve found out is that the hardware you get isn’t very well described before you buy, especially the motherboard configuration and capability, so your mileage may vary.
What and why
After running my services on a “scaleway dedibox” for many years, I’m starting to feel the need for something a little more powerful. The dedibox was a great deal at 8.99€/month (before taxes), with a 1T drive, but the CPU was always on the slower side, and 4G of memory is starting to feel a little cramped.
OVH’s offer is a little confusing, with a ton of options in terms of CPU, memory, and disk, but I found a few around 20€, and this one caught my attention. I just waited for availability at a hosting location in France and bought it.
Please note you may not be able to get the same price if you are a US or Canada resident (or at least if you can’t provide a French address).
At the time of purchase, the price in euros was 17.99€.
The competition:
- Scaleway dedibox still has interesting offers, and I did hesitate. I find it a little more expensive for what I want, but it has 1Gbps bandwidth (OVH has 100Mbps) and a /48 for ipv6 (I found out that OVH gives you a /128 😱😱😱).
- Ikoula also seems to have very good offers. The one that I was interested in was sold out though. They offer 250Mbps network, which seems like an ok deal.
Why an encrypted root filesystem?
Obviously, this doesn’t protect me from hackers, it’s not really a security thing. And the /boot
partition won’t be encrypted anyway (I know there are ways to do it, I’m not sure there are ways to do it without a password being typed physically on the keyboard, and I’m not that interested).
It’s more about what happens if a disk fails and gets thrown away, or about decommissioning the server — at that point I won’t have to worry about anything.
Guides
Well, it looks like some people have done exactly this already, so why don’t I just follow their guidance?
- This script on GitHub looks like it’s almost exactly what I’m trying to do. The only difference I could see is that they have 4 disks and I have 3. Pretty easy.
- This post also looks super similar, and even mentions the “kimsufi” brand; it doesn’t cover RAID 5 though, which is what I had in mind.
Findings and bad surprises
Well, I tried a bunch of things. The “installation” part (debootstrap, the chrooted part, etc.) goes well, but when rebooting, the server never even responded to ping.
A few findings
- That specific server doesn’t support UEFI boot. I researched the motherboard (you can find the model using dmidecode), and it’s UEFI capable, it’s just not enabled;
- That range of servers doesn’t offer KVM. I couldn’t even find a paid option (I would have paid a few bucks just to see what’s going on on the screen, and/or be able to enable UEFI).
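For what it’s worth, this is how I’d query the board from the rescue system; dmidecode needs root, and the exact strings returned obviously depend on your hardware:

```shell
# Print the motherboard vendor and model (run as root)
dmidecode -s baseboard-manufacturer
dmidecode -s baseboard-product-name
```

The `-s` keywords above are standard dmidecode string selectors; `dmidecode -s` with no argument lists all of them.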
This makes me think that there is also a chance of a fairly wide discrepancy from server to server in the same range - these may be recycled hardware from higher ranges, where customers used to have KVM, hence access to BIOS options.
After finding that UEFI wasn’t supported, I switched to grub-pc
and tried again, to no avail.
I have two hypotheses about the server not booting on the initramfs.
- There’s something wrong with using GPT-partitioned disks. I’m not saying GPT isn’t supported, it’s quite possible I was doing something wrong, but I never got it to boot from a GPT partition;
- It could be that I didn’t set the DEVICE in the initramfs config correctly, but I’m pretty sure I tested with eth0 and eno1. What I ended up doing was to add net.ifnames=0 biosdevname=0 to the default grub config file in /etc/default/grub.
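Concretely, the relevant line in /etc/default/grub looks like this:

```shell
# /etc/default/grub: force legacy interface names (eth0 instead of eno1)
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
```

Then run update-grub (inside the chroot, in this scenario) so the change lands in /boot/grub/grub.cfg.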
LVM or no LVM?
Do I need LVM? On the one hand, it’s flexible, and it would allow me to add a separate /var partition later.
On the other hand, my current server doesn’t have this issue; /var never got full. But did I just get lucky?
I’m deciding I will use LVM, just in case I want to separate /var or something similar later.
Step by step setup
This is how I’ve decided to use my disks: sda will have a /boot partition, while sdb and sdc will have some swap.
I did a tiny little bit of research, apparently doing swap on RAID doesn’t provide any advantage over doing swap on 2 devices with the same priority.
With all my trials and errors, I ended up messing it up a little bit, and I only have 2x512MB of swap… let’s hope for the best with so little swap (but also so much memory).
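Equal-priority swap is just a matter of giving both partitions the same pri= value in /etc/fstab; with equal priorities the kernel stripes pages across the devices, which is why RAID buys you nothing here. A sketch:

```shell
# /etc/fstab: two swap partitions with the same priority
/dev/sdb1  none  swap  sw,pri=1  0  0
/dev/sdc1  none  swap  sw,pri=1  0  0
```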
So, for the disk setup, just partition as desired, I’ll assume you know how to do that.
- /dev/sda1 is boot
- /dev/sdb1 and /dev/sdc1 are swap
- /dev/sda2, /dev/sdb2, and /dev/sdc2 are equal-size partitions that we’ll use for RAID 5
From the OVH manager, boot on a rescue image (at the time of this post, “rescue-pro”), and log in (as root).
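To give an idea, building that layout from the rescue system could look like the following. This is a hedged sketch, not what I ran verbatim: the volume group and mapper names match the ones that show up later in this post (vg0, cryptroot), but sizes and passphrase handling are up to you.

```shell
# Create the RAID 5 array over the three large partitions
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2

# Encrypt the array and open it
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptroot

# LVM on top of the encrypted device
pvcreate /dev/mapper/cryptroot
vgcreate vg0 /dev/mapper/cryptroot
lvcreate -l 100%FREE -n root vg0

# Filesystems and swap
mkfs.ext4 /dev/sda1        # /boot
mkfs.ext4 /dev/vg0/root    # /
mkswap /dev/sdb1
mkswap /dev/sdc1
```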
At this stage you have a sort-of working Debian base in /mnt (now /).
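For reference, getting that base into /mnt boils down to something like this (assuming the root LV is vg0/root; the mirror is the standard Debian one):

```shell
# Mount the root LV and install a minimal Debian 11 (bullseye)
mount /dev/vg0/root /mnt
debootstrap bullseye /mnt http://deb.debian.org/debian

# Mount /boot, bind the pseudo-filesystems, and enter the chroot
mount /dev/sda1 /mnt/boot
for fs in dev proc sys; do mount --rbind /$fs /mnt/$fs; done
chroot /mnt /bin/bash
```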
What’s left is mostly the initramfs, the dropbear install for that, and installing grub.
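Inside the chroot, those last steps look roughly like this. The paths and package name are the Debian 11 ones (in particular, dropbear-initramfs reads /etc/dropbear-initramfs/authorized_keys on bullseye), and my_key.pub is a placeholder for your own SSH public key:

```shell
# Tell the initramfs how to unlock the root device at boot
echo "cryptroot /dev/md0 none luks" >> /etc/crypttab

# Dropbear lets you SSH into the initramfs to type the passphrase remotely
apt install dropbear-initramfs
cat my_key.pub >> /etc/dropbear-initramfs/authorized_keys

# Rebuild the initramfs and install legacy (BIOS) grub on every disk
update-initramfs -u -k all
apt install grub-pc
for d in /dev/sda /dev/sdb /dev/sdc; do grub-install $d; done
```

Installing grub on all three disks means the machine can still find a boot loader if sda dies, even though /boot itself only lives on sda.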
At this stage, you are ready to reboot.
Remember to deactivate the rescue netboot in your OVH console.
First boot
On the first boot, LVM complained that the physical volume had an old header:

WARNING: PV /dev/mapper/cryptroot in VG vg0 is using an old PV header, modify the VG to update.

# Update the old LVM header
vgck --updatemetadata vg0
You’re all set, and you can start really setting up your server.
Discussion
RAID1 vs. RAID5
When using the OVH web-based installer, you have to set the root partition to be RAID1 (or no RAID), but it cannot be RAID5. I don’t know where this restriction is coming from, and as far as I can tell, having RAID 5 for the root partition works just fine.
RAID5 and –assume-clean
After you issue the mdadm command to create the RAID array, you may notice a very high load on the server for a few hours: it’s spending cycles computing the parity data. One of the original articles used the --assume-clean flag, which prevents that. From what I’ve gathered, it’s not a great idea, and since this only happens once, I don’t think it’s a big deal - it just makes the debootstrap very slow, hence frustrating.
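If you want to watch the initial sync, or tame it instead of disabling it, both knobs live in /proc:

```shell
# Show the resync progress of the array
cat /proc/mdstat

# Optionally cap the resync rate (in KB/s) so the machine stays usable
echo 50000 > /proc/sys/dev/raid/speed_limit_max
```

Capping the rate stretches the sync out but keeps debootstrap responsive, which may be a nicer trade-off than --assume-clean.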
Network setup through legacy mode
I have chosen to use /etc/network/interfaces instead of the newer systemd method, as well as to use the old interface names. I know these will end up being deprecated at some point, but this is what I know, and considering I don’t have a KVM for this server, I would rather minimize the number of potential issues.
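As an illustration, a minimal /etc/network/interfaces for this setup; the addresses below are documentation placeholders, so substitute the IP and gateway OVH assigned you:

```shell
# /etc/network/interfaces: legacy static configuration
auto eth0
iface eth0 inet static
    address 203.0.113.10/24
    gateway 203.0.113.254
```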