Creating a new Ubuntu Server

What steps are required to setup a new Ubuntu Server at home? In this post, I document my requirements for a new Kubernetes worker node and the packages and configurations I used to get it online and operational.

Alec Di Vito

Running Raspberry Pis as your homelab kinda sucks. They are not very powerful, not expandable, and if you try to run heavier workloads (like kube-prometheus 🤦‍♂️) you end up in a loop where your servers are constantly dying (please don't ask). Last year I spent some money building a new Windows gaming computer; however, it hasn't gotten the use I was expecting. Why not re-purpose it as another computer in the Kubernetes cluster?

This blog post follows my journey of converting my Windows computer into a Linux Ubuntu Server. It also includes the steps I took to join it to my k3s cluster.

My hope is that by putting a powerful home computer online, I'll actually use it more. Things like AI, coding, and gaming will be more accessible when I can reach them over the internet instead of only being able to use the computer in one spot.

To be real, I just want to make my iPad have the ability to code in a web browser and play games! Is that too much to ask? I guess I'll have the answer in the future. Anyways, on with the migration!

Loading Ubuntu

The first step in getting the computer set up was to load a USB drive with a copy of Ubuntu Server. Sadly I didn't have one, but I discovered that SD cards work in the absence of one. Loading the Ubuntu image onto the SD card was straightforward, especially on Windows, using the tool balenaEtcher.

💡
I'm picking Ubuntu because it's what I know

I have an i5-13600KF in my computer. This CPU is overclockable (the "K") but has no integrated GPU (the "F"). The sad part about this CPU is that it was recently reported that some workloads can damage it into an unrepairable state. There's a BIOS update available to (hopefully) fix the issue. I'd never done a BIOS update before and was concerned.

But it turns out it's easy if you know your motherboard. I used another spare SD card I had lying around, created a 16GB FAT32 partition, and copied the updated firmware onto it. The BIOS was able to pick it up and do the update with no issues! I was very happy with the experience.

After the BIOS update, I set the Ubuntu image SD card as my bootable disk and chose a minimal install for my instance of Ubuntu Server.

Hardening a new Linux System

List of commands to run

With a new install of Ubuntu, it's best to spend a bit more time hardening the system to hopefully protect it from future attackers. The first thing to do with any new install is to get it up to date with the latest packages. It's also the time to install some extra packages that we'll need during the install process.

sudo apt update && sudo apt upgrade -y
sudo apt install -y vim

After that's completed, it's time to go through a list of things to check and configure. By no means is this an exhaustive list, but I think it's the bare minimum for configuring a server to protect it.

Don't be root

First, validate that there is only one user with root access (UID 0), and that user is root. The default Ubuntu install is pretty good in this regard. The user I was signed in as did not show up, so I felt it was safe to move on.

awk -F: '($3=="0"){print}' /etc/passwd

# Expected output
root:x:0:0:root:/root:/bin/bash
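If you'd rather have this check flag problems than eyeball the output, it can be wrapped in a tiny audit. This is just a sketch against a made-up sample file (the backdoor user is fabricated for illustration); on a real server you'd point it at /etc/passwd.

```shell
# Sketch: list any UID-0 account that isn't root.
# The sample passwd file below is fabricated for illustration.
cat > /tmp/sample_passwd <<'EOF'
root:x:0:0:root:/root:/bin/bash
alec:x:1000:1000:Alec:/home/alec:/bin/bash
backdoor:x:0:0::/home/backdoor:/bin/bash
EOF

extra_root_users() {
    # Print the name of every account whose UID is 0, except root itself
    awk -F: '($3 == "0" && $1 != "root") {print $1}' "$1"
}

extra_root_users /tmp/sample_passwd   # prints: backdoor
```

An empty result here is what you want to see on a healthy system.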

Protecting SSH

The best way to protect against attackers is to make sure they don't get access to the system in the first place, so the first thing to protect is signing in. The first time you SSH into a fresh install of Ubuntu, you'll be able to log in with the password you created during setup. We can skip the password entirely by adding our SSH key to the server.

# Copy your computer keys over to the server in question
ssh-copy-id <user>@<ip-address>

After that, we can give sshd some new settings that make it harder to access the system. Configuring sshd is done by modifying the file at /etc/ssh/sshd_config. More modern releases, such as Ubuntu 24.04, also let you drop custom configuration files into the directory /etc/ssh/sshd_config.d/. You'll know this is supported if the original sshd_config file has an Include /etc/ssh/sshd_config.d/*.conf directive in it.

Create a new file in that directory and add the following:

# Don't let the root user login
PermitRootLogin no
# Only allow our user to SSH in
AllowUsers <your-user-name>
# Don't allow empty passwords
PermitEmptyPasswords no
# Don't allow Password Authentication have SSH access
PasswordAuthentication no
# Don't allow challenges, only SSH keys allowed
ChallengeResponseAuthentication no
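Before reloading sshd, it's worth confirming the drop-in actually contains what you think it does — a typo in a directive name can silently leave a hole. Below is a sketch of such a check; the /tmp path and the file name 99-hardening.conf are stand-ins I made up for illustration. On a real server, running `sudo sshd -t` will also validate the full configuration before you reload.

```shell
# Sketch: confirm a hardening drop-in contains the expected directives.
# /tmp is used for illustration; a real drop-in would live at something like
# /etc/ssh/sshd_config.d/99-hardening.conf (hypothetical name).
CONF=/tmp/99-hardening.conf
cat > "$CONF" <<'EOF'
PermitRootLogin no
PermitEmptyPasswords no
PasswordAuthentication no
EOF

missing=0
for directive in "PermitRootLogin no" "PermitEmptyPasswords no" "PasswordAuthentication no"; do
    # -x requires the whole line to match, so misspellings are caught
    grep -qx "$directive" "$CONF" || { echo "missing: $directive"; missing=1; }
done
echo "missing=$missing"
```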

With that configured, we can reload sshd, after which the server is only accessible using our SSH key.

sudo systemctl reload ssh

Thats all for hardening the OS. I'm sure I'll add more here as I get my friends to read this article 🫡

Uncomplicated Firewall (ufw)

Firewalls are a great way to control network traffic, scoping exactly what is allowed to reach your system. ufw (Uncomplicated Firewall) is a package that manages Linux iptables rules to control the type of traffic allowed into the computer.

Understand that ufw isn't always recommended though

The K3s website says not to use ufw; I still do it for the little bit of extra protection it may add.

sudo apt install ufw

For now we only strictly need SSH access to the server; everything else gets blocked. The block below also opens the ports we'll need for the systems we set up later in this post.

sudo ufw allow 22/tcp                   # SSH
sudo ufw allow 6443/tcp                 # Kube-API server
sudo ufw allow 10250/tcp                # Kubelet metrics
sudo ufw allow from 10.42.0.0/16 to any # Kubernetes pod network
sudo ufw allow from 10.43.0.0/16 to any # Kubernetes service network
sudo ufw allow 51820/udp                # This is for WireGuard!
sudo ufw enable                         # Start the firewall
sudo ufw status                         # List the rules
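Since this rule list tends to grow, one approach (my own sketch, not anything from the K3s or ufw docs) is to keep the rules in a single list so they can be reviewed and re-applied in one go. With DRY_RUN=1 the loop only prints what it would run:

```shell
# Sketch: keep firewall rules in one list. DRY_RUN=1 prints the commands
# instead of executing them, which is handy for review.
DRY_RUN=1
rules=(
    "allow 22/tcp"
    "allow 6443/tcp"
    "allow 10250/tcp"
    "allow from 10.42.0.0/16 to any"
    "allow from 10.43.0.0/16 to any"
    "allow 51820/udp"
)
for r in "${rules[@]}"; do
    if [ "$DRY_RUN" = "1" ]; then
        echo "ufw $r"
    else
        sudo ufw $r
    fi
done
```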

This is by no means an exhaustive list. You will need to enable more rules to reach VPN clients if you use a tool like WireGuard. You have your own workloads and requirements, so understand which ports to keep open. Some things, like a Kubernetes DaemonSet, may require you to open up more ports.

Preparing Kubernetes

Installing Software to help the node succeed

Adding a node to my Kubernetes cluster requires a handful of steps. A new computer needs to connect to my WireGuard VPN, connect to the NAS, and finally install supporting software packages before becoming a node.

VPN setup with Wireguard

My cluster is connected using the WireGuard VPN. Although this hurts network performance, it means computers at home and in the cloud can be a part of the same network, which makes communication and setup much simpler.

sudo apt install wireguard -y

My setup builds a mesh WireGuard network, meaning every computer has a direct connection to every other computer on the network. The new computer needs to generate a new private and public key, and the public key then needs to be distributed to all of the existing nodes.
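To give a sense of what such a config looks like, here is the general shape of a wg0.conf for the new node. Every value below is a placeholder I've made up for illustration; the real file has one [Peer] section for each existing node in the mesh.

```ini
# Hypothetical wg0.conf for the new node -- all keys and IPs are placeholders.
[Interface]
Address = 10.10.0.5/24
PrivateKey = <new-node-private-key>
ListenPort = 51820

# One [Peer] section per existing node in the mesh
[Peer]
PublicKey = <peer-public-key>
Endpoint = 192.168.0.3:51820
AllowedIPs = 10.10.0.2/32
```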

The tool I use to help me create the mesh network is called wg-meshconf. I run it locally and it creates all of the configs I need, which I then distribute to all of my computers. I have a CSV file with all of the currently existing nodes. Adding one more node was as easy as adding a new row to the CSV file and running wg-meshconf.

# NOTE: I've already added my new host to the CSV file
wg-meshconf init
# This creates the new config
wg-meshconf genconfig

These commands create a new private key for my new server, create a config file for it, and update the existing configs, which I can use to share the new client. I would have liked to say that I just copy the updated configs to my servers, but I don't have access to those files anymore ☠️. Instead, I'll run the WireGuard command line tool (wg) to add the new peer.

# I ran this command on each remote computer
PUBLIC_KEY="YourPublicKeyhere"
IP_RANGE="10.10.0.0/24"
OTHER_IPS="10.42.0.0/16,10.43.0.0/16" # Kubernetes IPs
UBUNTU_SERVER="192.168.0.2"
sudo wg set wg0 peer $PUBLIC_KEY allowed-ips "$IP_RANGE,$OTHER_IPS" endpoint $UBUNTU_SERVER:51820
sudo ip -4 route add $IP_RANGE dev wg0

🧠
That last command (sudo ip ...) is extremely important. If you don't include it, you'll get a Destination Net Unreachable when you ping this node from another computer over the WireGuard network.

I also need to copy the new config over to my new node.

scp conf.local <user>@<ip-address>:/etc/wireguard/wg0.conf

Then, for protection, and as recommended by the WireGuard website, let's make /etc/wireguard accessible only by root!

sudo chown -R root:root /etc/wireguard
sudo chmod 700 /etc/wireguard
sudo chmod 600 /etc/wireguard/*.conf

Then we can enable and start the WireGuard interface, and we should see it running successfully.

sudo systemctl enable --now wg-quick@wg0.service
sudo systemctl status wg-quick@wg0.service

At this point, it's best to confirm that two nodes can communicate with each other using ping.

Configure NAS with NFS (if available)

My Kubernetes cluster has three tiers of storage: on-device storage, distributed block storage across all compute nodes, and network-attached storage (NAS). To allow our new Linux server to connect to the NAS, we need to configure it. First, install:

sudo apt install nfs-common

nfs-common provides the client-side utilities for mounting NFS shares, which is all we need for our use case. Once installed, we can mount the drive.

LOCAL_PATH="/nfs/storage"
VOLUME_PATH="/example"
HOST_IP="192.168.0.1"

# Create and mount a local folder
sudo mkdir -p $LOCAL_PATH
sudo mount -t nfs4 $HOST_IP:$VOLUME_PATH $LOCAL_PATH

# Validate that the mount exists
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
...
192.168.0.1:/example  1.0T  0.5T  512G  50% /nfs/storage

As an exercise, we can test reading and writing to the mount.

$ cd $LOCAL_PATH
$ echo "Hello world" > README.md
$ cat README.md
Hello world

After it's been mounted for the current session, we also need to make sure it comes back after every restart (because we will restart the computers after updates, right??? Guys? Right???).

sudo vim /etc/fstab

# And in the file, append the following line (substituting real values for the variables)
$HOST_IP:$VOLUME_PATH $LOCAL_PATH nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
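To keep the fstab entry consistent with the variables used for the mount command, the line can be composed in the shell first and inspected before appending it. This is just a sketch using the same placeholder values as above:

```shell
# Sketch: build the fstab line from the same variables as the mount command,
# so the values can't drift between the two.
HOST_IP="192.168.0.1"
VOLUME_PATH="/example"
LOCAL_PATH="/nfs/storage"

FSTAB_LINE="$HOST_IP:$VOLUME_PATH $LOCAL_PATH nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0"
echo "$FSTAB_LINE"

# On the real server, append it and re-test all mounts:
#   echo "$FSTAB_LINE" | sudo tee -a /etc/fstab
#   sudo mount -a
```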

Storage partitions

Depending on the size of your storage, you may want to create different partitions on your main storage device. For me, this is a want for my larger computers with things like NVMe drives. My goal is to keep half for Kubernetes and the other half for large static files, like video games or machine learning models.

Logical Volume Management (LVM)

This is an option you can choose when creating a new Ubuntu Server. It makes configuring storage easier (and now I can say that from experience). Take everything I'm about to say with a grain of salt, as I've only just learnt about LVM and set it up myself.

How to manage logical volumes
The Ubuntu Server installer has the ability to set up and install to LVM partitions, and this is the supported way of doing so.

Reading the documentation may make this section easier to understand.

There are three layers that make up this system:

  1. The Physical Volume (pv) (pvscan, pvdisplay, pvcreate, etc.)
  2. The Volume Group (vg) (vgscan, vgdisplay, vgcreate, etc.)
  3. The Logical Volume (lv) (lvscan, lvdisplay, lvcreate, etc.)
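To make the three layers concrete, here is the flow for a hypothetical brand-new disk (/dev/nvme1n1 and data-vg are made-up names). The commands are only printed here, since they are destructive; on real hardware each would run under sudo.

```shell
# Sketch: one command per LVM layer for a hypothetical new disk (/dev/nvme1n1).
# Printed only -- these commands are destructive on real hardware.
DISK=/dev/nvme1n1
lvm_plan() {
    echo "pvcreate $DISK"                           # 1. register the disk as a Physical Volume
    echo "vgcreate data-vg $DISK"                   # 2. create a Volume Group on it
    echo "lvcreate --name bulk --size 100G data-vg" # 3. carve out a Logical Volume
    echo "mkfs.ext4 /dev/data-vg/bulk"              # then format it like any normal partition
}
lvm_plan
```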

The Physical Volume is your actual storage device. If you have many hard drives, each should have one very large partition of type LVM. The system will handle spreading data around on the physical disks.

The next layer is the Volume Group, which sits on top of Physical Volumes. This is where you can group multiple physical drives together into what appears to the user as one "logical partition".

Now that you have one "logical partition" (made up of one Volume Group), you can start cutting it into Logical Volumes. You can think of a Logical Volume like a normal disk partition. For my use case, I created one more partition for large files that I won't mind just sitting on this computer.

sudo lvcreate --name lg-files --size 300g ubuntu-vg

The main logical volume is only 100G; I also want to increase it to 300G. This can be done by resizing the Logical Volume.

sudo lvresize -L +200G --resizefs ubuntu-vg/ubuntu-lv

# And validate
$ sudo lvscan
  ACTIVE            '/dev/ubuntu-vg/ubuntu-lv' [300.00 GiB] inherit
  ACTIVE            '/dev/ubuntu-vg/lg-files' [300.00 GiB] inherit

Nvidia Drivers

We want to be able to use our shiny new graphics card, right? I'm not gonna pretend to know what I need to do here, so I just followed the install instructions Ubuntu provides.

NVIDIA drivers installation
This page shows how to install the NVIDIA drivers from the command line, using either the ubuntu-drivers tool (recommended), or APT.

And those instructions told me to do the following:

# List available drivers
sudo ubuntu-drivers list
# Install the recommended drivers for your computer
sudo ubuntu-drivers install --gpgpu

Kubernetes Node

With the computer hardened and software configured, it's time to join the node to the cluster. Because I'm using Raspberry Pis, I've decided to use K3s, as it's a bit less resource intensive than vanilla Kubernetes. It has a lovely one-line command that makes setting up a new node easy, which is great for homelabbers (like yours truly).

Add in some --node-label flags so that we can pick this node over the Raspberry Pis for our AI workloads, and we're ready to start deploying new workloads.

curl -sfL https://get.k3s.io | \
    INSTALL_K3S_EXEC="agent" sh -s - \
    --flannel-iface $NETWORK_INTERFACE \
    --server https://$MASTER_IP:6443 \
    --node-label purpose=heavy-workload \
    --node-label network=cloud \
    --node-label location=home \
    --node-label node-number=2000 \
    --node-label environment=prod \
    --node-label computer=amd \
    --node-label gpu=nvidia \
    --node-label attached-storage=nfs \
    --node-label node.longhorn.io/create-default-disk=true \
    --token "$TOKEN"

➜  ~ k get nodes
NAME                 STATUS   ROLES                  AGE    VERSION
n-1                  Ready    control-plane,master   153d   v1.30.3+k3s1
n-2                  Ready    <none>                 153d   v1.30.3+k3s1
n-3                  Ready    <none>                 153d   v1.30.3+k3s1
n-4                  Ready    <none>                 153d   v1.30.3+k3s1
n-5                  Ready    <none>                 10m    v1.30.3+k3s1
n-6                  Ready    <none>                 153d   v1.30.3+k3s1
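To confirm the labels actually landed, the node list can be filtered by label. On the cluster itself, `kubectl get nodes -l purpose=heavy-workload` does this directly; as a sketch, the same filtering idea can be exercised against a saved sample of `kubectl get nodes --show-labels` output (the sample lines below are fabricated for illustration).

```shell
# Sketch: filter nodes by label from fabricated `kubectl get nodes
# --show-labels`-style output. On the cluster, prefer:
#   kubectl get nodes -l purpose=heavy-workload
cat > /tmp/nodes.txt <<'EOF'
n-1 Ready control-plane,master 153d v1.30.3+k3s1 node-number=1,purpose=general
n-5 Ready <none> 10m v1.30.3+k3s1 gpu=nvidia,purpose=heavy-workload
EOF

# The last column holds the labels; print node names whose labels match
awk '$NF ~ /purpose=heavy-workload/ {print $1}' /tmp/nodes.txt   # prints: n-5
```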

Future Steps to Consider

Things not covered in this article

This post covered getting a new computer operational and joined to a Kubernetes cluster. Although writing automation would be a great idea, I'm not getting paid for this, so I'd rather have quick solutions I can finish in a day that get me operational. And hey, maybe this blog post can serve as a runbook so it takes less time for me to run all these commands in the future. If I were setting up computers day in and day out, I would consider writing automation, but currently the plan is to get three computers running over the next three years and hopefully never need to buy a computer again.

Future Additions

There are some things I've skipped over in this post, mostly because I'm trying to move fast to get my computer set up and useful to me. I only needed something stronger than Raspberry Pis for new workloads that aren't simple web servers. Topics I hope to cover in the future are:

  • Node Scanning (ChatGPT recommends lynis or nessus)
  • Backup and restore operations (Hopefully to my NAS)
  • NAS backup and restore operations (Hopefully to the cloud)

Notes

While writing this article and configuring the node, ChatGPT was a great help, though I was surprised that it was still sometimes faster to google things and read the information myself. Maybe my prompts aren't very good.

It did a great job explaining LVM given the context. Using the Ubuntu documentation along with ChatGPT's output made it quick and easy to get the partitions set up.

I'm excited to use my node for new and exciting workloads.