DevOps

Navigating Firmware Updates on Legacy Dell PowerEdge R410 Servers: A Homelab Odyssey

In the realm of homelabbing, managing and maintaining legacy hardware can often turn into an adventurous challenge. Such was my recent experience with a Dell PowerEdge R410 server. The task at hand was updating its firmware, which quickly became an odyssey, especially as Dell had discontinued support and removed the update packages for the server. Here is a detailed account of the journey.

The Challenge: iDRAC, TLS Protocols, and Browser Compatibility

Our adventure begins with iDRAC (Integrated Dell Remote Access Controller), an integral tool for remote server management. The stumbling block: the iDRAC firmware on the R410 does not support modern Transport Layer Security (TLS) protocols. The workaround: using Mozilla Firefox with TLS 1.0 re-enabled.

Despite modern browsers shunning TLS 1.0 due to known security vulnerabilities, the legacy iDRAC firmware necessitated its use.
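
For reference, in Firefox this means opening about:config and lowering the minimum accepted TLS version (the default value of 3 corresponds to TLS 1.2, while 1 corresponds to TLS 1.0):

security.tls.version.min = 1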

The Roadblock: “Non-Dell Authorized” Updates

After navigating through the quagmire of outdated security protocols, I hit a roadblock during the firmware update installation. Each attempt was met with the message, “The updates you are trying to install are not Dell authorized,” due to expired Dell certificates. Dell’s decision not to renew these certificates for legacy systems meant the only path forward was patient experimentation, made arduous due to the server’s slow boot times.

The Silver Lining: Self-Updating iDRAC and Lifecycle Controller

As I ventured deeper into the server’s firmware labyrinth, a pivotal revelation surfaced: the iDRAC could update itself and the Lifecycle Controller. These self-updates included new certificates, indispensable for the proper functioning of the Unified Server Configurator (USC), thereby laying the groundwork for the remaining firmware updates.

Achieving this required a detailed sequence of steps:

  1. Download the iDRAC6 Image Executable: The first order of business was to download the iDRAC6 image executable. This executable, named “iDRAC6_1.92_A00_FW_IMG.exe”, is crucial for the update and could be downloaded from the official Dell support website.
  2. Extract the iDRAC6 Executable: After obtaining the executable, the next step was to extract its contents. Operating within a Linux environment, I used the ‘unzip’ command, which produced a file named “firmimg.d6” (see the sketch after this list). The extraction process may differ depending on your operating system.
  3. Log into the iDRAC Firmware Update Tool: Armed with the “firmimg.d6” file, I logged into the iDRAC Firmware Update tool. This web-based interface manages iDRAC settings and updates.
  4. Initiate the iDRAC Update: Inside the Firmware Update tool, I navigated to the update section and uploaded the “firmimg.d6” file. Confirming the update initiated the process. This stage required patience as the iDRAC system updated itself and rebooted.
  5. Download and Unpack the “BDF_1.5.5_BIN-12.usc” file: While the iDRAC was updating, I concurrently downloaded another key file, “BDF_1.5.5_BIN-12.usc”, from the Dell support website. After downloading, I unpacked this file in preparation for the subsequent update.
  6. Run the Firmware Update: Once the iDRAC update completed, I returned to the Firmware Update tool and input the unpacked .usc file. This action updated the Lifecycle Controller and installed the new certificates, rendering the USC functional once more.
  7. Reboot the Server: The final step was to reboot the Dell PowerEdge R410 server. This reboot enabled the server to integrate the updates properly, and subsequently, I gained access to a fully functional USC.
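
For reference, steps 1 and 2 look roughly like this on a Linux machine (the filename is the one listed on Dell’s download page; adjust if yours differs):

unzip iDRAC6_1.92_A00_FW_IMG.exe
ls firmimg.d6

Dell packages the firmware image as a zip-compatible executable, which is why a plain unzip works.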

The realisation that iDRAC could self-update and update the Lifecycle Controller marked a vital breakthrough amidst the challenges. This experience reaffirmed that in the world of homelabbing, there’s always a way to overcome seemingly insurmountable obstacles.

The Home Stretch: Updating the Remaining Firmware

With the USC operational again, it was time to update the remaining firmware. Since Dell had removed the update packages from its repositories, I turned to updateyodell.net, a reliable third-party firmware repository for Dell’s legacy systems. Here’s the streamlined process:

  1. Boot the Dell PowerEdge R410 server and press F10 to access the Unified Server Configurator (USC).
  2. Within the USC, navigate to Platform Update.
  3. Choose ‘FTP Server’ as your repository location.
  4. When asked for the FTP Server details, enter the ones listed on updateyodell.net.
  5. Proceed with the connection. The USC will now connect to the third-party repository and identify all applicable updates for your Dell R410 server.
  6. Once the updates are listed, select all that apply, and initiate the download and installation process.
  7. Reboot the server after the updates have been installed.

Following these steps, your legacy Dell PowerEdge R410 server should be fully updated, humming along smoothly in your homelab setup.

While firmware updates on legacy systems can feel like a Homeric odyssey, perseverance, patience, and a bit of ingenuity can lead to success. As we navigate our individual tech journeys, let’s remember to share our experiences and learnings along the way.

Sources:

  1. “The updates you are trying to apply are not Dell-authorized updates.” Frednotes. https://frednotes.wordpress.com/2012/11/21/the-updates-you-are-trying-to-apply-are-not-dell-authorized-updates/
  2. “iDRAC6 1.92 A00 Firmware Image.” Dell. https://www.dell.com/support/home/pt-pt/drivers/driversdetails?driverid=kg43r
  3. “BDF 1.5.5 BIN-12 USC File.” Dell. https://www.dell.com/support/home/pt-pt/drivers/driversdetails?driverid=g3g5f
  4. “Dell R410 BIOS Update.” Reddit. https://www.reddit.com/r/homelab/comments/yh75bk/dell_r410_bios_update/
  5. “Dell Firmware and BIOS Update Repository.” UpdateYoDell. https://updateyodell.net/

Self-Hosting Sentry? Here’s How to Deal with Disk Space Issues

If you’re hosting your own Sentry server for your side projects, you know the advantages it brings in terms of cost and control. However, with that control comes the responsibility of maintaining the server and keeping an eye on disk usage.

Here’s a simple solution to tackle disk space issues in Sentry:

Step 1: Cleanup Old Events

As time goes by, the events logged in Sentry accumulate, which may lead to an increase in disk usage. To deal with this, you can run the following command within the Sentry “worker” container:

sentry cleanup --days 30

This command will delete all the events that are older than 30 days, which will free up some space on your disk.
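
If you are running the standard Docker-based install, that translates to something like the following (the container name here is an assumption; check yours with docker ps):

docker ps --format '{{.Names}}' | grep worker
docker exec -it sentry_worker_1 sentry cleanup --days 30

You can also schedule this in a cron job so old events are pruned automatically.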

Step 2: Vacuum PostgreSQL

Sentry uses PostgreSQL as its database, and over time the tables accumulate dead rows (bloat), which eats disk space. To deal with this, you can vacuum PostgreSQL to reclaim it.

To vacuum PostgreSQL, you’ll need to enter the PostgreSQL container by running:

docker exec -it <your-postgres-container-name> psql -U postgres

Once inside the container, run the following command to vacuum PostgreSQL:

VACUUM FULL;

This command rewrites the tables to remove dead rows, freeing up disk space. Keep in mind, however, that VACUUM FULL takes an exclusive lock, so the database will be unavailable while it is running.
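
If you want to see where the space is actually going before taking that lock, you can list the largest tables first (standard PostgreSQL, run from the same psql session):

SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;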

By following these two simple steps, you can free up some disk space on your Sentry server and keep it running smoothly.

Self-hosting Sentry comes with its own set of challenges, but with the right solutions, you can keep your server up and running with ease. By following the steps outlined in this article, you can tackle disk space issues in Sentry and keep your server running smoothly.

How to Self-Host Matrix and Element (Docker Compose)

This is a complete guide to setting up Matrix (Synapse) and Element on a fresh Ubuntu 22.04 install.

Need a server? Try one of these two 🙂
Scaleway
Hetzner (20€ Free Credits)

What is Matrix?

Matrix is an open standard and communication protocol for real-time communication. It aims to make real-time communication work seamlessly between different service providers, just as the standard Simple Mail Transfer Protocol does for store-and-forward email, by allowing users with accounts at one communications service provider to communicate with users of a different provider via online chat, voice over IP, and videotelephony. Similar protocols, such as XMPP, have existed before, but Matrix is not based on XMPP or any other existing protocol. From a technical perspective, it is an application-layer communication protocol for federated real-time communication. It provides HTTP APIs and open-source reference implementations for securely distributing and persisting messages in JSON format over an open federation of servers, and it can integrate with standard web services via WebRTC, facilitating browser-to-browser applications.

Server Setup

Update your OS and make sure everything is ready:

sudo apt update && sudo apt upgrade

UFW Firewall

Let’s make sure ports 80 and 443 are open in UFW:

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

Install Docker & Docker-Compose

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo apt-get install python3 python3-pip -y
pip3 install docker-compose

Prepare our compose file

Create a matrix folder and, inside it, create a docker-compose.yml file with the following contents:

version: '2.3'
services:
  postgres:
    image: postgres:14
    restart: unless-stopped
    volumes:
     - ./postgresdata:/var/lib/postgresql/data

    # These will be used in homeserver.yaml later on
    environment:
     - POSTGRES_DB=synapse
     - POSTGRES_USER=synapse
     - POSTGRES_PASSWORD=STRONGPASSWORD
     # Synapse expects a C-collation database (applies on first initialization)
     - POSTGRES_INITDB_ARGS=--encoding=UTF8 --locale=C

  element:
    image: vectorim/element-web:latest
    restart: unless-stopped
    volumes:
      - ./element-config.json:/app/config.json
        
  synapse:
    image: matrixdotorg/synapse:latest
    restart: unless-stopped
    volumes:
     - ./synapse:/data

Don’t forget to change the Postgres password in this file to a secure password.
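
One more note: the Caddyfile later in this guide forwards traffic to fixed container IPs (10.10.10.3 for Element, 10.10.10.4 for Synapse), so the containers need predictable addresses. A minimal sketch of one way to achieve that, with an assumed network name and subnet, is to add a user-defined network to docker-compose.yml:

networks:
  matrix-net:
    driver: bridge
    ipam:
      config:
        - subnet: 10.10.10.0/24

Then attach each service to it with a networks block, e.g. for synapse:

    networks:
      matrix-net:
        ipv4_address: 10.10.10.4

Alternatively, publish ports 8008 (Synapse) and 80 (Element) on the host and point Caddy at localhost instead.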

Now, in the same folder, create a config file for the Element client, named element-config.json.

Copy the sample config (config.sample.json) from the element-web repository into your own config file, and adjust the following settings:

Add our own homeserver at the top of the file:

"default_server_config": {
        "m.homeserver": {
            "base_url": "https://matrix.example.com",
            "server_name": "matrix.example.com"
        },
        "m.identity_server": {
            "base_url": "https://vector.im"
        }
    },

Generate Synapse config

Now it’s time to generate the initial config of our Synapse server. Run the following command to do so:

sudo docker run -it --rm \
    -v "$HOME/matrix/synapse:/data" \
    -e SYNAPSE_SERVER_NAME=matrix.example.com \
    -e SYNAPSE_REPORT_STATS=yes \
    matrixdotorg/synapse:latest generate
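
If everything went well, the synapse folder now contains the generated files, along these lines:

ls ~/matrix/synapse
# homeserver.yaml  matrix.example.com.log.config  matrix.example.com.signing.key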

Configuring Synapse

Now let’s adjust the freshly generated Synapse config to our needs.

Edit the synapse/homeserver.yaml file as follows:

Comment-out the SQLite database:

#database:
#  name: sqlite3
#  args:
#    database: /data/homeserver.db

And now add our Postgres database:

database:
  name: psycopg2
  args:
    user: synapse
    password: STRONGPASSWORD
    database: synapse
    host: postgres
    cp_min: 5
    cp_max: 10

Spin it up!

Time to get things moving, spin-up your docker stack:

sudo docker-compose up -d

Create our first (admin) user

sudo docker exec -it matrix_synapse_1 bash
register_new_matrix_user -c /data/homeserver.yaml http://localhost:8008

Follow the on-screen instructions to create your first user.
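
The prompts look roughly like this (answer yes to the admin question for this first user):

New user localpart [root]: admin
Password:
Confirm password:
Make admin [no]: yes
Sending registration request...
Success!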

Install Caddy Reverse Proxy

Caddy will be used for the reverse proxy. It will handle incoming HTTPS connections and forward them to the correct Docker containers. It’s a simple setup, and Caddy will automatically fetch and renew Let’s Encrypt certificates for us!

First, head over to your home directory:

cd ~

Install Caddy:

sudo apt install caddy -y

Create a Caddyfile:

sudo nano Caddyfile

Paste the following config:

matrix.example.com {
  reverse_proxy /_matrix/* 10.10.10.4:8008
  reverse_proxy /_synapse/client/* 10.10.10.4:8008
  
  header {
    X-Content-Type-Options nosniff
    Referrer-Policy  strict-origin-when-cross-origin
    Strict-Transport-Security "max-age=63072000; includeSubDomains;"
    Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(), interest-cohort=()"
    X-Frame-Options SAMEORIGIN
    X-XSS-Protection 1
    X-Robots-Tag none
    -server
  }
}

element.example.com {
  encode zstd gzip
  reverse_proxy 10.10.10.3:80

  header {
    X-Content-Type-Options nosniff
    Referrer-Policy  strict-origin-when-cross-origin
    Strict-Transport-Security "max-age=63072000; includeSubDomains;"
    Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(), interest-cohort=()"
    X-Frame-Options SAMEORIGIN
    X-XSS-Protection 1
    X-Robots-Tag none
    -server
  }
}

Don’t forget to adjust the IPs and the domain names!

Reload Caddy (from the folder containing your Caddyfile):

caddy reload
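
To verify that everything is wired up, you can hit one of Synapse’s standard endpoints through the proxy (using the matrix.example.com domain from earlier):

curl https://matrix.example.com/_matrix/client/versions

This should return a JSON list of the Matrix spec versions your server supports.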

That’s it, you are done!

Head to your Element domain and log in!

Install Kubernetes on Ubuntu

Step 1: Install Docker

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

Step 2: Make sure containerd doesn’t have CRI disabled

Edit the file /etc/containerd/config.toml and make sure that the disabled_plugins setting is an empty list:

nano /etc/containerd/config.toml
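
The containerd.io package ships with "cri" in that list by default; the line should end up as an empty list:

disabled_plugins = []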

Then restart the containerd service:

service containerd restart

Step 3: Install Kubernetes

Start by installing the apt-transport-https package, which enables apt to fetch packages over HTTPS, along with curl, which will be needed in the next steps. Execute the following command:

sudo apt install apt-transport-https curl

Then, add the Kubernetes apt repository and its signing key on both nodes by executing the following commands:

echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
sudo mv ~/kubernetes.list /etc/apt/sources.list.d
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

After that, update the nodes:

sudo apt update

Once the update completes, we will install Kubernetes. This involves installing the various tools that make up Kubernetes: kubeadm, kubelet, kubectl, and kubernetes-cni.

sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Step 4: Disabling Swap Memory

Kubernetes fails to function in a system that is using swap memory. Hence, it must be disabled in the master node and all worker nodes. Execute the following command to disable swap memory:

sudo swapoff -a
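
Note that swapoff -a only lasts until the next reboot. To keep swap disabled permanently, comment out any swap entries in /etc/fstab, for example (this sed keeps a .bak backup; editing the file by hand works just as well):

sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab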

Step 5: Setting Unique Hostnames

Your nodes must have unique hostnames for easier identification. If you are deploying a cluster with many nodes, you can name your worker nodes node-1, node-2, and so on. In this guide, the nodes are named kubernetes-master and kubernetes-worker, set at the time of creating the servers. If you have not already done so, you can set or adjust them from the command line. To set the hostname on the master node, run the following command:

sudo hostnamectl set-hostname kubernetes-master
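
And on the worker node:

sudo hostnamectl set-hostname kubernetes-worker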

Step 6: Changing Docker Cgroup Driver

On both master and worker nodes, update the cgroupdriver with the following commands:

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Then, execute the following commands to restart and enable Docker on system boot-up:

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

Step 7: Initializing the Kubernetes Master Node

The first step in deploying a Kubernetes cluster is to fire up the master node. While on the terminal of your master node, execute the following command to initialize the kubernetes-master:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint <EXTERNAL-IP>

In the output, Kubernetes also displays some additional commands that you should run as a regular user on the master node before you start to use the cluster. Let’s run these commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

We have now initialized the master node. However, we also have to set up the pod network on the master node before we join the worker nodes.
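
At this point the master will show up as NotReady, which is expected until a pod network is in place. The output will look something like this:

kubectl get nodes
# NAME                STATUS     ROLES           AGE   VERSION
# kubernetes-master   NotReady   control-plane   90s   v1.26.x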

Step 8: Deploying a Pod Network

A pod network facilitates communication between servers and it’s necessary for the proper functioning of the Kubernetes cluster. You can read more about Kubernetes Cluster Networking from the official docs. We will be using the Flannel pod network for this tutorial. Flannel is a simple overlay network that satisfies the Kubernetes requirements.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Step 9: Joining Worker Nodes to the Kubernetes Cluster

First, log into your worker node in a separate terminal session. Use the kubeadm join command that was shown in your terminal when we initialized the master node in Step 7. It will look something like the following:

sudo kubeadm join XXXX:6443 --token XXX \
	--discovery-token-ca-cert-hash sha256:XXXX

Once the joining process completes, switch to the master node terminal and execute the following command to confirm that your worker node has joined the cluster:

kubectl get nodes
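
With both nodes joined and the pod network running, the output should look something like this (names, ages, and versions will match your own setup):

NAME                STATUS   ROLES           AGE   VERSION
kubernetes-master   Ready    control-plane   15m   v1.26.x
kubernetes-worker   Ready    <none>          2m    v1.26.x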