Nuno Facha

Introducing the Open Source Hydroponic NPK Calculator

If you are a hydroponic enthusiast or farmer, you know how crucial it is to ensure that your plants get the right nutrients to thrive. Among the most essential of these are N, P, and K (Nitrogen, Phosphorus, and Potassium). The correct ratio of these nutrients is key to maximizing plant growth and yield. However, getting the balance right can be tricky, especially for novice growers.

To make things easier for you, we are excited to announce the open-source release of the Hydroponic NPK Calculator. This calculator simplifies the process of calculating the correct NPK ratio for your plants by providing you with a simple tool that automates the process.

The Hydroponic NPK Calculator is an easy-to-use web-based tool that allows hydroponic farmers to calculate the required amount of nutrient solution based on the desired concentration and water volume. The calculator is written in HTML, CSS, and JavaScript and is free to use and modify under the MIT License.

How to use the Hydroponic NPK Calculator

Using the Hydroponic NPK Calculator is very straightforward. All you need to do is:

  1. Enter the water volume in liters.
  2. Enter the NPK ratio in the provided fields. The NPK ratio is the percentage of Nitrogen, Phosphorus, and Potassium in your nutrient solution.
  3. Enter the desired concentration of NPK in ppm (parts per million).
  4. Click on the “Calculate” button to get the required amount of nutrient solution in milliliters.

The calculator does the rest and provides you with the required amount of nutrient solution in milliliters. You can then add the nutrient solution to your water tank to get the correct NPK ratio for your plants.
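As a rough illustration of the arithmetic involved (a simplified mass-balance example, not necessarily the exact formula the calculator implements): to reach 150 ppm of nitrogen in 100 liters of water using a nutrient solution that is 7% nitrogen, you need 150 mg/L × 100 L = 15,000 mg of nitrogen, which works out to 15,000 / 0.07 ≈ 214,000 mg of solution, roughly 214 ml if its density is close to 1 g/ml.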

Why use the Hydroponic NPK Calculator?

The Hydroponic NPK Calculator simplifies the process of calculating the correct NPK ratio for your hydroponic plants. With this tool, you can ensure that your plants receive the right balance of nutrients to maximize growth and yield. The calculator also saves you time and minimizes the risk of over-fertilization, which can harm your plants.

Moreover, the Hydroponic NPK Calculator is open source, which means that you can modify and adapt it to your needs. You can also contribute to its development by reporting bugs, suggesting improvements, or submitting pull requests.

The Hydroponic NPK Calculator is a simple but powerful tool for hydroponic farmers. Try it out and let us know what you think!

Open Hydroponic NPK Calculator

https://facha.dev/hydroponics/npk-calculator

Self-Hosting Sentry? Here’s How to Deal with Disk Space Issues

If you’re hosting your own Sentry server for your side projects, you know the advantages it brings in terms of cost and control. However, with the control also comes the responsibility of maintaining the server and keeping an eye on the disk usage.

Here’s a simple solution to tackle disk space issues in Sentry:

Step 1: Cleanup Old Events

As time goes by, the events logged in Sentry accumulate, which may lead to an increase in disk usage. To deal with this, you can run the following command within the Sentry “worker” container:

sentry cleanup --days 30

This command will delete all the events that are older than 30 days, which will free up some space on your disk.
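If you are running the official Docker-based Sentry setup, one convenient way to run this from the host is via docker exec (the container name below is a placeholder, adjust it to whatever your worker container is actually called):

docker exec -it <your-sentry-worker-container-name> sentry cleanup --days 30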

Step 2: Vacuum PostgreSQL

Sentry uses PostgreSQL as its database, and over time, the database can become fragmented, leading to disk space issues. To deal with this, you can vacuum PostgreSQL to reclaim some space.

To vacuum PostgreSQL, you’ll need to enter the PostgreSQL container by running:

docker exec -it <your-postgres-container-name> psql -U postgres

Once inside the container, run the following command to vacuum PostgreSQL:

VACUUM FULL;

This command will remove dead rows from the PostgreSQL database, which will free up some disk space. However, keep in mind that the database will be blocked while the vacuum is running.
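If you want to see how much space this actually reclaims, you can check the database size before and after the vacuum with a standard PostgreSQL query (same placeholder container name as above):

docker exec -it <your-postgres-container-name> psql -U postgres -c "SELECT pg_size_pretty(pg_database_size(current_database()));"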

By following these two simple steps, you can free up disk space on your Sentry server and keep it running smoothly.

Self-hosting Sentry comes with its own set of challenges, but with the right maintenance routine in place, disk space does not have to be one of them.

Playing with SDR, ADS-B (Plane Tracking), and what is next

What even is SDR?

SDR stands for Software Defined Radio. In short, it means that most of the signal processing traditionally done in dedicated radio hardware is instead done in software on your computer, with an inexpensive receiver (such as an RTL-SDR USB dongle) capturing the raw signal.

If you’re unfamiliar with SDR, you may not know that it has been around since the 1980s, originally in the form of direct conversion receivers (DCR). An example of what a modern SDR “kit” looks like is shown below:

SDRs are commonly used by hobbyists. Most people have at least heard of them, and they do what their name suggests: the radio’s behaviour is defined in software.

What can you do with an SDR?

You can do a variety of things with an SDR, for example:

  • Receive broadcast radio
  • Amateur radio
  • Radio astronomy
  • Track ships via AIS transmissions
  • Track aircraft via Mode S transponder (and this will be today’s project)
  • Listen in on “walkie talkies”

Project 1: ADS-B – Tracking Planes

Back in January, I started to play around with SDR, mostly in order to be able to track aircraft using their Mode-S Transponder signals.

Hardware

My ADS-B station is currently running the following hardware:

SDR Dongle: AirNav RadarBox FlightStick

Antenna: AirNav ADS-B 1090MHz External Antenna

Board: Raspberry Pi 3 Model B+

Enclosure: Random box bought at the local supermarket, and some DIY hot-glue stuff

Network connection is currently done via Wi-Fi

Software

This station is currently running the PiAware image, which includes dump1090-fa (feeding data to https://flightaware.com/ out of the box), with some additional software installed to feed a few other sites:

Some other websites/services work the other way around: instead of my station pushing data to them, they connect to my station (with the relevant firewall exceptions in place) to pull the data. These services are:

And some additional software for metrics purposes:

Results

I have been gathering aircraft tracking data with this setup since January 2022, and below are some metrics from that data. Before sharing the numbers, here is my maximum theoretical range (according to heywhatsthat.com) at my antenna location, taking terrain into consideration for airborne targets (red represents the horizon visual range, orange targets at 10,000 feet, and the blue line targets at 30,000 feet):

Versus my actual range:

If you want to check the live feeds you can do so here:

ADSB Stats Logger:

Data range: 2022-03-08 13:14:53 - 2022-09-03 13:38:11
Unique Flights: 1600
Unique Operators: 522
Max Altitude: Flight SAT408 37.2 km at 2022-08-19 21:38:10
Max Speed: Flight MEDIC16 1836.1 km/h at 2022-05-29 22:10:01
Max Station Distance: Flight AFR457 664.5 km at 2022-07-21 03:33:57
Min Station Distance: Flight RYR2624 0.013 km at 2022-05-15 19:38:21
Max Signal: Flight UAL216 -0.9 dB at 2022-06-02 11:22:28
Min Signal: Flight AEA194 -28.6 dB at 2022-08-04 07:52:23

graphs1090 (6-month period)

What’s next?

In the last few weeks I have been playing around with a couple of new SDR things to upgrade my setup for a new project. These are:

AIS Ship Tracking

Listening to Air Traffic Control

Santa Maria ATIS
Ponta Delgada ATIS

I’m currently waiting for Amazon to deliver some adaptors and connectors so I can hook more gear up to my external antenna and develop both of these projects, so stay tuned for future posts about them 🙂

How to Self-Host Matrix and Element (Docker Compose)

This is a complete guide on setting up Matrix (Synapse) and Element on a fresh Ubuntu 22.04.

Need a server? Try one of these two 🙂
Scaleway
Hetzner (20€ Free Credits)

What is Matrix?

Matrix is an open standard and communication protocol for real-time communication. It aims to make real-time communication work seamlessly between different service providers, just like standard Simple Mail Transfer Protocol email does now for store-and-forward email service, by allowing users with accounts at one communications service provider to communicate with users of a different service provider via online chat, voice over IP, and videotelephony. Such protocols have existed before, such as XMPP, but Matrix is not based on that or any other communication protocol.

From a technical perspective, it is an application layer communication protocol for federated real-time communication. It provides HTTP APIs and open source reference implementations for securely distributing and persisting messages in JSON format over an open federation of servers. It can integrate with standard web services via WebRTC, facilitating browser-to-browser applications.

Server Setup

Update your OS and make sure everything is ready

sudo apt update && sudo apt upgrade

UFW Firewall

Let’s make sure ports 80 and 443 are open on UFW:

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
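On a fresh server UFW may not be active yet. If that is the case, allow SSH first so you don’t lock yourself out, then enable the firewall and confirm the rules:

sudo ufw allow ssh
sudo ufw enable
sudo ufw status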

Install Docker & Docker-Compose

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
apt-get install python3 python3-pip -y
pip3 install docker-compose
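You can quickly confirm that both tools are available before moving on:

docker --version
docker-compose --version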

Prepare our compose file

Create a matrix folder and, inside it, create a docker-compose.yml file with the following contents:

version: '2.3'
services:
  postgres:
    image: postgres:14
    restart: unless-stopped
    volumes:
      - ./postgresdata:/var/lib/postgresql/data

    # These will be used in homeserver.yaml later on
    environment:
      - POSTGRES_DB=synapse
      - POSTGRES_USER=synapse
      - POSTGRES_PASSWORD=STRONGPASSWORD

  element:
    image: vectorim/element-web:latest
    restart: unless-stopped
    volumes:
      - ./element-config.json:/app/config.json

  synapse:
    image: matrixdotorg/synapse:latest
    restart: unless-stopped
    volumes:
      - ./synapse:/data

Don’t forget to change the Postgres password in this file to a secure password.

Now, in the same folder, create a config file for the Element client, named element-config.json.

Copy the example file from element.io into your own config file, and adjust the following settings:

Add our own homeserver at the top of the file:

"default_server_config": {
        "m.homeserver": {
            "base_url": "https://matrix.example.com",
            "server_name": "matrix.example.com"
        },
        "m.identity_server": {
            "base_url": "https://vector.im"
        }
    },

Generate Synapse config

Now it’s time to generate the initial config of our Synapse server. Run the following command to do so:

sudo docker run -it --rm \
    -v "$HOME/matrix/synapse:/data" \
    -e SYNAPSE_SERVER_NAME=matrix.example.com \
    -e SYNAPSE_REPORT_STATS=yes \
    matrixdotorg/synapse:latest generate
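This should drop a homeserver.yaml, a signing key, and a log config into the synapse folder; you can confirm with:

ls ~/matrix/synapse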

Configuring Synapse

Now let’s adjust the freshly generated Synapse config to our needs.

Edit the synapse/homeserver.yaml file as follows:

Comment-out the SQLite database:

#database:
#  name: sqlite3
#  args:
#    database: /data/homeserver.db

And now add our Postgres database:

database:
  name: psycopg2
  args:
    user: synapse
    password: STRONGPASSWORD
    database: synapse
    host: postgres
    cp_min: 5
    cp_max: 10

Spin it up!

Time to get things moving. Spin up your Docker stack:

sudo docker-compose up -d
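You can check that all three containers came up, and follow the Synapse logs while it initializes the database:

sudo docker-compose ps
sudo docker-compose logs -f synapse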

Create our first (admin) user

sudo docker exec -it matrix_synapse_1 bash
register_new_matrix_user -c /data/homeserver.yaml http://localhost:8008

Follow the on-screen instructions to create your first user

Install Caddy Reverse Proxy

Caddy will be used for the reverse proxy. This will handle incoming HTTPS connections and forward them to the correct Docker containers. It’s a simple setup process, and Caddy will automatically fetch and renew Let’s Encrypt certificates for us!

First, head over to our user directory:

cd ~

Install Caddy:

sudo apt install caddy -y

Create a Caddyfile:

sudo nano Caddyfile

Paste the following config:

matrix.example.com {
  reverse_proxy /_matrix/* 10.10.10.4:8008
  reverse_proxy /_synapse/client/* 10.10.10.4:8008
  
  header {
    X-Content-Type-Options nosniff
    Referrer-Policy  strict-origin-when-cross-origin
    Strict-Transport-Security "max-age=63072000; includeSubDomains;"
    Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(), interest-cohort=()"
    X-Frame-Options SAMEORIGIN
    X-XSS-Protection 1
    X-Robots-Tag none
    -server
  }
}

element.example.com {
  encode zstd gzip
  reverse_proxy 10.10.10.3:80

  header {
    X-Content-Type-Options nosniff
    Referrer-Policy  strict-origin-when-cross-origin
    Strict-Transport-Security "max-age=63072000; includeSubDomains;"
    Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(), interest-cohort=()"
    X-Frame-Options SAMEORIGIN
    X-XSS-Protection 1
    X-Robots-Tag none
    -server
  }
}

Don’t forget to adjust the IPs and the domain names!
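The IPs above are the container IPs on the Docker network. If you are unsure what yours are, one way to look them up (the container names below are a guess based on the matrix folder name, yours may differ) is:

sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' matrix_synapse_1
sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' matrix_element_1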

Reload Caddy

caddy reload
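Note that the apt package also runs Caddy as a systemd service that reads /etc/caddy/Caddyfile. If you prefer to keep your config there instead of in your home directory, reload it with:

sudo systemctl reload caddy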

You are done. Were you looking for more?

Head to your Element domain and log in!

Install Kubernetes on Ubuntu

Step 1: Install Docker

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

Step 2: Make sure containerd doesn’t have CRI disabled

Edit the file /etc/containerd/config.toml and make sure that the disabled_plugins setting is blank (i.e. "cri" is not listed there):

nano /etc/containerd/config.toml

Then restart the containerd service:

service containerd restart

Step 3: Install Kubernetes

You will start by installing the apt-transport-https package, which lets apt retrieve packages over HTTPS. Also install curl, as it will be necessary for the next steps. Execute the following command:

sudo apt install apt-transport-https curl

Then, add the Kubernetes apt repository and its signing key on both nodes by executing the following commands:

echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
sudo mv ~/kubernetes.list /etc/apt/sources.list.d
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

After that, update the nodes:

sudo apt update

Once the update completes, we will install Kubernetes. This involves installing the various tools that make up Kubernetes: kubeadm, kubelet, kubectl, and kubernetes-cni:

sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
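Optionally, you can hold these packages so a routine apt upgrade does not bump your cluster to a new Kubernetes version by accident:

sudo apt-mark hold kubelet kubeadm kubectl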

Step 4: Disabling Swap Memory

Kubernetes fails to function in a system that is using swap memory. Hence, it must be disabled in the master node and all worker nodes. Execute the following command to disable swap memory:

sudo swapoff -a
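Note that swapoff only lasts until the next reboot. To make it permanent, comment out any swap entries in /etc/fstab, for example with:

sudo sed -i '/ swap / s/^/#/' /etc/fstab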

Step 5: Setting Unique Hostnames

Your nodes must have unique hostnames for easier identification. If you are deploying a cluster with many nodes, you can name your worker nodes node-1, node-2, and so on. As mentioned earlier, we have named our nodes kubernetes-master and kubernetes-worker, and set the names when creating the servers. If you have not already done so, you can set or adjust the hostnames from the command line. To set the hostname on the master node, run the following command:

sudo hostnamectl set-hostname kubernetes-master
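And on the worker node, following the same naming convention:

sudo hostnamectl set-hostname kubernetes-worker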

Step 6: Changing Docker Cgroup Driver

On both master and worker nodes, update the Docker cgroup driver with the following commands:

sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts":
{ "max-size": "100m" },
"storage-driver": "overlay2"
}
EOF

Then, execute the following commands to restart and enable Docker on system boot-up:

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
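You can verify that Docker picked up the new cgroup driver with:

sudo docker info | grep -i cgroup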

Step 7: Initializing the Kubernetes Master Node

The first step in deploying a Kubernetes cluster is to fire up the master node. While on the terminal of your master node, execute the following command to initialize the kubernetes-master:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint <EXTERNAL-IP>

In the output, Kubernetes also displays some additional commands that you should run as a regular user on the master node before you start to use the cluster. Let’s run these commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

We have now initialized the master node. However, we also have to set up the pod network on the master node before we join the worker nodes.

Step 8: Deploying a Pod Network

A pod network facilitates communication between servers and it’s necessary for the proper functioning of the Kubernetes cluster. You can read more about Kubernetes Cluster Networking from the official docs. We will be using the Flannel pod network for this tutorial. Flannel is a simple overlay network that satisfies the Kubernetes requirements.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
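After a minute or two, you can check that the Flannel and core system pods are up and running:

kubectl get pods --all-namespaces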

Step 9: Joining Worker Nodes to the Kubernetes Cluster

First, log into your worker node in a separate terminal session. Use the kubeadm join command that was shown in your terminal when we initialized the master node in Step 7. It will look something like the following:

kubeadm join XXXX:6443 --token XXX \
	--discovery-token-ca-cert-hash sha256:XXXX
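If you no longer have that output at hand, you can generate a fresh join command on the master node with:

kubeadm token create --print-join-command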

Once the joining process completes, switch to the master node terminal and execute the following command to confirm that your worker node has joined the cluster:

kubectl get nodes

Ender-3 ESteps Calibration

E-steps is the number of steps the extruder stepper motor takes to extrude one millimeter of filament. Incorrectly calibrated e-steps can cause under-extrusion, and bad bed adhesion too.

The process to calibrate your ESteps is very easy:

  1. Heat up the extruder and remove the filament
  2. Remove the bowden tube from the extruder pneumatic fitting, so you can see the filament coming out of it.
  3. Insert filament until it reaches the pneumatic fitting and snip it flush with the hole
  4. Manually extrude 100 mm of filament from the printer's menu
  5. When the printer finishes extruding, snip the filament again at the extruder pneumatic fitting
  6. Using calipers, measure how much filament was extruded
  7. Go to your printer settings and note down your current EStep value (For Ender-3: Menu > Control > Motion > Steps/mm > Esteps/mm; for most Ender-3s the default is 93 steps/mm)
  8. Calculate your new ESteps using the following formula: NewValue = OldValue x 100 / MeasuredFilament. For example, if your configured ESteps were 93 and you measured 96.6 mm, your new EStep value would be ~96.3 steps/mm (93 x 100 / 96.6 ≈ 96.27)
  9. Set this new EStep value in your printer settings (For Ender-3: Menu > Control > Motion > Steps/mm > Esteps/mm), or send it over G-code as shown after this list
  10. Don't forget to save the settings, or you will wonder why your prints are not sticking again the next day!
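If you prefer to set the value over a terminal (OctoPrint, Pronterface, etc.) instead of the printer menu, and assuming a Marlin-based firmware like the stock Ender-3 firmware, the equivalent G-code looks like this:

M92 E96.27  ; set extruder steps/mm to the newly calculated value
M500        ; save the settings to EEPROM
M503        ; optional: print the current settings back to confirm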