The end to distro-hopping?

Since I moved to running Linux basically full time, I have run Pop!OS, Fedora, Nobara, back to the Fedora KDE Spin and now OpenSUSE Tumbleweed. All of them are good distributions and none of them have any real deal-breakers that would keep me from using them.

The fact that I really, really like Tumbleweed was not a given. Quite the contrary. OpenSUSE does its own thing, so there is a bit of a learning curve if you are used to Debian-based or RedHat-based distributions. It is one of the oldest distributions still operating today, which puts it in the same company as Debian and Slackware, the latter actually being the origin of (then) SUSE Linux.

But I digress. My experience with SUSE in the past (20-odd years ago) was less than favorable, to the point that I abandoned it and never really looked at it again until now. Installation was painless, and it lets you choose your desired desktop environment as part of the installation procedure. I run KDE, which is the first option, so I picked it and let the installer do its thing.

After the install I was back in business. Mind you: I have /home on a separate disk, so all I have to do when reinstalling or switching to a new distribution is make sure that drive is mounted on /home, and I have all my (program) settings back and my desktop looks the same as before. That saves me the time of setting everything up the way I want again, and there is no data loss when doing a reinstall.
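
For reference, this is roughly what that looks like in /etc/fstab; the UUID and filesystem below are placeholders for illustration, so substitute your own (blkid will show the real UUID of your /home drive):

# Separate /home drive mounted by UUID (values here are made up, check yours with: sudo blkid)
UUID=1111-2222-3333-4444  /home  ext4  defaults,noatime  0  2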

Some observations after installing OpenSUSE Tumbleweed:

  • Using OpenSUSE means using BTRFS done right: proper subvolumes set up during install, snapper configured out of the box to create snapshots when installing software and updates, and grub configured to boot from previously created snapshots so you can roll back if something goes wrong. No more worrying about effing up your system (a few of the everyday commands involved are sketched after this list).
  • YaST2 is a very useful tool. It let me configure and add my iSCSI targets in two minutes tops, and they reconnect automatically on boot, whereas most other distributions make that a painful, manual task that requires workarounds to mount your drives after a reboot. I don’t really need a GUI tool to configure my systems, but this tool lets you configure all aspects of the system, and sometimes it is just faster to do it once in the GUI than to do it manually.
    Note: the original YaST is the reason I quit SUSE years ago and never came back. Different story.
  • It has a huge repository of programs, including non-FOSS software if you add the Packman repos. And if you install opi (openSUSE Package Installer) you have the entire OBS (Open Build Service) at your fingertips, including user repositories. Basically you have the OpenSUSE equivalent of Arch’s AUR.
  • Installation of proprietary drivers and codecs is easy and painless, as with most distributions. That said, I think OpenSUSE is the only distro where the Nvidia drivers come straight from a repo at Nvidia.
  • Tumbleweed is a rolling distribution, so you always have the latest software, but it is better tested than something like Arch, which means issues are less likely to arise.
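
For a flavour of what day-to-day use looks like, here is a minimal sketch of the commands behind the points above; the package name in the opi example is just an illustration:

# Update a Tumbleweed system (distribution upgrade, the recommended way on a rolling release)
sudo zypper dup
# List the snapshots snapper created before/after package operations
sudo snapper list
# After booting into an older snapshot from grub, make it the new default
sudo snapper rollback
# Search the Open Build Service, including user repositories (package name is an example)
opi some-package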

Some inconveniences I ran into:

  • Both my laptop and desktop were unable to load Wayland and had to use Xorg. Not a big deal. It was caused by an SDDM bug where the SHELL variable gets messed up if you use the fish shell as your login shell. This was fixed by the KDE team in ’21/’22, but the fix was never backported by the OpenSUSE team. The workaround is to change the login shell back to standard bash and tell the terminal app to start fish instead (the commands are sketched below). Now I can run Wayland on my laptop, while still using Xorg on my desktop as that simply works better with Nvidia graphics. But it will run Wayland if I want it to.
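
A minimal sketch of that workaround, assuming fish is currently your login shell and you use Konsole (the terminal-side change is made per profile under Settings > Edit Current Profile > Command):

# Set the login shell back to bash for your user
chsh -s /bin/bash
# Then point your terminal emulator at fish, e.g. set the Konsole profile command to:
/usr/bin/fish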

The other things I did after installing Tumbleweed are not OpenSUSE specific, but more generic Linux optimizations.

  • I used tuned with tuned-adm to select and set an appropriate tuning profile for my systems (a quick way to see the available profiles is sketched right after this list).
  • I set up a couple of scripts for my laptop to switch profiles and CPU governors depending on whether the laptop is running on AC power or on battery.
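
To see which profiles tuned ships with and which one is active, something like this works (these are standard tuned-adm subcommands):

# Show all available tuning profiles
sudo tuned-adm list
# Show the currently active profile
sudo tuned-adm active
# Let tuned suggest a profile for this machine
sudo tuned-adm recommend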

When running on AC power:

#!/bin/bash
#
# Set CPU governor and tuned profile to performance
if [ -e /usr/bin/cpupower ]; then
    sudo cpupower frequency-set --governor performance
    sudo tuned-adm profile latency-performance
else
    echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    sudo tuned-adm profile latency-performance
fi
# EOF

When running on battery power:

#!/bin/bash
#
# Set CPU governor to powersave
if [ -e /usr/bin/cpupower ]; then
	sudo cpupower frequency-set --governor powersave
	sudo tuned-adm profile powersave
else
	echo powersave | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
	sudo tuned-adm profile powersave
fi
# EOF

These are set in KDE settings under power management, see screenshot:

  • On the desktop, I just set the tuned profile to latency-performance using tuned-adm. This is a profile that aims to prioritize latency over throughput, which is great for desktop use with a GUI.
  • I also set some sysctl parameters as outlined in another post.

Like I said, these are not Tumbleweed or OpenSUSE specific optimizations, nor are they necessary, but it is how I prefer to set up my systems. I feel it helps me squeeze the most performance out of my systems and, in the case of the laptop, preserve battery life when needed.

When all is said and done, I really like OpenSUSE Tumbleweed. I like the bootable snapshots. I like having the latest versions of different software without having to wait for a next release. I like the tools provided to admin the system. It’s a good’un. I think I’ll keep it. Then again, I said that of Fedora. And Nobara. But I do think it is true this time.

Running redundant AdGuard@Home DNS servers on Synology

This is a quick post on how to run multiple instances of AdGuard@Home on a single Synology host using Docker.

Problem: using Docker, you cannot easily run multiple instances of the same program (or different programs) listening on the same port.

Solution: do not use host or bridge networking, but put the containers on the same network as the host using macvlan.

To achieve this, we need to do the following:

Prerequisites

  1. Find the name of the network interface your Synology is using to connect to the network you want your Docker containers to run on. This can be, for instance, eth0 for a single interface, or bond0 when you use channel bonding. You can find this under Control Panel > Network > Network Interface. In my case this is bond0, which is what I will use in the examples below (a quick command-line check is sketched after this step).
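
If you have SSH access to the Synology, you can also check from the shell with standard iproute2 commands (older DSM versions may ship an ip binary without the -br flag; plain ip link and ip addr work too):

# List all network interfaces and their state
ip -br link
# Show the addresses on the bonded interface (bond0 in my case)
ip -br addr show bond0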

Configuring the interface

Now we have to configure the interface Docker can use. We do this by adding a bridge on top of the existing physical interface you use on your network.

  1. ip link add macvlan-br0 link bond0 type macvlan mode bridge
    This adds a bridge device on top of the physical interface with the name macvlan-br0
  2. ip addr add 192.168.0.254/32 dev macvlan-br0
    This adds an IP address to the bridge device so the host has an IP address in the range we will give to Docker
  3. ip link set macvlan-br0 up
    This will activate the virtual bridge device
  4. ip route add 192.168.0.192/26 dev macvlan-br0
    This will add a route to the Docker network so it can be reached

You will have to put this in a script that runs at boot on your Synology device, because these settings will not persist across a reboot; we have to make them on the command line and cannot make them in Synology DSM.

#!/bin/bash
#
# Wait for the host network to be up and running
sleep 60
#
# Recreate the host macvlan bridge
ip link add macvlan-br0 link bond0 type macvlan mode bridge
ip addr add 192.168.0.254/32 dev macvlan-br0
ip link set macvlan-br0 up
ip route add 192.168.0.192/26 dev macvlan-br0

Docker

Now that we have set up the host, we can continue creating a new network in Docker that can be used by our containers. To do this, type:

docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.1 \
  --ip-range=192.168.0.192/26 \
  --aux-address 'host=192.168.0.50' \
  -o parent=bond0 macvlan-br0

This will create a network in Docker on the subnet of your network in a dedicated range of IP addresses using your physical interface and virtual bridge device.

Do make sure the IP range you specify for Docker is not part of your DHCP scope if you are also running DHCP, or you will get IP conflicts. Docker does not use DHCP; instead it hands out IP addresses from this range, in order, to each container.

Now you can create your Docker containers as usual and configure them to use this network, instead of the standard host or bridge networks Docker uses by default. You can also assign this network to existing containers if you want.
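
If you prefer the command line over the DSM Docker UI, attaching a container to this network looks roughly like this; the container name, static IP and volume paths are examples, not taken from my actual setup:

docker run -d --name adguardhome-1 \
  --network macvlan-br0 \
  --ip 192.168.0.193 \
  -v /volume1/docker/adguard1/work:/opt/adguardhome/work \
  -v /volume1/docker/adguard1/conf:/opt/adguardhome/conf \
  adguard/adguardhome

Run a second instance with a different name and IP on the same network and you have your redundant pair.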

Install container

To install AdGuard@Home on your Synology, open Docker, go to “Registry” and search for “adguard”.

Double click to download the latest version.

When it is done downloading, you can go to the “Container” section and hit Create. A window will pop up where you can select your freshly downloaded image.

Select it and click next to follow the instructions. Configure what you want, but make sure to select the macvlan network when you get to the screen to pick the network.

Congratulations! Test your new container that is now present on your local network.

Important note: if you are running a DNS server on your Synology for your local network, and you want to keep using that, make sure you configure AdGuard as the DNS server for your clients, and add the following to your AdGuard DNS configuration under upstream servers:

[/domain.local/]192.168.0.254:53

This will instruct AdGuard to use the IP address you added to your bridge device at the beginning as the source for resolving domain.local hosts. 

The complete upstream section for me looks something like this (I use Cloudflare DNS for the internet, and Synology DNS for local addresses):
tls://1.1.1.1
tls://1.0.0.1
[/domain.local/]192.168.0.254:53

Do not use the real IP address of your Synology host; it is not reachable from the Docker containers and will not work! Use the bridge device address instead.

New laptop, new Linux distro

I finally got fed up with my MacBook Pro as a Linux laptop. It was a 2012 model, no longer supported by Apple, and the quirky implementation of Nvidia discrete graphics on the MacBook wasn’t the best for reliable Linux operation. Mind you, Nvidia is always a bit of a challenge, but Apple being Apple certainly does not help in that regard.

Anyway, I am currently sporting a Lenovo T460 from 2016. Still not really a new laptop, but fast enough for Linux and most things I do with it. It has an i5-6300U from the Skylake family of processors, 8GB of memory (for now; it supports up to 32GB) and a 1TB SSD. For graphics it uses the Intel HD Graphics 520.

For the OS, I am now running Nobara Linux 37. This is a Fedora derivative with tweaks for performance and gaming. While I am not a gamer, the improvements do help make it a snappy experience. Unlike on my desktop computer, I am now using the KDE version.

Current Nobara 37 desktop on my laptop

Getting used to KDE after having used Gnome for a long time can cause some frustration. Not because what you want isn’t possible (typically quite the opposite) but because you cannot find it or don’t know where to look. Where Gnome has more of a ‘our way or the highway’ mentality and is more concerned with its vision than with how users perceive or use it, KDE is all about customization and giving the user the ultimate choice. If you can dream it, KDE can probably be configured to do it. And that can be a bit overwhelming. It has so many options that sometimes you just have no idea where to look to achieve something.

So far, I am very happy, both with the laptop and with Nobara Linux and KDE. I may have to switch my desktop to the same configuration soon… Now that I am getting the hang of KDE again, I remember why it was my default and go-to desktop environment in the past, and the quirks and annoyances of Gnome seem more and more like stuff I don’t really want to deal with anymore. You shouldn’t have to hack together your environment around the limitations imposed by its maintainers. It’s Linux, not Windows or MacOS…

Upgraded laptop to Fedora 36

Fedora 36 with a few extensions to make it the way I like it.

The upgrade was smooth, as is to be expected with Fedora, and everything is running smoothly. Libadwaita is not as bad as I thought it would be, and it actually looks really good. I wish they had taken the time and effort to create a GTK3 theme for applications that are not libadwaita-aware, but fortunately someone else did; without it, things look horribly inconsistent. Bad move.

Other than this, I do not miss the theming at the moment now that I have added some transparency to the top bar and dock.

Optimizing Linux for desktop performance

My daily driver is currently Pop!_OS, which is a desktop Linux distribution. It’s a very nice distribution, really good with Nvidia hardware (which isn’t a given on Linux), and to me its Gnome look is very close to what I want, so my GUI changes are minimal.

What’s less nice, and that is a more generic Linux problem, is that the Linux kernel in particular is optimized for server use and not for the desktop. It prioritizes throughput over latency, which is great for raw performance but less so if you expect a smooth, fast GUI.

We can fix that.

Kernel

This first one is optional and controversial. Many will say a custom kernel is not needed and does not add anything. On my computer, however, using the Xanmod kernel does make the GUI significantly faster and smoother. Installation instructions are on their page.

Second, we want to pass two boot parameters to the kernel when booting. If using systemd-boot, like Pop! does, open the corresponding file under /boot/efi/loader/entries and add:

nvme_core.default_ps_max_latency_us=0 pcie_aspm=off

to the line starting with options.

If using grub, add the same to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub and run update-grub to activate.
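
For illustration, the end result looks roughly like this; the entry file name, root UUID and pre-existing options are placeholders, only the two added parameters matter:

# systemd-boot: a file under /boot/efi/loader/entries (exact file name varies per install)
options root=UUID=xxxx rw quiet nvme_core.default_ps_max_latency_us=0 pcie_aspm=off

# grub: /etc/default/grub, then run: sudo update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvme_core.default_ps_max_latency_us=0 pcie_aspm=off"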

Options

Part two is modifying the sysctl parameters. Under /etc/sysctl.d you will find files that set certain parameters on how your system works. Create a new file, and add the following:

# These are settings from /etc/sysctl.d/ and can be activated by running sysctl --system as root
# Save this file in that location.
#
# These settings set the disk caching for the system
#
vm.dirty_bytes = 33554432
vm.dirty_background_bytes = 8388608
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 300
#
# We need to either use *_ratio, or we need to use *_bytes. We cannot use both. Currently
# using _bytes, so disabling _ratio
#
# vm.dirty_background_ratio = 10
# vm.dirty_ratio = 80
#
vm.page-cluster = 0
# Increased to improve random IO performance
fs.aio-max-nr = 1048576

This will set certain parameters pertaining to disk caching and IO performance. You can activate them by running sysctl --system as root, or by rebooting your system.
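
A quick way to apply and verify, assuming the file was saved under /etc/sysctl.d/ (the file name below is just an example):

# Reload all sysctl configuration files, including e.g. /etc/sysctl.d/99-desktop-tuning.conf
sudo sysctl --system
# Verify one of the values was picked up
sysctl vm.dirty_bytes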

Disk

To optimize your disks, if you are using an SSD, it’s worth making some changes to your /etc/fstab. There are two parts to this:

  1. Mount the root filesystem with settings optimized for SSDs
  2. Ensure temporary directories are running from memory by mounting them in a tmpfs to limit disk writes and extend the life of your SSD.

For the first one, I mount my root device in /etc/fstab like:

<device>  /  btrfs  defaults,ssd,noatime  0  0

For the second, add these lines to /etc/fstab

# SSD tweak: temporary directories as tmpfs
tmpfs   /tmp       tmpfs   defaults,noatime,mode=1777   0 0
tmpfs   /var/tmp   tmpfs   defaults,noatime,mode=1777   0 0
tmpfs   /var/log   tmpfs   defaults,noatime,mode=0755   0 0
tmpfs   /var/spool tmpfs   defaults,noatime,mode=1777   0 0

DISCLAIMER: Putting anything other than /tmp into memory can produce unpredictable results in specific circumstances. It should be OK on desktop machines, and it helps extend the life of your SSD by limiting writes. Do not enable this on servers. Actually, most of what is on this page may have an adverse effect on server performance.

Activate by rebooting your system. Enjoy a faster, more responsive system.

[FEB 8/22 UPDATE]: Since publishing this article I have moved from Pop!_OS to Fedora. Fedora is a cutting edge distribution, which does not require all of these tune-ups to make a snappy OS out of the box.

Let’s elaborate.

  • The kernel parameters mentioned above do not need to be updated on Fedora
  • The sysctl.d modifications are not required on Fedora; I made them simply because I have more than plenty of memory anyway. The out-of-the-box defaults on Fedora are better than those of Pop!_OS
  • The disk optimizations in /etc/fstab are set by default

Fedora Update

Fedora uses a different package management system than Pop!_OS, which is Ubuntu based. While Debian derivatives like Ubuntu and Pop!_OS use apt as their package manager, Fedora is RedHat based. RedHat uses rpm packages, which are managed by yum or dnf (depending on the version of the OS).

By default, dnf is quite slow compared to apt but this is easily fixed by adding some parameters to the configuration file.

[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
best=False
skip_if_unavailable=True
max_parallel_downloads=10
fastestmirror=True

The bottom two lines need to be added to /etc/dnf/dnf.conf. The first one increases the number of simultaneous download connections to 10, which increases download speed. The second one looks for the fastest mirror from your location, which ensures you get the maximum possible download speed. Combined, these make dnf operate as fast as or faster than apt on most systems.

Due to the nature of a bleeding edge distribution like Fedora, it can sometimes be tricky to update, especially the kernel and/or kernel drivers. To avoid such problems, I run updates with the --exclude=kernel* flag. In fact, I wrote a function for my fish shell to get and install updates without the kernel, like so:

function up2date
  sudo dnf upgrade -y --refresh --exclude=kernel\*
end

And saved it as up2date.fish under $HOME/.config/fish/functions.

Moving a Linux install to a new disk

Recently I had to move my Linux install from one drive to another, as I was experiencing some issues with a WD SN550 NVMe drive causing short random freezes of the GUI during IO-intensive tasks. Since I also have a Samsung NVMe drive installed, I decided to see if the problem persists on the other drive.

But… Having a fully configured and customized Linux install is a pain in the behind to redo. I did not want to clone the drive, because I made a mistake during the previous install and it ended up in legacy MBR mode, so I wanted to do a proper install using UEFI mode, but preferably without having to redo all the setup and customizations.

And I didn’t have to. Apt to the rescue.

$ apt-mark showauto > pkgs_auto.lst
$ apt-mark showmanual > pkgs_manual.lst

These commands generate lists of the .deb packages installed on the system: the first one lists all the automatically installed packages, the second one all the packages installed manually from the command line.

I also made a backup of /etc/apt/sources.list.d and /etc/apt/trusted.gpg.d. The first directory contains all the repositories I had in use on the original install, and the second directory holds all the GPG keys that go with those repositories. Important!
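
A minimal sketch of that backup, assuming you keep the files somewhere that survives the reinstall, such as the separate /home partition (the backup path is just an example):

# Back up package lists and repository configuration to a directory that survives the reinstall
mkdir -p ~/reinstall-backup
apt-mark showauto > ~/reinstall-backup/pkgs_auto.lst
apt-mark showmanual > ~/reinstall-backup/pkgs_manual.lst
sudo cp -r /etc/apt/sources.list.d /etc/apt/trusted.gpg.d ~/reinstall-backup/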

First install the base system on the new drive and make sure all updates are installed. You don’t need to install or set up anything beyond the base system. Now you can continue with the backups and files you created earlier.

After I moved the two directories above to their respective places on the new install, and of course ran sudo apt update && sudo apt upgrade to make sure all packages were still up to date, I loaded up the lists of packages I created earlier.

$ sudo apt install $(cat pkgs_auto.lst)
$ sudo apt install $(cat pkgs_manual.lst)

This will produce some errors due to packages that cannot be installed like this, or packages that were installed from a .deb file and are not present in any repository. Clean those entries out of the lists, try again and let it run.

When it is finished, copy your /home/<user> from your old drive to your new drive, and when you reboot and log back in as your user, everything should be just as it was before.
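
Copying the home directory can be done with rsync; a sketch, assuming the old drive is mounted under /mnt/olddisk (adjust the paths and user name to your situation):

# Preserve permissions, ACLs, extended attributes and hard links while copying
sudo rsync -aAXH --info=progress2 /mnt/olddisk/home/<user>/ /home/<user>/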

Success!

Linux and a little rant

Allow me to start with the rant bit first. I took my first steps in the world of Linux around 1993, 1994. Back then, Linux was still pretty new and very few distributions existed. Debian had just spawned into life. RedHat wouldn’t come to life until mid 1995. There were no big Internet forums or user communities. Most people using Linux were computer engineers who knew how to build a Linux system from scratch and didn’t take newbie kid questions all too well. N00b questions were typically quickly and swiftly dealt with by a grumpy “RTFM”, and you were left to do a lot of reading and figuring it out yourself. No one to hold your hand and do the hard work for you. Annoying at times, but when you did figure it out it gave you a real sense of accomplishment. This is how I learned my way around Linux.

Fast forward to today, and you have hundreds of distributions. Some are touted as ‘beginner distros’, others as ‘advanced distros’. Let me shoot that down immediately. There are only user-friendly distros and user-unfriendly distros. Some extremely user-unfriendly. Functionally, they can all do the same. There is not some magical advanced functionality in these so-called ‘advanced distros’ not available in the ‘beginner distros’. The only difference between the two is that the ‘beginner’ distros are as easy to initially set up and run as your typical Windows or MacOS install: they don’t offer too many customization choices to confuse you, and they provide you with a default set of applications you may or may not use, to get you started. The ‘advanced’ distros, on the other hand, pretty much require you to be a masochist and think and decide about every step. Once they are up and running, they work pretty much the same and can do the same. There is literally nothing one of these ‘advanced’ distros can do that you cannot also do on one of these ‘beginner’ distros.

This gets me to the users… Some of these ‘advanced’ distro users feel they are sooo smart. They consider everybody who uses a ‘beginner’ distro to be a n00b that should be pitied, as they cannot begin to grasp the advancedness of their knowledge and their choice of distribution. This holds particularly true for a significant portion of the Arch Linux community. They think they are so smart for using a distribution that only installs a basic system and dumps you at the command line so you can use their package manager to install the stuff you actually want on your system. Don’t get me wrong, it is a very valid philosophy that avoids cluttering up your disk with gigabytes of stuff you’re never going to be looking at. But it is not advanced.

Back in the day, you had to download the sources for just about everything, because if you wanted something that wasn’t installed by default, it probably didn’t have a binary available for your system. You’d have to go through the documentation, find and fix all the dependencies yourself (often also by compiling and installing the right versions, in order) and then compile your application from source, hoping it would actually compile without errors you’d then have to debug and fix before trying again. And when it was finally installed, it probably didn’t work until you built your configuration files manually.

Telling a package manager which software you want to install only for the package manager to download it for you from a central repository, fix any dependencies automatically and install your selected package so it works is not anything advanced. It’s just manual work. It’s typing dumb commands on the command prompt. It does not make you some Linux Guru. It doesn’t even give you any usable extra knowledge. If you enjoy doing things that way, all the more power to you. But don’t be some cocky arrogant SOB that belittles others for not wanting to do that. It doesn’t make your install better, just leaner, it doesn’t make you smarter and behaving like that only makes you a prick.

There are a lot of people who have heard enough about Linux to be curious and want to try it, but are permanently put off by the arrogance and belittlement of the Linux community. People should try to remember they too had to learn at some point, and realize there are no stupid questions, only stupid answers. More competition means more and better choices for us as users, but for that to happen new users who are willing to learn something new should be encouraged, not belittled for being new and put down until they give up, never to return.

If you want to try Linux, just do it. Pick a distro you like and stick with it. Don’t be fooled into believing the nonsense about beginner and advanced distributions, thinking at some point you need to upgrade to get a more advanced version. They can all be customized to look like whatever you want, they can all have the same functionality, and there is not a single one that is better than the rest. There is only the personal preference of the users using them. That’s not to say it cannot be fun to do some distro hopping to find the one that really suits you. Just as long as you remember it’s preference, not functionality, that makes the difference.

Accessing WSL on Windows 11

WSL filesystems in Windows Explorer on Windows 11

Moving files around between WSL instances and Windows just got a whole lot easier on Windows 11. All you need to do is scroll down in the left pane until you see Linux, and you can access the filesystem of any WSL instance to copy files to it, grab files from it, or whatever it is you need.

Previously, on Windows 10, you could access the special UNC path \\wsl$ to reach the filesystem of a running WSL instance, but on Windows 11 you can access any WSL instance without needing to have it running.
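
For example, from Explorer’s address bar or a command prompt you can address a distribution directly by its UNC path; the distribution name “Ubuntu”, the user name and the file are just examples, and Windows 11 also understands the newer \\wsl.localhost prefix in addition to \\wsl$:

rem Copy a file out of the Ubuntu WSL instance to the Windows Downloads folder
copy \\wsl$\Ubuntu\home\user\notes.txt %USERPROFILE%\Downloads\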

Windows 11 on an unsupported computer

Yes, you can run the new Windows 11 beta or development versions on unsupported hardware. Yes, you will have heard all about it already. No, not every method is the same or works as well.

That said, most of the methods you see online are, in my opinion, not guaranteed to work or can in some cases be problematic.

The method described below always works, on every computer that can run and install Windows 10. Most likely it will also work with the release version of Windows 11, but we will have to wait to be sure, obviously.

Step 1: Creating a Windows 10 ISO

Download the Media Creation Tool from the Microsoft site, run it and accept the license terms. It will get some things ready, or so it says, so just wait.

When it is done, you are presented with two options:
  • Upgrade this PC
  • Create installation media (USB flash drive, DVD, or ISO file) for another PC

Select the second option and click next.

Select your language, edition and architecture and click next.

Select ISO image and click next. It will prompt you for a filename and location to save to. Then it should start downloading and generating your ISO file. Just let it run its course.

Step 2: Creating a Windows 11 ISO

Go to the UUP Dump website. This site will help you generate scripts to download and compile an ISO for the Windows software of your choice.

Note: These scripts download the necessary files directly from Microsoft, so it is safe and not some unknown source.

As of this writing, the latest version is “Windows 11 Insider Preview 10.0.22000.100 (co_release) amd64“. Click the release of your choice.

Select your language and click next.

Select your edition(s) and click next. I would choose just Windows 11 Pro here, however it is up to you.

Select “Download and convert to ISO” and, under the conversion options below that, check everything except “component cleanup”. Click “Create download package”.

It will create and download a small zip file of a few kilobytes that contains scripts to create your ISO on Windows, MacOS or Linux.

Open a command prompt as Administrator and run the appropriate script. On Windows that would be uup_download_windows.cmd. Running this will take a while. Let it run. Go have a coffee. When it is done, you will have a Windows 11 ISO.

This ISO can be used on a supported machine to upgrade or install as normal.

Step 3: Creating a hybrid W10/W11 ISO

First we need to extract the Windows 10 ISO we created in Step 1. If you have a program like WinRAR, you can just right-click the ISO file and extract it, or you can double-click the ISO file to mount it and copy all files and directories to a place on your hard disk.

When you have the contents of the Windows 10 ISO on your hard disk, navigate to the sources folder in the location you extracted or copied the ISO files to and find the file called install.esd. Delete it.

Now double click the Windows 11 ISO you created and navigate to the sources folder here as well. Again, find the file called install.esd. Copy it, and paste inside the sources folder of your Windows 10 files on your hard disk.

Now you basically have a Windows 10 DVD structure on your hard disk with Windows 11 content. We can now convert this file structure into a new, hybrid W10/W11 ISO.

First, you will need to download and install ImgBurn. Start the program and select Create image file from files/folders. It’s the middle option on the right side.

Select your source by selecting the location of your extracted or copied Windows 10 ISO on your hard disk where we just copied the Windows 11 install.esd to.

Select a destination where you want to save your newly created hybrid ISO to.

Under the advanced tab on the right side of the window, select “Bootable Disc” and make sure to check “Make Image Bootable”.

For the Boot image, browse to the boot folder where you extracted your ISO on your hard disk and select the file etfsboot.com.

Lastly, under “Sectors To Load”, change the default of 4 to 8.

Now click the big button on the left bottom to create your new Hybrid ISO. This will be very quick.

Final words

What you have just created is a Windows 10 install disk that contains the Windows 11 files. Because of this, no pre-install checks on TPM, Secure Boot or CPU are done, and Windows 11 will be installed just like you are used to for Windows 10. After the install is finished, Windows does not do any post-install checks; it just boots and runs.

However, if you are like me and don’t meet most of the hardware requirements, you cannot turn on the Windows Insider option to download and install updates as they become available. To fix this, visit this GitHub site to download a small script that enables the Insider channel of your choice (mine is Beta) without even having to log in.

Monitoring your network or homelab using Zabbix and Grafana

For the longest time, I have been monitoring my network and homelab using Observium. This worked, and still works, very well. Observium is very good at what it does. However, there are a couple of things that do not work so well for me with Observium:

  • Observium does not let me add applications to monitor very easily or at all
  • Observium is limited to what can be offered through SNMP
  • Observium is not open source and as such it cannot be modified or changed to your needs
  • Observium is not an application I have come across in my professional life, so knowing how it works does not help me professionally.

That last bit is obviously not a necessity, however I do feel it’s always a nice thing to be able to apply things you have learned in your homelab into your professional environment.

After lots of investigation and trial & error, I have settled on Zabbix for my monitoring needs. Zabbix is an open source product that is used a lot in corporate environments, and it is very flexible and extensible. Obviously you could add or change code since it is open source, but you don’t really need to. As Zabbix is template driven, its functionality can be extended by adding templates, and there is a plethora of templates available, both on the Zabbix site itself (Zabbix Share) and in places like GitHub. Also, Zabbix can use an agent installed on the system to collect the data you want to monitor, or you can use SNMP if you can’t or don’t want to install an agent on a device (for instance a network router).

The one thing I do not like about Zabbix is that the historic view is not easy to get to, nor displayed as prettily in a dashboard-like view as it is in Observium. However, that is not a blocker, as we can use Grafana, another open source tool that is used quite a bit in Corporate Land, to create dashboards and display relevant historic data.

Installing Zabbix

Installing Zabbix is no more complicated than installing most other software on Linux. I installed Zabbix on my Raspberry Pi 2B and the short version is this:

a. Install the Zabbix repository:
# wget https://repo.zabbix.com/zabbix/5.4/raspbian/pool/main/z/zabbix-release/zabbix-release_5.4-1+debian10_all.deb
# dpkg -i zabbix-release_5.4-1+debian10_all.deb
# apt update

b. Install the Zabbix server, frontend and agent on the server machine
# apt install zabbix-server-mysql zabbix-frontend-php zabbix-apache-conf zabbix-sql-scripts zabbix-agent

c. Create the database for Zabbix
# mysql -uroot -p
password
mysql> create database zabbix character set utf8 collate utf8_bin;
mysql> create user zabbix@localhost identified by 'password';
mysql> grant all privileges on zabbix.* to zabbix@localhost;
mysql> quit;

And import the database schema
# zcat /usr/share/doc/zabbix-sql-scripts/mysql/create.sql.gz | mysql -uzabbix -p zabbix
If your database does not reside on your Zabbix server, you can add -h <server> to the above to make sure you are connecting to the remote database server. Also, when creating the user for a remote database, make sure the Zabbix user is allowed to connect to the database over the network.

d. Configure the Zabbix server to use the database you have created by filling out the relevant fields in /etc/zabbix/zabbix_server.conf
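
The relevant fields are the database connection settings; a minimal sketch matching the database created above (adjust the host and password to your setup):

# /etc/zabbix/zabbix_server.conf
DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword=password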

e. Start your brand-new Zabbix server:
# systemctl restart zabbix-server zabbix-agent apache2
# systemctl enable zabbix-server zabbix-agent apache2

f. You are done. You can now browse to your server’s frontend at http://<server>/zabbix, log in with user Admin and password zabbix, and configure everything else using the web frontend.

Installing Grafana

Installing Grafana is equally simple.
a. Install the pre-requisites and add the repository key
sudo apt-get install -y apt-transport-https
sudo apt-get install -y software-properties-common wget
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -

b. Add the repository
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list

c. Install Grafana
apt update
apt install grafana

d. Then it is just a matter of starting the Grafana server and making sure it starts at boot.
systemctl daemon-reload
systemctl start grafana-server
systemctl status grafana-server
systemctl enable grafana-server

That’s it! You can now log in at http://<server>:3000 using the default username/password of admin/admin.

Note: if you want to apply specific configurations, for instance a database backend (mysql, postgres, sqlite3) beyond the default, you should refer to the Grafana manuals, as that would be a bit beyond the scope of this page.

Tying things together…

First you will need to install the Zabbix app into Grafana. We can do this using the commandline interface for Grafana:

grafana-cli plugins install alexanderzobnin-zabbix-app
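
After installing the plugin, Grafana needs a restart to pick it up, and the Zabbix app still has to be enabled in the Grafana UI (under Configuration -> Plugins -> Zabbix); the restart command assumes the same systemd unit name used above:

sudo systemctl restart grafana-server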

After you do this, you can configure the Zabbix datasource. Go to Configuration -> Data sources and click “add data source”. Scroll down to the bottom, and you will see one called ‘Zabbix’.

Fill in the details of your Zabbix installation, and make sure you append api_jsonrpc.php to the end of your URL. Check ‘With credentials’ under Auth, and under Zabbix API details add your username and password. Click “Save & test” and if all is OK, it will give you a green checkmark saying the data source was updated.

You are now ready to add a dashboard to Grafana and start monitoring your Zabbix data. I use this dashboard from Paulo Paim. You can add it by going to Dashboards -> Manage and clicking the Import button. In the box saying import from grafana.com, type the ID of the dashboard, in this case 5363, and click Load.

That’s it!

Links:
Zabbix manual
Grafana documentation
Zabbix templates and add-ons
Grafana dashboards and plugins