Noteworthy Debian Trixie/Sid changes July 8 2023

Here is a late quick overview of important changes in Debian Sid during the last two weeks.

glibc was updated from version 2.36 to version 2.37. This version mostly contains bug fixes and minor improvements. An important regression introduced by this update was fixed in the Debian package 2.37-4, which is not yet in testing/trixie, so you might want to upgrade immediately to the sid version of glibc if you are on testing.
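
If you have the sid repositories configured (for example pinned at a low priority, as in the apt preferences setup described further down this page), pulling in just the fixed glibc could look like this (a sketch; libc6 is the binary package containing glibc):

# apt install -t unstable libc6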

Linux 6.3.11 fixes the so-called StackRot security vulnerability (CVE-2023-3269). An exploit will be made public soon and will allow any local user to gain root privileges, so make sure you are running this kernel on all your systems. This kernel also re-enables CONFIG_VIRTIO_MEM, which got disabled by mistake in Debian 12 Bookworm. Both the StackRot vulnerability and the CONFIG_VIRTIO_MEM regression are now also fixed in the latest kernel release in bookworm-security.
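
After installing the new kernel and rebooting, you can quickly verify what you are actually running:

$ uname -r

On sid this should report 6.3.11 or later; on Bookworm the fix arrived in the 6.1 kernel from bookworm-security.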

Server software

PowerDNS Authoritative Server 4.8.0 adds Lightning Stream support, while PowerDNS Recursor 4.9.0 brings some performance improvements, amongst other things.

Cyrus IMAP 3.8.0 adds support for some IMAP RFCs and implements new JMAP features.

Nginx 1.24.0 enables TLSv1.3 by default.

Debian’s Slurm package was updated to version 23.02.3.

Desktop software

Firefox 115 is now available in sid. Keep in mind that the firefox package never moves to testing and stable; only the firefox-esr package does. Firefox 115 will become the future Firefox ESR release though. The most important change in Firefox 115 is that it enables hardware video decoding with VA-API for Intel GPUs on Linux by default.

Remmina in sid was updated from 1.4.29 to 1.4.31. This version brings back the remmina-plugin-spice package, which was also disabled in Debian 12.

Digikam 8.0.0 has improved file format support, a new OCR tool and other improvements.

The Kdenlive video editor version 23.04 adds nested timelines, new effects and transitions, improvements to subtitle handling, and integrates the Whisper speech recognition system.

Shotwell 0.32 also brings improved file format support and various other improvements.

Phosh, the GNOME Phone Shell, was updated to version 0.29. Improvements include call notifications on the lock screen and audio device selection in the settings.

Noteworthy Debian Trixie/Sid changes June 24 2023

As expected, the second week of Trixie development was a lot quieter than the first week.

KDE Frameworks was updated from version 5.103 to 5.107. While this brings few visible changes, these libraries lay the foundations for improvements and bug fixes in KDE applications.

KDE Gear apps Neochat (a Matrix client), Elisa (a music player), Dragon Player (a movie player), Filelight (a disk space visualizer) and Spectacle (a screenshot application) were updated to the 23.04 release. Spectacle has gotten a complete redesign of the user interface and supports screen recording on Wayland.

The Qt-based display manager sddm version 0.20 now has experimental Wayland support and enables HiDPI scaling by default.

The AV1 encoder and decoder svt-av1 was updated to version 1.6.0, which once again brings performance and quality improvements.

Other upgraded packages in sid include Homebank 5.6.5, Deluge 2.1, GNOME Music 44 and many others.

Noteworthy Debian Trixie/Sid changes week 1 (June 11 – June 17 2023)

A long time ago, I used to regularly post an overview of noteworthy changes in the Mandriva development version. For years now, though, I have been using Debian testing. With the release of Debian 12 Bookworm, I thought it could be interesting to keep track of noteworthy changes in the upcoming Debian version, Trixie.

I will be tracking sid through the debian-devel-changes mailing list. Usually about 10 days after a package enters sid, it should move to testing, at least if there are no important bugs in the package. The selection of packages I mention here is very personal. I will try to cover important changes for both desktop and server packages, but this list will never be complete. If you noticed an interesting change not mentioned, feel free to add a comment to this article.

I’m not sure whether I will make this kind of post regularly without interruption, but let’s see where this goes.

The first week of development saw a huge amount of packages updated to the latest upstream versions. Some of these had already been available in Experimental for some time. Let's dive in.

Kernel, hardware support, low-level libraries

The Linux kernel was updated to the 6.3 series, coming from 6.1 in Bookworm. I refer to kernelnewbies.org for a complete overview of what's new in Linux 6.2 and Linux 6.3, but I can mention BTRFS performance improvements in both versions (including discard=async being the default on SSDs with TRIM support), performance improvements if you are using an Intel Skylake CPU and add retbleed=stuff to the kernel options, and the usual driver improvements which improve hardware support, for example for the current Intel Arc GPUs. If you have an AMD processor with at least the Zen 2 microarchitecture, you can enable the new amd_pstate_epp frequency scaling driver by adding the kernel option amd_pstate=active.
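
On Debian, such a kernel option is typically added by editing GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerating the GRUB configuration (a sketch; keep whatever options you already have):

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_pstate=active"

# update-grub

After a reboot you can check which frequency scaling driver is in use:

$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver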

btrfs-progs has been updated to version 6.3.1. The major change in the 6.3 series is that block-group-tree is out of experimental mode. This will reduce the mount time of BTRFS file systems. You can enable this on an existing file system with the command

btrfstune --convert-to-block-group-tree <device>

The developers warn users to be careful, because there might still be bugs.
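
To check afterwards whether the feature is active on a file system, you can inspect the superblock; the flag should show up in the incompat_flags field as BLOCK_GROUP_TREE (a sketch, replace <device> as above):

# btrfs inspect-internal dump-super <device> | grep incompat_flags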

The Mesa 3D drivers were upgraded from version 22.3.6 to 23.1.2. You will want to upgrade to this version if you are using an Intel Arc GPU because there have been many bug fixes. Also new in Mesa 23.1 is OpenCL support for AMD GPUs using rusticl.

power-profiles-daemon version 0.13 entered Debian Sid. It adds support for the amd_pstate_epp driver, which can be activated in Linux 6.3.

The LLVM-based Fortran compiler Flang is now available in Debian as the package flang-15. LLVM 16 is available in sid. Clang 16 and libc++ 16 are only available in experimental at this time, and version 15 is still the default version in sid.

Server and virtualization

Samba 4.18.3 brings some performance improvements relative to version 4.17.8 in Bookworm.

Qemu was updated to 8.0.2. The 8 series brings various improvements, but maybe the most important thing to mention is that virtiofsd, a daemon which allows you to share directories on the host with guests, is not included in the qemu package any more. If you use this, you will need to install the new virtiofsd package which contains a new implementation in Rust.

Desktop

LibreOffice is now at version 7.5.4. The 7.5 series brings improved dark mode support, new application icons, nicer default table styles in Draw and Impress, and various other improvements. See the release notes and the New Features in LibreOffice 7.5 video for more information.

This week we saw the first GNOME 44 packages enter sid. gnome-backgrounds 44 brings you new desktop wallpapers and Evolution 3.48 brings layout improvements. If you don't like the headerbar layout, you can disable it and switch back to the traditional toolbar.

New in Debian is the gdm-settings package. It lets you configure the GDM login manager and change its appearance through a user-friendly interface.

NetworkManager-openconnect 1.2.10 finally adds support for Single Sign-On implementations using SAML on Cisco AnyConnect and Palo Alto GlobalProtect VPNs. Unfortunately it does not seem to work for Pulse/Juniper. I have opened an issue for that, and in the meantime I use openconnect-pulse-gui.

The Transmission BitTorrent client version 4 has moved from C to C++, uses less CPU and memory, and supports the BitTorrent v2 protocol, amongst other improvements.

The first KDE Gear 23.04 applications are now being uploaded to sid. Now in the repositories is the KDE Mastodon client Tokodon. Gwenview (supports pinch gestures to zoom in Wayland mode), Ghostwriter (automatic language detection for the spellchecker) and the Falkon web browser (dark colour scheme support) were updated to 23.04.

Tellico, the application which helps you keep track of your music, movie and other collections, adds support for new data sources, improves support for existing ones, and can generate reports with an image grid.

The dav1d AV1 video decoder was updated from 1.0 to version 1.2.1. This version brings many performance improvements thanks to new SIMD code. The svt-av1 encoder and decoder was also updated, to version 1.5.0, likewise adding some optimizations.

There are plans to remove the old SDL 1.2 packages, replacing them with SDL12-compat (Debian package libsdl1.2-compat-shim), which implements SDL 1.2 on top of SDL 2. Version 1.64, which entered sid, added compatibility with some old games, such as Sid Meier's Alpha Centauri.

Science, education and technical tools

GNU Octave 8.2 brings improved Matlab compatibility, dark mode support in the GUI and various other improvements.

The R statistical computing language (package r-base) 4.3.1 (was 4.2.2 in Bookworm) brings many new features.

Labplot 2.10 comes with many new features, improvements and performance optimizations in different areas, as well as with support for new data formats and visualization types.

People who are into geographic information systems will be happy with the QGIS update. Version 3.28 introduces many improvements; look at the changelog for more details.

The electronics design automation suite KiCad has been updated to version 7. I don't know anything about this kind of software, but the release announcement lists a large number of major improvements.

Other improvements

There are too many changes to list here in detail. I'll mention updated PipeWire 0.3.71, WirePlumber 0.4.14, GStreamer 1.22.3, Opus 1.4, Gajim 1.8.0, OpenJDK 20, Phosh 0.28.0 and much more.

Conclusion

The first week of Trixie development saw a huge amount of software enter Debian sid. This is of course due to the backlog of new upstream versions which could not be uploaded during the Bookworm freeze and are now all trickling in. Some of these packages were already in Experimental.

My personal favourites are Linux 6.3, which now allows me to use the amd_pstate_epp driver, and Evolution 3.48, which has some UI improvements I like a lot.

Now that the first flood of new packages has arrived, things will probably calm down a bit, also because of the upcoming summer holidays in the northern hemisphere. But I guess we will see more of GNOME 44 and KDE Gear 23.04 entering sid soon.

Upgrading from Debian 11 Bullseye to Debian 12 Bookworm

Debian 12 Bookworm will be released very soon, on June 10 2023. The Debian Testing tree is now very close to the final release, so now is a good moment to start testing Bookworm if you have not done so already. I have already upgraded some of my server systems to Bookworm and I'm also running it on all my desktop systems, so here are some notes on the upgrade process. Keep in mind that upgrading to Bookworm is only supported if you are running Bullseye. If you are running an older version of Debian (Buster), you will need to upgrade to Bullseye first and after that upgrade to Bookworm.

First of all, start by reading the release notes: they contain a very detailed howto guide describing all steps to upgrade your system to Bookworm, and they list all major changes and important things to know before you upgrade.

Next, check which packages you have installed that do not come from the official Debian repositories with this command:

# apt list '?narrow(?installed, ?not(?origin(Debian)))'

Because these are not official Debian packages, Debian developers cannot guarantee that they will work correctly and will not conflict or cause compatibility problems when upgrading your system. For that reason, you should seriously consider uninstalling them during the upgrade process.

On one system I had a locally built snuffleupagus package installed. This package was built against a particular PHP version and because a newer Debian release will also include a newer PHP version, this could break things. So I removed this package:

# apt remove snuffleupagus

Then you need to verify whether you have put any packages on hold. Packages on hold will never be upgraded, so this can prevent a correct upgrade. Check all held packages with this command:

# apt-mark showhold

You can unhold them with this command:

# apt-mark unhold packagename

Then we need to adapt our apt sources.list and preferences.

You should have this in /etc/apt/sources.list (or in a .list file in /etc/apt/sources.list.d):

deb [arch=amd64,i386] http://deb.debian.org/debian/ bookworm main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ bookworm main contrib non-free non-free-firmware

deb [arch=amd64,i386] http://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware
deb-src http://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware

deb [arch=amd64,i386] http://deb.debian.org/debian bookworm-backports main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian bookworm-backports main contrib non-free non-free-firmware

deb [arch=amd64,i386] http://deb.debian.org/debian/ bookworm-proposed-updates main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ bookworm-proposed-updates main contrib non-free non-free-firmware

Note the new non-free-firmware repository: non-free firmware used to be included in the non-free repository, but it now lives in a separate repository, so you will need to add that.

Then we need to set up the priorities of the different repositories in /etc/apt/preferences (or a .pref file in /etc/apt/preferences.d):

Package: *
Pin: release n=bookworm
Pin-Priority: 810

Package: *
Pin: release n=bookworm-security
Pin-Priority: 810

Package: *
Pin: release n=bookworm-proposed-updates
Pin-Priority: 809

Package: *
Pin: release n=bookworm-backports
Pin-Priority: 808


Package: *
Pin: release n=bullseye
Pin-Priority: 710

Package: *
Pin: release n=bullseye-security
Pin-Priority: 710

Package: *
Pin: release n=bullseye-proposed-updates
Pin-Priority: 709

Package: *
Pin: release n=bullseye-backports
Pin-Priority: 708


Package: *
Pin: release n=trixie
Pin-Priority: 310


Package: *
Pin: release a=unstable
Pin-Priority: 200


Package: *
Pin: release a=experimental
Pin-Priority: 160

This gives the highest priority to all packages in Bookworm and its security updates, with a lower priority for the Bookworm proposed updates and then the Bookworm backports. I added Bullseye in case you still need the Bullseye repositories for some reason. I also added Trixie (the code name for what will become testing when Bookworm is released), sid (unstable) and experimental at the lowest priorities. Of course you can remove them from your preferences file if you have not set up these repositories.
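
You can verify that apt interprets your preferences the way you intended with:

$ apt-cache policy

This lists all configured repositories together with the pin priority assigned to each of them.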

I strongly recommend installing apt-listchanges, because it will give you information about important changes which might affect you before packages are upgraded:

# apt install apt-listchanges

I upgrade dpkg and apt first: I personally prefer to take advantage of any new improvements and bug fixes during the upgrade process.

# apt install -t bookworm apt dpkg

In Bookworm the systemd-resolved service is now in a separate package. If you are currently using systemd-resolved, this can cause failures in DNS resolution. Before upgrading, make sure you know the addresses of your DNS servers, so that you can set them up manually if required. You can run the resolvectl command to find them. If DNS resolution breaks during the upgrade process later on, you can add them to /etc/resolv.conf manually to fix the problem. But I prefer to immediately install the new systemd-resolved package before upgrading everything else, to take care of this problem:

# apt install -t bookworm systemd-resolved
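
For reference, noting down your current DNS servers beforehand is as simple as:

$ resolvectl status

Should DNS resolution still break during the upgrade, /etc/resolv.conf can be filled in manually with the addresses you noted down (example addresses shown here):

nameserver 192.168.10.1
nameserver 9.9.9.9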

Then we can upgrade all packages which can be upgraded without installing new packages:

# apt upgrade -t bookworm --without-new-pkgs

Once that’s done we proceed with the upgrade of all remaining packages, which will also install new dependencies:

# apt full-upgrade

During these two steps pay attention to which packages are going to be removed. It's expected that old unused libraries and other packages (old PHP and Perl versions for example) will be removed, but you might want to check this.

When the upgrade is done, I remove all unneeded packages with this command:

# apt autoremove --purge

Then run this command to remove all library packages which have no other packages depending on them any more:

# deborphan | xargs dpkg --purge

A package which often stays behind is libssl1.1. Normally you don’t need it any more so you can remove it safely:

# apt remove libssl1.1

Finally I also prefer to remove rsyslog. It is no longer installed by default on Debian Bookworm, everything is already logged to the systemd journal, and I don't want any double logging.

# apt remove rsyslog

Then personally I also install dbus-broker on Debian Bookworm. It replaces the traditional dbus implementation and is supposed to be more performant.

# apt install dbus-broker

I always recommend verifying that the metapackage linux-image-amd64 is installed, so that you are really running the latest kernel version.

# apt install linux-image-amd64

After upgrading all packages, reboot your system.

Debian 12 Bookworm and OpenLDAP

One major change in Debian 12 Bookworm is that it ships with OpenLDAP 2.5, which has removed the BDB and HDB back-ends. If your LDAP directory is still using one of these back-ends, you will have to convert it to the new MDB back-end. There are some instructions in /usr/share/doc/slapd/README.Debian.gz and I might write a post about this here in the future. In any case, make sure you have recent backups of your LDAP directory in the form of LDIF generated with slapcat.
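
A minimal sketch of such a backup, assuming your directory is the first database (adjust the database number to your setup):

# slapcat -n 1 -l backup.ldif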

Is Debian Bookworm stable?

I’m permanently running Debian Testing on my laptop, and now I have also installed Bookworm on some servers. I strongly recommend using Testing for all desktop usage (even after Bookworm has been released) and starting with Bookworm on any new server installations. For upgrades of critical and more complicated server systems, I generally recommend waiting until at least the first point release.

At the moment I can think of two problems I am encountering on my systems. On my HP Elitebook 845 G8, when suspending the system (s2idle), Linux fails to read the current time from the RTC, resulting in the clock jumping years into the future. After resume you can fix the clock by restarting systemd-timesyncd, but the uptime command will continue to give wrong output. I'm currently testing version 6.3 of the Linux kernel from experimental to see whether this bug also happens with that version.

Another problem is that in Debian 12 Bookworm the SPICE plugin of Remmina is disabled. There is a work-around using packages from sid and experimental: upgrade to libspice-client-glib-2.0-8 version 0.42-2, which is currently in sid, and then install remmina and remmina-plugin-spice from experimental. Maybe this problem will be fixed in a point release for Debian Bookworm, but that remains to be seen.
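
With sid and experimental configured at a low pin priority as described above, the work-around boils down to something like this (a sketch using the package names mentioned here):

# apt install -t sid libspice-client-glib-2.0-8
# apt install -t experimental remmina remmina-plugin-spice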

The security risks of Flathub

This week a big refresh of the Flathub website came online and there was quite some buzz around this in the Linux world. However, this same week I noticed a worrying thing about Flathub: it is distributing several applications with known security problems. I am really worried about this, because many people will unknowingly install these flatpaks, thinking that they are safe because they installed them from a reliable source.

The most striking example of this is Adobe Reader. This application was last updated by Adobe in 2013, so it is 10 years old; Adobe has not supported this software since 26 June 2013. While the GitHub readme of the project mentions that this application is not supported any more, has known security vulnerabilities and is unstable, none of this is mentioned on the Flathub page itself. This means that many people who stumble upon this page will install this flatpak without being aware of these risks. At the moment of writing, Adobe Reader is listed third on the Flathub homepage, because it's a new package, and after a couple of days it already had 1666 installations. I wonder how many of these people are aware of the fact that they are installing a no longer supported application with known security bugs.

Unfortunately, Adobe Reader is not the only example. Let's take a look at Visual Studio Code. I see three different variants on Flathub: two open-source builds, Code – OSS and VSCodium, and the proprietary Microsoft build Visual Studio Code. Of these three, only one is up to date at the time of writing: VSCodium. Version 1.77.2 fixed a security problem, but neither the Code – OSS nor the Visual Studio Code flatpak has this version. The latter is the most popular one, with 1.3 million installations.

Fortunately, security-sensitive flatpaks like Firefox, Chromium, Brave and Thunderbird are up to date, so it looks like this is not a bigger, more general problem. Still, I think it's unacceptable that several packages of vulnerable software are offered in the default Flathub repository.

But flatpak packages run in a sandbox, so the security risk is only theoretical, isn't it? Sorry, that's not a serious way of dealing with security. You just need a security vulnerability in Flatpak or in the Linux kernel and your software can escape the sandbox. At least two sandbox escape bugs have been found in Flatpak in the past (CVE-2021-21261 and CVE-2019-10063). For sure more of these bugs will be discovered in the future, especially if Flatpak becomes more popular. Combine this with a vulnerability in the packaged software, such as the Adobe Reader or Visual Studio Code flatpaks, and opening a file downloaded from the Internet can be enough to get your system compromised.

In practice, we see such sandbox escape bugs being exploited in Chromium/Google Chrome: it has a built-in sandbox to protect the system from security vulnerabilities, yet it often has updates for zero-day vulnerabilities. In 2023 alone, two different security fixes have already been published for vulnerabilities that were being exploited in the wild, despite the sandbox. Sandbox escape is explicitly mentioned in the security advisory from a few days ago. Not relying on a single layer of defense against security breaches is called defense in depth, and it is simply an essential practice if you care about security.

A PDF viewer is definitely at risk because you often open files downloaded from the Internet with it. But even though a programming editor/lightweight IDE like Visual Studio Code does not appear to be the most security-sensitive application, make no mistake: these can also be targeted by people with bad intentions. I'm thinking of the case uncovered two years ago, where security researchers (!) were successfully targeted by North Korean hackers who abused a feature in Microsoft's fully fledged Visual Studio IDE. A security vulnerability in your IDE will only make such abuse easier. Think also of teachers who need to open (untrusted) code from students, who are at risk when their IDE has known security vulnerabilities.

One of the new features of the new website is that flatpaks built by the original developers of the software are now marked as verified. But I don't think that's very useful, because it does not say anything about how well a flatpak is maintained and whether it has known security problems. Software which was not packaged by the original author but which is well maintained is far preferable to software which was packaged by its original developer who has since abandoned maintenance. Compare this to Linux distributions: the software is usually not packaged by the original developer, but by the distribution's maintainers. That does not make these packages unreliable.

Windows actually has many more security features enabled by default than Linux: files which originate from the Internet are marked as such (mark-of-the-web) and these files then undergo extra security protections by the OS and by applications (Protected View in Office, for example), there is an integrated malware scanner (Defender), Windows has a firewall enabled by default and it does automatic updates. Many of these things are not the case on Linux. Yet we hear of ransomware attacks on Windows users on a daily basis. It should make us realize that Linux will not be immune to these problems. The first thing we should do is at least not run software with known security problems.

One thing that has to be done is to add a warning in bold to the description on Flathub, explaining that the software has known security vulnerabilities and clearly discouraging users from installing it. But I think that is not enough. People will just search for PDF, will recognize Adobe and won't even read the description because they know the Adobe PDF reader. And then they will be surprised to discover during usage that the software is unstable and insecure. The same is true for Visual Studio Code: most people installing the flatpak simply won't be aware that the packaged version has known vulnerabilities.

I think there is only one reasonable solution: these software packages should be moved to a separate repository which is not enabled by default. This repository should be called "unsupported". If people make the effort of enabling this repository, they should clearly get an extra warning that the software can be unstable and insecure and that they cannot expect any support. When searching, people should not get such software at the top, mixed in between well-maintained software; it should be shown in a separate unsupported category at the bottom. If we don't do these things, I'm afraid security incidents will happen one day, possibly destroying all trust in Flathub and Linux in general. And that is something we should really avoid.

Alternatives for the net-tools utilities

Many modern distributions, like for example the upcoming Debian 12 Bookworm, do not install the package net-tools by default. This package contains popular utilities like ifconfig, route, netstat, arp and mii-tool. In this post I give alternatives for these utilities. You can of course just install the net-tools package if you prefer to keep using these commands.
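
Installing the classic tools is a single command:

# apt install net-tools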

ifconfig

To see the current network configuration:

$ ip addr

To see the current configuration for one specific interface, for example enp25s0:

$ ip addr show enp25s0

To add a static IP address to a network interface:

$ ip addr add 192.168.10.2/24 dev enp25s0

Replace add by del to remove an IP address.

route

To see the current route table:

$ ip route

To set the default gateway:

$ ip route add default via 192.168.10.1 dev enp25s0
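
A handy extra which the old route command does not offer is asking the kernel which route it would pick for a given destination (example address):

$ ip route get 192.168.10.1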

netstat

The ss command lists all open sockets. Some interesting options:

-a  show both open and listening sockets
-l  only show listening sockets
-p  show the process using the socket
-t  show only TCP sockets
-u  show only UDP sockets
-r  resolve all IP addresses

To see all open and listening sockets on the system:

$ ss -a

To see all listening TCP and UDP ports:

$ ss -plut
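
ss also accepts the familiar netstat-style flag combination, so old muscle memory keeps working:

$ ss -tulpn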

arp

Display the contents of the ARP table:

$ ip neigh
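
To delete a stale entry from the neighbour table (example address and interface, adjust to your network):

$ ip neigh del 192.168.10.1 dev enp25s0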

mii-tool

Show the status of an Ethernet interface:

$ ethtool enp25s0

The rise of Mastodon and the fall of Twitter

Since Elon Musk has taken over Twitter and fired thousands of employees, the Mastodon social media platform has seen a huge influx of new users: more than 1 million people have joined Mastodon since Musk took over Twitter. Stephen Fry is one of the most famous people who made the move. People cite fear that Twitter will become even more toxic because of the more tolerant moderation announced by Musk. Indeed, many newcomers on Mastodon praise the more relaxed atmosphere, with fewer attacks and controversies.

All the general media have written articles about Mastodon and its increasing popularity (CNN, BBC, Le Monde, Tagesschau, VRTNws,…), fueling the stream of people creating a Mastodon account even more.

For many people this is probably the first time that they realize the risks of one huge single commercial platform, and they now discover the advantages of an independent open network based on free and open-source software.

Mastodon dealing with the growth

In the meantime, Mastodon server administrators are scrambling to increase the capacity of their servers. Fortunately, new administrators have also stepped in and set up new servers to deal with the growth of the network. Even some organizations, like MIT, have already set up their own Mastodon server. There have been temporary slowdowns and outages of Mastodon servers, but all in all no fundamental problems. It looks like the network can deal with the huge influx and scales well on a technical level.

The question remains whether the social network will be able to prevent the problems that plagued Twitter and which at times made it an unfriendly and hostile environment where fake news and attacks were common. Mastodon has a few features which should help prevent it from becoming a toxic environment. First of all, the timeline in Mastodon is purely chronological. Unlike Twitter, messages from people you are not following but which are liked by others will not be pushed into your timeline by some vague algorithm. Only the people you follow determine what you get to see in your timeline, so you have much more control over this than on Twitter. Mastodon does not have a quote feature like Twitter: the Mastodon creator deliberately decided not to implement one, because it stimulates toxic behaviour. And Mastodon does not prominently show in the timeline how many times a toot has been boosted, marked as favourite or answered, which reduces the drive to maximize boosts, favourites and answers. By default, search engines will not index your messages on Mastodon, making it much harder for people with bad intentions to search your old messages. The usage of content warnings is pretty common on Mastodon and helps hide controversial content from people not interested in it.

Will this be enough to prevent trolls from dominating the platform? I dare to doubt it. Up to now, very few news media and politicians are present on Mastodon. Messages from these accounts often trigger negative reactions and toxic behaviour. I can imagine things will become more difficult once more politicians, their fan base and news media start posting on Mastodon.

Much will depend on the instance moderators and also on the choices all people on Mastodon make themselves. First of all, every instance has its own set of rules and it's up to the moderators to enforce them. For example, see the rules for some Mastodon servers: Fosstodon, mastodon.social, mastodon.art, mastodon-belgium.be. Some instances have stricter rules, while others have vaguer rules. I think in the future we will see some instances adapting their rules in order to deal with certain problematic situations. People joining Mastodon should take the rules into account when choosing an instance: if you want to minimize the risk of trolling, choose an instance which has an explicit zero-tolerance policy against such behaviour. If you are not happy with how your instance deals with annoying behaviour, you can always move to another Mastodon instance.

Then of course the question remains whether moderators will be able to enforce the policy in practice. Will there be enough moderators to do all the work and will they dare to intervene and block people? And what will they do with people who stay in the grey zone, who strictly speaking don’t break the rules but create a negative atmosphere?

Server moderators are only responsible for their own server, so trolling can still happen from other instances. However, the administrator of an instance has the option to completely block other instances. On the web pages with the server rules mentioned above, you can also see which servers are blocked by this instance and for which reason. So there's again a big responsibility here for the instance admins: will they dare to completely block instances which have a lax policy and are too tolerant of abuse?

But the administrators and moderators will not be able to solve all problems. A huge responsibility also falls on the shoulders of the users themselves. How will people react when politicians start launching controversial ideas on Mastodon? If people start fueling controversial discussions, for example by posting screenshots of controversial ideas together with indignant remarks, this will only spread the message to others, including to people who don't want these messages and will only be annoyed by them. People on Mastodon do have the tools to protect themselves against such annoyances: just like on Twitter, you can mute and block people, and there are also filter options which allow you to hide all posts containing certain words.

It will be up to all the people on Mastodon to use all these options wisely to prevent heated discussions, attacks and annoyances leading to a negative atmosphere like on Twitter.

The future of Twitter

In the meantime, some people on Twitter closed their account after they became active on Mastodon. Several companies decided to pause advertising on Twitter because of uncertainty and fear that Twitter would become a platform with problematic content due to Musk scaling back content moderation. Twitter Blue, a plan where anyone could get a verified check mark for $8 per month, caused a dumpster fire. It led to a flood of fake verified accounts impersonating famous people (including Musk), politicians and companies. This even had consequences outside of Twitter: the stock of the pharmaceutical company Eli Lilly and Company went down after a fake account posted on Twitter that insulin would become free, and the company lost 15 billion dollars in valuation. Because of all the confusion, Twitter Blue has been suspended. Some people complain about an increase in spam on Twitter. Thousands of people have been fired and the top executives responsible for security and moderation on the platform have resigned. It is clear that Twitter is in a deep crisis. The current mess will only make companies doubt even more whether to spend money on the platform.

It’s hard to say how this will end. Twitter has huge problems, but in spite of this, I think that it is too big to fail. At least for now. What’s for sure is that many people now have a real choice: a proprietary platform which is ruled by someone whose impulsiveness causes havoc and which is at times unfriendly and hostile, or an open platform in hands of the community and with stricter moderation and a more friendly atmosphere. Mastodon is growing and it looks like the growth is sustainable, even though currently it is still much smaller than Twitter.

I would not be surprised if somewhere in the future Twitter implemented the ActivityPub protocol and so became part of the Fediverse. That way it could give companies access to the people who fled to Mastodon. Many companies, however, have invested a lot in Twitter and use it to promote content from their own websites or for customer support. For this they often use specialized software, often integrated into their own website CMS. Unless these software tools are adapted for Mastodon, it will be hard for such companies to move to Mastodon. So for big brands, Twitter definitely still has an advantage. The question, however, is whether companies will still have faith in Twitter.

How to follow this blog

Now that Mastodon and the Fediverse are getting a big influx of new users, I thought it would be worthwhile mentioning that you can follow this blog on the Fediverse. Just search for @frederik on your favourite Fediverse server and you should be able to follow this blog.

If you are not a fan of social media, you can subscribe to the RSS feed of this site: https://blog.frehi.be/feed/. You will need an RSS aggregator on your system, such as Thunderbird, NewsFlash (GNOME), Akregator (KDE), Feeder (Android) or NetNewsWire (macOS, iOS). There are also web-based solutions available: web-based feed aggregators which you can install on your own server include TT-RSS and Nextcloud News. Feedly is a popular proprietary web-based solution, but as usual the problem is that this limits your privacy and makes you dependent on a commercial party.

Increasing PHP security with Snuffleupagus

In a previous article, I discussed how to set up ModSecurity with the Core Rule Set on Debian. This can be considered a first line of defense against malicious HTTP traffic. In a defense in depth strategy, we of course want to add additional layers of protection to our web servers. One such layer is Snuffleupagus, a PHP module which protects your web applications against various attacks. Some of the hardening features it offers are encryption of cookies, disabling XML External Entity (XXE) processing, a whitelist or blacklist for the functions which can be used in eval(), and the possibility to selectively disable PHP functions with specific arguments (virtual patching).

Installing Snuffleupagus on Debian

Unfortunately there is no package for Snuffleupagus included in Debian, but it is not too difficult to build one yourself:

$ apt install php-dev
$ mkdir snuffleupagus
$ cd snuffleupagus
$ git clone https://github.com/jvoisin/snuffleupagus
$ cd snuffleupagus
$ make debian

This will build the latest development code from the master branch. If you want to build the latest stable release instead, use these commands before running make debian to view all tags and to check out the latest stable tag, which in this case was v0.8.2:

$ git tag
$ git checkout v0.8.2

If all went well, you should now have a file snuffleupagus_0.8.2_amd64.deb in the above directory, which you can install:

$ cd ..
$ apt install ./snuffleupagus_0.8.2_amd64.deb

Configuring Snuffleupagus

First we take the example configuration file and put it in PHP’s configuration directory. For example for PHP 7.4:

# zcat /usr/share/doc/snuffleupagus/examples/default.rules.gz > /etc/php/7.4/snuffleupagus.rules

Also take a look at the config subdirectory in the source tree for more example rules.

Edit the file /etc/php/7.4/mods-available/snuffleupagus.ini so that it looks like this:

extension=snuffleupagus.so
sp.configuration_file=/etc/php/7.4/snuffleupagus.rules

Now we will edit the file /etc/php/7.4/snuffleupagus.rules.

We need to set a secret key, which will be used for various cryptographic features:

sp.global.secret_key("YOU _DO_ NEED TO CHANGE THIS WITH SOME RANDOM CHARACTERS.");

You can generate a random key with this shell command:

$ echo $(head -c 512 /dev/urandom | tr -dc 'a-zA-Z0-9')

Simulation mode

Snuffleupagus can run rules in simulation mode. In this mode, a rule will not block further execution of the PHP file, but will just output a warning message in your log. Unfortunately there is no global simulation mode; it has to be set per rule. You can run a rule in simulation mode by appending .simulation() to it. For example, to run INI protection in simulation mode:

sp.ini_protection.simulation();

INI protection

To prevent PHP applications from modifying php.ini settings, you can set this in snuffleupagus.rules:

sp.ini_protection.enable();
sp.ini_protection.policy_readonly();

Cookie protection

The following configuration sets the SameSite attribute to Lax on session cookies, which offers protection against CSRF via this cookie. We enforce setting the secure option on cookies, which instructs the web browser to only send them over an encrypted HTTPS connection, and we also enable encryption of the session content on the server. The encryption key being used is derived from the value of the global secret key you have set, the client's user agent and the environment variable SSL_SESSION_ID.

sp.cookie.name("PHPSESSID").samesite("lax");

sp.auto_cookie_secure.enable();
sp.global.cookie_env_var("SSL_SESSION_ID");
sp.session.encrypt();

Note that the definition of cookie_env_var needs to happen before sp.session.encrypt(), which enables the encryption.

You have to make sure the variable SSL_SESSION_ID is passed to PHP. In Apache you can do so by having this in your virtualhost:

<FilesMatch "\.(cgi|shtml|phtml|php)$">
    SSLOptions +StdEnvVars
</FilesMatch>

eval white- or blacklist

eval() is used to evaluate PHP content, for example in a variable. This is very dangerous if the PHP code to be evaluated can contain user-provided data. Therefore it is strongly recommended to create a whitelist of functions which can be called by code evaluated by eval().

Start by putting this in snuffleupagus.rules and restart PHP:

sp.eval_whitelist.list().simulation();

Then test your websites and see which errors you get in the logs, and add the reported functions, separated by commas, to the eval_whitelist.list(). After that, remove .simulation() and restart PHP in order to activate this protection. For example:

sp.eval_whitelist.list("array_pop,array_push");

You can also use a blacklist, which only blocks certain functions. For example:

sp.eval_blacklist.list("system,exec,shell_exec,proc_open");

Limit execution to read-only PHP files

The read_only_exec() feature of Snuffleupagus will prevent PHP from executing PHP files on which the PHP process has write permissions. This will block any attack where an attacker manages to upload a malicious PHP file via a bug in your website and then attempts to execute this malicious PHP script.

It is a good practice to let your PHP scripts be owned by a different user than the PHP user, and give PHP only read-only permissions on your PHP files.

To test this feature, add this to snuffleupagus.rules:

sp.readonly_exec.simulation();

If you are sure all goes well, enable it:

sp.readonly_exec.enable();

Virtual patching

One of the main features of Snuffleupagus is virtual patching. This feature will disable functions depending on the parameters and values they are given. The example rules file contains a good set of generic rules which block all kinds of dangerous behaviour. You might need to fine-tune the rules if one of your PHP applications hits certain rules.

Some examples of virtual-patching rules:

sp.disable_function.function("chmod").param("mode").value("438").drop();
sp.disable_function.function("chmod").param("mode").value("511").drop();

These rules will drop calls to the chmod function with decimal values 438 and 511, which correspond to the dangerous 0666 and 0777 octal permissions.

sp.disable_function.function("include_once").value_r(".(inc|phtml|php)$").allow();
sp.disable_function.function("include_once").drop();

These two rules will only allow the include_once function to include files whose file names end in inc, phtml or php. All other include_once calls will be dropped.

Using generate_rules.php to automatically generate site-specific hardening rules

In the scripts subdirectory of the Snuffleupagus source tree, there is a file named generate_rules.php. You can run this script from the command line, giving it a path to a directory with PHP files, and it will automatically generate rules which specifically allow all needed dangerous function calls and then disable them globally. For example, to generate rules for the /usr/share/tt-rss/www and /var/www directories:

# php generate_rules.php /usr/share/tt-rss/www/ /var/www/

This will generate rules such as:

sp.disable_function.function("function_exists").filename("/usr/share/tt-rss/www/api/index.php").hash("fa02a93e2724d7e818c5c13f4ba8b110c47bbe7fb65b74c0aad9cff2ed39cf7d").allow();
sp.disable_function.function("function_exists").filename("/usr/share/tt-rss/www/classes/pref/prefs.php").hash("43926a95303bc4e7adefe9d2f290dd8b66c9292be836908081e3f2bd8a198642").allow();
sp.disable_function.function("function_exists").drop();

The first two rules allow these two files to call function_exists and the last rule drops all calls to function_exists from any other files. Note that the first two rules limit the rules not only to the specified file name, but also pin the SHA256 hash of the file. This way, if the file is changed, the function call will be dropped. This is the safest option, but it can be annoying if the files are often or automatically updated, because it will break the site. In that case, you can call generate_rules.php with the --without-hash option:

# php generate_rules.php --without-hash /usr/share/tt-rss/www/ /var/www/

After you have generated the rules, you will have to add them to your snuffleupagus.rules file and restart PHP-FPM.

File Upload protection

The default Snuffleupagus rule file contains two rules which will block any attempt to upload an HTML or PHP file. However, I noticed that they were not working with PHP 7.4; these rules would cause this error message:

PHP Warning: [snuffleupagus][0.0.0.0][config][log] It seems that you are filtering on a parameter 'destination' of the function 'move_uploaded_file', but the parameter does not exists. in /var/www/html/foobar.php on line 15
PHP Warning: [snuffleupagus][0.0.0.0][config][log] - 0 parameter's name: 'path' in /var/www/html/foobar.php on line 15
PHP Warning: [snuffleupagus][0.0.0.0][config][log] - 1 parameter's name: 'new_path' in /var/www/html/foobar.php on line 15

The Snuffleupagus rules use the parameter name destination for move_uploaded_file instead of the parameter new_path. You will have to change the rules like this:

sp.disable_function.function("move_uploaded_file").param("new_path").value_r("\.ph").drop();<br />sp.disable_function.function("move_uploaded_file").param("new_path").value_r("\.ht").drop();

Note that on PHP 8, the parameter name is to instead of new_path.

Enabling Snuffleupagus

To enable Snuffleupagus in PHP 7.4, link the configuration file to /etc/php/7.4/fpm/conf.d:

# cd /etc/php/7.4/fpm/conf.d
# ln -s ../../mods-available/snuffleupagus.ini 20-snuffleupagus.ini
# systemctl restart php7.4-fpm

After restarting PHP-FPM, always check the logs to see whether Snuffleupagus does not give any warnings or error messages, for example because of a syntax error in your configuration:

# journalctl -u php7.4-fpm -n 50

Snuffleupagus logs

By default Snuffleupagus logs via PHP. If you are using Apache with PHP-FPM, you will find the Snuffleupagus logs, just like any PHP warnings and errors, in the Apache error log, for example /var/log/apache2/error.log. If you encounter any problems with your website, check this log to see what is wrong.

Snuffleupagus can also be configured to log via syslog, and this is actually even recommended, because PHP's logging system can be manipulated at runtime by malicious scripts. To log via syslog, add this to snuffleupagus.rules:

sp.log_media("syslog");

I give a few examples of errors you can encounter in the logs and how to fix them:

[snuffleupagus][0.0.0.0][xxe][log] A call to libxml_disable_entity_loader was tried and nopped in /usr/share/tt-rss/www/include/functions.php on line 22

tt-rss calls the function libxml_disable_entity_loader, but this is blocked by the XXE protection. Commenting out this line in snuffleupagus.rules should fix it:

sp.xxe_protection.enable();

Another example:

[snuffleupagus][0.0.0.0][disabled_function][drop] Aborted execution on call of the function 'ini_set', because its argument '$varname' content (display_errors) matched a rule in /usr/share/tt-rss/www/include/functions.php on line 37'

Modifying the PHP INI option display_errors is not allowed by this rule:

sp.disable_function.function("ini_set").param("varname").value_r("display_errors").drop();

You can completely remove (or comment out) this rule in order to disable it. But a better way is to add a rule before it which allows this call specifically for that PHP file. So add this rule before it:

sp.disable_function.function("ini_set").filename("/usr/share/tt-rss/www/include/functions.php").param("varname").value_r("display_errors").allow();

If you get something like this:

[snuffleupagus][0.0.0.0][disabled_function][drop] Aborted execution on call of the function 'function_exists', because its argument '$function_name' content (exec) matched a rule in /var/www/wordpress/wp-content/plugins/webp-express/vendor/rosell-dk/exec-with-fallback/src/ExecWithFallback.php on line 35', referer: wp-admin/media-new.php

It’s caused by this rule:

sp.disable_function.function("function_exists").param("function_name").value("exec").drop();

You can add this rule before it to allow this:

sp.disable_function.function("function_exists").filename("/var/www/wordpress/wp-admin/media-new.php").param("function_name").value("exec").allow();

More information

Snuffleupagus documentation

Snuffleupagus on Github

Julien Voisin blog archives

Using the Solo V2 FIDO2 security key

Last year I supported the Solo V2 Kickstarter campaign. Solo is a completely open-source FIDO2 security key. You can use it for two-factor authentication (2FA) on websites, for protecting your private SSH keys and for other things. The Solo2 is similar to keys such as the YubiKey from Yubico, the Google Titan Security Key, the Kensington VeriMark or the Nitrokey. Because all these keys implement the standards of the FIDO2 project, many of the examples here work with those keys too.

The Kickstarter campaign has ended; however, you can now buy Solo V2 security keys via their Indiegogo campaign. If you decide to buy a security key, I strongly recommend buying at least two of them, so that you can use the second key as a back-up in case the first key breaks or gets lost.

It appears that the firmware of the Solo V2 currently has some problems, preventing it from working correctly on some sites, and there are some complaints about the lack of progress in this matter and a lack of communication. There is hope that these problems will be fixed in the near future though. A new firmware version 2:20220822.0 is available, fixing some important problems. Make sure to update the firmware before starting to use the key, because updating to this version will erase all existing credentials, forcing you to re-register your key everywhere.

There is also little documentation, which can make it a bit difficult to get started if you are new to FIDO2 security keys. That’s why I decided to create this guide, to serve as a tutorial explaining how to use the Solo2.

Installing software

For basic usage of the key, you actually do not need to install any software. However, there are some utilities available which allow you to update the firmware, set a PIN, view all credentials stored on the key, etc. First of all, we will install the solo2 CLI. It's not yet packaged in Debian, so we need to download it from GitHub. I check the sha256sums to ensure I get the right files. This utility, written in Rust, does not support all functionality yet, and for that reason I also install the Python-based Solo1 CLI, which is packaged in Debian. Finally, the fido2-tools package contains some utilities which work on all FIDO2 keys.

# apt install solo-python fido2-tools
$ curl -L -O https://github.com/solokeys/solo2-cli/releases/download/v0.2.0/70-solo2.rules
$ curl -L -O https://github.com/solokeys/solo2-cli/releases/download/v0.2.0/solo2-v0.2.0-x86_64-unknown-linux-gnu
$ curl -L -O https://github.com/solokeys/solo2-cli/releases/download/v0.2.0/solo2.completions.bash
$ sha256sum 70-solo2.rules solo2-v0.2.0-x86_64-unknown-linux-gnu solo2.completions.bash
4133644b12a4e938f04e19e3059f9aec08f1c36b1b33b2f729b5815c88099fe3  70-solo2.rules
d03b20e2ba3be5f9d67f7a7fc1361104960243ebbe44289224f92b513479ed9b  solo2-v0.2.0-x86_64-unknown-linux-gnu
a892afc3c71eb09c1d8e57745dabbbe415f6cfd3f8b49ee6084518a07b73d9a8  solo2.completions.bash
# mv 70-solo2.rules /etc/udev/rules.d/
# mv solo2-v0.2.0-x86_64-unknown-linux-gnu /usr/local/bin/solo2
# chmod 755 /usr/local/bin/solo2
# mv solo2.completions.bash /etc/bash_completion.d/

Touching the Solo2

When authenticating to websites or doing other operations, you will be asked to tap or touch the security key. The Solo2, unlike the Solo1 or the YubiKey, does not have a physical button which needs to be pressed, but has three touch areas: the three gold-coloured areas at both sides and at the back of the key. You do not need to press them; gently touching one of them is enough. In practice, I had most success touching the two touch zones at both sides of the key simultaneously with two fingers.

Updating the Solo V2 firmware version

You can check which version of the firmware is currently installed on your key with this command:

$ solo2 app admin version

At the moment of writing, the most recent version is 1:20200101.9 2:20220822.0.

To update the firmware version, run this command:

$ solo2 update

Setting a PIN on the Solo V2 key

I strongly recommend setting up a PIN on your FIDO2 key. It will be required for any administrative tasks on your key, such as adding or removing credentials like SSH keys.

You cannot set a PIN with the solo2 CLI, but you can simply use the solo1 CLI:

$ solo key set-pin

If a PIN has already been set and you want to modify it, run:

$ solo key change-pin

You can also use any Chromium-based browser (such as Google Chrome) and go to the URL chrome://settings/securityKeys. There, click on Create a PIN.

Yet another alternative is to use the fido2-token utility, part of fido2-tools. First you need to get the device path of the key:

$ fido2-token -L
/dev/hidraw4: vendor=0x1209, product=0xbeee (SoloKeys Solo 2 Security Key)

So in my case it’s /dev/hidraw4. Then change the PIN like this:

$ fido2-token -C /dev/hidraw4

Do not forget your PIN, otherwise you cannot use your key any more to authenticate to registered sites!

In case you forgot your FIDO2 PIN, you will need to completely reset your key. This will erase all keys and generate new ones, so you will need an alternative way to authenticate to websites where you registered this key.

$ solo key reset

FIDO2 Two-Factor Authentication

Usually you go to the security settings on the website, where you can enable 2FA. For some sites, you will be required to set up TOTP before you can register a security key, so make sure you have a TOTP application, such as FreeOTP+ for Android or Raivo OTP on iOS. TOTP then serves as a back-up 2FA method in case you lose access to your key. If you have multiple FIDO2 keys, don't forget to register them all.

A side note: don’t use SMS as a second factor for authentication. SMS 2FA is insecure because these messages are transferred in clear text and there are various ways they can be intercepted.

2FA with the Solo2 on Android

You can connect the Solo2 to your Android device by USB, or you can use NFC. When a web application tries to authenticate your key, you will get a pop-up message where you can choose whether you want to connect it via USB or use NFC. In the case of USB, connect your key to the USB port and tap it, just like you would do on your PC. If you chose NFC, just bring your Solo2 key to the back of your phone and it should authenticate.

This all works fine in Chromium-based browsers; however, I was not able to successfully authenticate with the Solo2 in Firefox. I did manage to get it working with Firefox Nightly: you will need to go to about:config and set both security.webauthn.ctap2 and security.webauth.webauthn_enable_usb_token to true.

Enabling 2FA on well-know websites

Google

Go to https://myaccount.google.com/ and in the left menu click on Security. Under Signing in to Google, click on 2-Step Verification. There, click on Enable two-factor authentication. In the wizard that appears, click on Security Key and follow the instructions to add your key.

Github

In the top right corner, click on your avatar and choose Settings. Then in the left menu click on Password & Authentication, where you can enable Two-Factor Authentication. You will have to set up TOTP first, and after that you can register your security key.

Gitlab

In the top right corner, click on your avatar and choose Preferences. Then in the left menu click on Account, where you can enable Two-Factor Authentication. You will have to register a Two-Factor Authenticator (TOTP) first, and after that you can register your WebAuthn devices.

Mastodon

Click on the Preferences icon, then choose Account → Two-Factor Auth. You will need to set up TOTP first, and after that you can add a security key.

Nextcloud

The app Two-Factor WebAuthn needs to be installed on your Nextcloud instance.
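If you have shell access to your Nextcloud server, you can likely install the app with the occ tool as well. This is just a sketch: it assumes the app id twofactor_webauthn, the web server user www-data and a Nextcloud installation in /var/www/nextcloud:

$ cd /var/www/nextcloud
$ sudo -u www-data php occ app:install twofactor_webauthn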

Click on your avatar in the top right corner and choose Settings. Then choose Security in the left menu and there you can add WebAuthn devices.

Microsoft personal account

Make sure you are using the newest firmware because this is not working with older firmware versions.

You need to go with a Chromium based browser to https://account.microsoft.com/ (Firefox does not work at the moment). There click on Security – Security Dashboard – Advanced security options – Add a new way to sign in or verify – Use a security key.

Microsoft Azure Active Directory account

If you have a Microsoft Azure Active Directory account, for example if you are using Office 365 in your organization, then it might be possible to use your Solo2; however, this depends on the settings made by your administrator. If Enforce key restrictions is enabled, certain keys can be blocked, or only specific keys are allowed. Also, the option Enforce attestation needs to be disabled, because otherwise only keys which have been validated by Microsoft are allowed. Unfortunately Solo keys have not been validated by Microsoft (Google’s Titan security keys are in the same situation). Note that this attestation does not include an evaluation of the security of the key.

Currently there is no news about the Solo keys going through this attestation process.

Twitter

On the website, in the left menu, click on More – Settings and privacy, and then on Security and account access – Security – Two-factor authentication. There choose Security key.

https://help.twitter.com/en/managing-your-account/two-factor-authentication

Facebook

Really? You should not be using Facebook.

If you really must use Facebook: https://www.facebook.com/help/148233965247823/

LinkedIn

It appears that at the time of writing LinkedIn does not support 2FA with FIDO2. You can set up TOTP though, which I recommend doing. Click on Me in the top menu and choose Settings & Privacy. Then in the left menu choose Sign in & security and click on Two-Step verification.

WordPress

To enable FIDO2 two-factor authentication in WordPress, install the plugins two-factor and two-factor-provider-webauthn. Enable both plugins, then in the WordPress administration menu go to Settings – TwoFactor WebAuthn and use the option Disable old U2F provider. The two-factor plugin includes U2F by default, but U2F is no longer supported by Chromium based browsers, so you want to use the more modern WebAuthn instead. Then you can set up 2FA in the menu Users – Profile: enable WebAuthn, and under Security Keys (WebAuthn) click on Register New Key, tap your key and give it a unique name. Do this for each of your security keys.
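If you manage your WordPress installation with WP-CLI, you can likely install and activate both plugins from the command line as well, assuming the plugin slugs match the names above:

$ wp plugin install two-factor --activate
$ wp plugin install two-factor-provider-webauthn --activate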

If you have set up ModSecurity with the Core Rule Set, you will get an HTTP 403 Forbidden error when trying to register your key or authenticate with it. Create /etc/modsecurity/99-wordpress-webauthn.conf with this content:

SecRule REQUEST_FILENAME "@endsWith /wp-admin/profile.php" \
    "id:1100,\
    phase:2,\
    pass,\
    t:none,\
    nolog,\
    chain"
    SecRule ARGS:action "@streq update" \
        "t:none,\
        chain"
        SecRule &ARGS:action "@eq 1" \
            "t:none,\
            ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS:u2f_response,\
            ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS:webauthn_response"

SecRule REQUEST_FILENAME "@endsWith /wp-login.php" \
    "id:1101,\
    phase:2,\
    pass,\
    t:none,\
    nolog,\
    ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS:u2f_response,\
    ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS:webauthn_response"

SecRule REQUEST_FILENAME "@endsWith /wp-admin/admin-ajax.php" \
    "id:1102,\
    phase:2,\
    pass,\
    t:none,\
    nolog,\
    chain"
    SecRule ARGS:action "@streq webauthn_register" \
        "t:none,\
        chain"
        SecRule &ARGS:action "@eq 1" \
            "t:none,\
            ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS:credential"

and reload your Apache configuration. It should now work.
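On Debian, testing and reloading the Apache configuration typically looks like this:

# apache2ctl configtest
# systemctl reload apache2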

What if it does not work?

If registering your key or authenticating with it fails on a website, try a Chromium based browser. Firefox’s CTAP2 support is still disabled by default, which can cause trouble on sites that require verification of a PIN. Make sure you use the latest version of Firefox (109 at the time of writing) and activate CTAP2 support by going to about:config and setting security.webauthn.ctap2 to true.

OpenSSH

To use your Solo2 key for OpenSSH authentication, you need at least version 8.2p1 on both server and client. OpenSSH 8.3p1 adds support for discoverable credentials, also called resident keys: with discoverable credentials, the FIDO2 security key itself is enough to do SSH public key authentication. This has a slight security risk though if people get access to your Solo2 key, because then the only protection is the PIN you have set on the key. Non-discoverable keys don’t have this risk, because you also need the private key stored on your computer to authenticate.
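You can check which OpenSSH version you are running on the client and on the server like this:

$ ssh -V
$ dpkg -l openssh-server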

SSHD configuration for FIDO2 keys

As written before, you need at least OpenSSH version 8.2p1 (8.3p1 if you want discoverable credentials). The default settings as provided by Debian should be OK, but I strongly recommend adding this option to sshd_config if you only use FIDO2 keys for interactive login:

PubkeyAuthOptions verify-required

I prefer doing this by creating a file /etc/ssh/sshd_config.d/fido2.conf with this line.

This option ensures that only keys which require a PIN can be used, adding at least some protection against theft of a FIDO2 key which contains discoverable credentials.

You can also add the option touch-required to PubkeyAuthOptions in order to require touching the key when authenticating. This will make it impossible to authenticate with keys which were created with the no-touch-required option.
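Putting this together, a minimal /etc/ssh/sshd_config.d/fido2.conf could look like this (the touch-required keyword is optional, as discussed above):

PubkeyAuthOptions verify-required touch-required

Do not forget to reload the SSH daemon after changing the configuration:

# systemctl reload ssh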

Setting up FIDO2 credentials for SSH

To generate credentials for SSH with your FIDO2 key, you basically use this command:

$ ssh-keygen -t ed25519-sk

There are different options available which you can add; a combined example follows the list:

  • -O resident: You want to create discoverable credentials.
  • -O no-touch-required: You want to disable the requirement of touching the key for authenticating.
  • -O verify-required: You require that the PIN is entered when authenticating. I strongly recommend this option.
  • -O application=ssh:SomeUniqueName: In case you want to store different SSH keys on your Solo2, you will have to give each of them a different application name starting with ssh:
  • -f ~/.ssh/id_ed25519_sk_solo2_blue: If you use multiple FIDO2 keys, you may want to store the key in a unique file for every FIDO2 key. Replace the file name of this example by the name of your choice.
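For example, to create discoverable credentials which require PIN verification, stored under a unique application name (Solo2Blue is just an example name) and written to their own file:

$ ssh-keygen -t ed25519-sk -O resident -O verify-required -O application=ssh:Solo2Blue -f ~/.ssh/id_ed25519_sk_solo2_blue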

You can verify that the credentials are correctly stored on your Solo2 using this command:

$ solo key credential ls

In case you would want to remove the credentials stored on your key, you can do so by using this command:

$ solo key credential rm CREDENTIALID

Replace CREDENTIALID by the value you found with the previous command.

After creating the key, you need to copy the public key to the authorized_keys file on your server. You can use ssh-copy-id for that:

$ ssh-copy-id -i ~/.ssh/id_ed25519_sk_solo2_blue.pub username@server.example.org

Of course use the correct file name for the public key.
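If ssh-copy-id is not available, you can also append the public key manually:

$ cat ~/.ssh/id_ed25519_sk_solo2_blue.pub | ssh username@server.example.org 'cat >> ~/.ssh/authorized_keys'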

If you used the option no-touch-required when generating the key, you will have to edit the ~/.ssh/authorized_keys file on your server so that this option precedes the key. For example, if authorized_keys contains this:

sk-ssh-ed25519@openssh.com AAAA....= username@host

Change it to this:

no-touch-required sk-ssh-ed25519@openssh.com AAAA....= username@host

Now it should be possible to log in to the server using this command:

$ ssh -i ~/.ssh/id_ed25519_sk_solo2_blue -o IdentitiesOnly=yes username@server.example.org

You will be asked to enter the PIN of your key and to touch it, depending on the options you used when creating the key. I add -o IdentitiesOnly=yes because otherwise ssh will first try to authenticate using the keys loaded in your SSH agent. With this option we force it to use only the private key specified with the -i parameter.

You can make this the default by editing ~/.ssh/config, so that you don’t need to repeat the -i and -o parameters every time you connect:

Host server.example.org
    User myusername
    IdentityFile ~/.ssh/id_ed25519_sk_solo2_blue
    IdentitiesOnly yes
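With this configuration in place, connecting becomes as simple as:

$ ssh server.example.org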

Importing discoverable credentials on another system

When you use discoverable credentials, all information needed for authentication is stored on the key itself, in contrast to non-discoverable credentials, where part of that information is also stored in the private key file on the computer. For this reason, discoverable credentials are easy to import on any computer:

$ cd ~/.ssh/ 
$ ssh-keygen -K

The public and private key will be written to the .ssh directory, and then you can authenticate using the ssh -o IdentitiesOnly=yes -i command just like on the system where you generated the key.
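Note that ssh-keygen -K chooses the file names itself; as far as I know they are derived from the key type and the application name (something like id_ed25519_sk_rk_SomeUniqueName, the exact naming may depend on the OpenSSH version). You may want to rename the files so that they match your ~/.ssh/config:

$ mv id_ed25519_sk_rk_SomeUniqueName id_ed25519_sk_solo2_blue
$ mv id_ed25519_sk_rk_SomeUniqueName.pub id_ed25519_sk_solo2_blue.pub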

Troubleshooting FIDO2 SSH authentication

On the server check the sshd logs, which can be found in /var/log/auth.log or in the ssh journal:

# journalctl -u ssh

A successful authentication with your FIDO2 key should be logged like this:

Accepted publickey for username from xxx.xxx.xxx.xxx port zzzzzz ssh2: ED25519-SK SHA256:....

Notice the ED25519-SK part which indicates that the credentials on your FIDO2 key were used.

If you see this:

error: public key ED25519-SK SHA256:... signature for username from xxx.xxx.xxx.xxx port zzzzz rejected: user presence (authenticator touch) requirement not met

This means that you created the key with the no-touch-required option. Add no-touch-required to the corresponding line in authorized_keys on the server, as noted above, at least if your server does not have PubkeyAuthOptions touch-required set.

On the client-side, you can add the -v parameter to debug what happens:

$ ssh -v -o IdentitiesOnly=yes -i ~/.ssh/id_ed25519_sk_solo2_blue username@server.example.org

If you are using GNOME with gnome-keyring as ssh-agent, you will encounter this problem:

debug1: Offering public key: /home/username/.ssh/id_ed25519_sk_solo2_blue ED25519-SK SHA256:... explicit authenticator agent
debug1: Server accepts key: /home/username/.ssh/id_ed25519_sk_solo2_blue ED25519-SK SHA256:... explicit authenticator agent
sign_and_send_pubkey: signing failed for ED25519-SK "/home/username/.ssh/id_ed25519_sk_solo2_blue" from agent: agent refused operation

This is because ssh-agent/gnome-keyring does not support verify-required credentials.
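If you are not sure whether gnome-keyring is acting as your SSH agent, you can inspect the agent socket; a path mentioning keyring points to gnome-keyring (the exact path varies per system):

$ echo $SSH_AUTH_SOCK
/run/user/1000/keyring/ssh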

A work-around is to rename the public key, so that gnome-keyring will ignore it:

$ mv ~/.ssh/id_ed25519_sk_solo2_blue.pub ~/.ssh/id_ed25519_sk_solo2_blue.public

You will need to log out and log in again after making this change.

Sources and more information

Securing SSH with FIDO2

Solo2 discussions