Here is a quick (and somewhat belated) overview of important changes in Debian Sid during the last two weeks.
glibc was updated from version 2.36 to version 2.37. This version mostly contains bug fixes and minor improvements. An important regression introduced by this update was fixed in the Debian package 2.37-4, which is not yet in testing/trixie, so you might want to upgrade to the sid version of glibc immediately if you are on testing.
Linux 6.3.11 fixes the so-called StackRot security vulnerability (CVE-2023-3269). An exploit will be made public soon and will allow any local user to get root access rights, so make sure you are running this kernel on all your systems. This kernel also re-enables CONFIG_VIRTIO_MEM, which got disabled by mistake in Debian 12 Bookworm. Both the StackRot vulnerability and the CONFIG_VIRTIO_MEM regression are now also fixed in the latest kernel release in bookworm-security.
Debian’s Slurm package was updated to version 23.02.3.
Desktop software
Firefox 115 is now available in sid. Keep in mind that the firefox package never moves to testing and stable, only the firefox-esr package does. Firefox 115 will become the future Firefox ESR release though. The most important change in Firefox 115 is that it enables hardware video decoding with VA-API for Intel GPUs on Linux by default.
Remmina in sid was updated from 1.4.29 to 1.4.31. This version brings back the remmina-plugin-spice package which was also disabled in Debian 12.
Debian 12 Bookworm will be released very soon, on June 10 2023. The Debian Testing tree is now very close to the final release, so now is a good moment to start testing Bookworm if you have not done so already. I have already upgraded some of my server systems to Bookworm and I’m also running it on all my desktop systems, so here are some notes on the upgrade process. Keep in mind that upgrading to Bookworm is only supported if you are running Bullseye. If you are running an older version of Debian (Buster), you will need to upgrade to Bullseye first and after that upgrade to Bookworm.
First of all, start by reading the release notes: they contain a very detailed howto guide describing all steps to upgrade your system to Bookworm, and they also list all major changes and important things to know before you upgrade.
First, check which of your installed packages do not come from the official Debian repositories with this command:
# apt list '?narrow(?installed, ?not(?origin(Debian)))'
Because these are not official Debian packages, Debian developers cannot guarantee that they will work correctly and will not conflict or cause compatibility problems when upgrading your system. For that reason, you should seriously consider uninstalling them during the upgrade process.
On one system I had a locally built snuffleupagus package installed. This package was built against a particular PHP version, and because a newer Debian release will also include a newer PHP version, this could break things. So I removed this package:
# apt remove snuffleupagus
Then you need to verify whether you have put any packages on hold. Packages on hold will never be upgraded, so this can prevent a correct upgrade. Check all held packages with this command:
# apt-mark showhold
You can unhold them with this command:
# apt-mark unhold packagename
Then we need to adapt our apt sources.list and preferences.
You should have this in /etc/apt/sources.list (or in a .list file in /etc/apt/sources.list.d):
deb [arch=amd64,i386] http://deb.debian.org/debian/ bookworm main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ bookworm main contrib non-free non-free-firmware
deb [arch=amd64,i386] http://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware
deb-src http://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware
deb [arch=amd64,i386] http://deb.debian.org/debian bookworm-backports main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian bookworm-backports main contrib non-free non-free-firmware
deb [arch=amd64,i386] http://deb.debian.org/debian/ bookworm-proposed-updates main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ bookworm-proposed-updates main contrib non-free non-free-firmware
Note the new non-free-firmware repository: non-free firmware used to be included in the non-free repository, but it is now in a separate repository, so you will need to add it.
Then we need to set up the priorities of the different repositories in /etc/apt/preferences (or a .pref file in /etc/apt/preferences.d):
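Something along these lines works (a sketch; adjust the exact Pin-Priority values to taste):

Package: *
Pin: release n=bookworm
Pin-Priority: 900

Package: *
Pin: release n=bookworm-security
Pin-Priority: 900

Package: *
Pin: release n=bookworm-proposed-updates
Pin-Priority: 500

Package: *
Pin: release n=bookworm-backports
Pin-Priority: 400

Package: *
Pin: release n=bullseye
Pin-Priority: 300

Package: *
Pin: release n=trixie
Pin-Priority: 200

Package: *
Pin: release n=sid
Pin-Priority: 100

Package: *
Pin: release a=experimental
Pin-Priority: 50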
This gives the highest priority to all packages in Bookworm and the security updates, with a lower priority for the Bookworm proposed updates and then the Bookworm backports. I added Bullseye in case you still need the Bullseye repositories for some reason. I also add Trixie (the code name for what will become testing when Bookworm is released), sid (unstable) and experimental at the lowest priorities. Of course you can remove them from your preferences file if you have not set up these repositories.
I strongly recommend installing apt-listchanges, because it will give you information about important changes which might affect you before packages are upgraded:
# apt install apt-listchanges
I upgrade dpkg and apt first: I personally prefer to take advantage of any new improvements and bug fixes during the rest of the upgrade process.
# apt install -t bookworm apt dpkg
In Bookworm the systemd-resolved service is now in a separate package. If you are currently using systemd-resolved, this can cause failures in DNS resolution. Before upgrading, make sure you know the addresses of your DNS servers, so that you can set them up manually if required; you can run the resolvectl command to find them. If DNS resolution breaks during the upgrade process later on, you can add them to /etc/resolv.conf manually to fix the problem. But I prefer to immediately install the new systemd-resolved package before upgrading everything else, to take care of this problem:
# apt install -t bookworm systemd-resolved
Then we can upgrade all packages which can be upgraded without installing new packages:
# apt upgrade -t bookworm --without-new-pkgs
Once that’s done we proceed with the upgrade of all remaining packages, which will also install new dependencies:
# apt full-upgrade
During these two steps, pay attention to which packages are going to be removed. It is expected that old unused libraries and other packages (old PHP and Perl versions, for example) will be removed, but you might want to check this.
When the upgrade is done, I remove all unneeded packages with this command:
# apt autoremove --purge
Then run this command to remove all library packages which have no other packages depending on them any more:
# deborphan | xargs dpkg --purge
A package which often stays behind is libssl1.1. Normally you don’t need it any more so you can remove it safely:
# apt remove libssl1.1
Finally, I also prefer to remove rsyslog: it is no longer installed by default on Debian Bookworm, everything is already logged to the systemd journal, and I don’t want any double logging.
# apt remove rsyslog
Then personally I also install dbus-broker on Debian Bookworm. It replaces the traditional dbus implementation and is supposed to be more performant.
# apt install dbus-broker
I always recommend verifying that the metapackage linux-image-amd64 is installed, so that you are really running the latest kernel version.
# apt install linux-image-amd64
After upgrading all packages, reboot your system.
Debian 12 Bookworm and OpenLDAP
One major change in Debian 12 Bookworm is that it ships with OpenLDAP 2.5, which has removed the BDB and HDB back-ends. If your LDAP directory is still using one of these back-ends, you will have to convert it to the new MDB back-end. There are some instructions in /usr/share/doc/slapd/README.Debian.gz and I might write a post about this here in the future. In any case, make sure you have recent backups of your LDAP directory in the form of LDIF generated with slapcat.
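For example (assuming your directory is database number 1 in the slapd configuration; the number may differ on your system):

# slapcat -n 1 -l ldap-backup.ldif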
Is Debian Bookworm stable?
I’m permanently running Debian Testing on my laptop, and now I have also installed Bookworm on some servers. I strongly recommend using Testing for all desktop usage (even after Bookworm has been released), and starting to use Bookworm for any new server installations. For upgrades of critical and more complicated server systems, I generally recommend waiting until at least the first point release.
At the moment I can think of two problems I am encountering on my systems. On my HP Elitebook 845 G8 when suspending the system (s2idle), Linux fails to read the current time from the RTC, resulting in the clock jumping years into the future. At the next resume you can fix the clock by restarting systemd-timesyncd, but the uptime command will continue to give a wrong output. I’m currently testing version 6.3 of the Linux kernel from experimental to see whether this bug also happens with this version.
Another problem is that in Debian 12 Bookworm the Spice plugin of Remmina is disabled. There is a work-around using packages from sid and experimental: upgrade to libspice-client-glib-2.0-8 version 0.42-2, which is currently in sid and then install remmina and remmina-plugin-spice from experimental. Maybe this problem will be fixed in a point release for Debian Bookworm, but that remains to be seen.
This week a big refresh of the Flathub website came online, and there was quite some buzz around it in the Linux world. However, this same week I noticed a worrying thing about Flathub: it is distributing several applications with known security problems. I am really worried about this because many people will unknowingly install these flatpaks, thinking that they are safe because they installed them from a reliable source.
The most striking example of this is Adobe Reader. This application was last updated by Adobe in 2013, so it is 10 years old. Adobe has not supported this software since 26 June 2013. While the GitHub readme of the project mentions that this application is no longer supported, has known security vulnerabilities and is unstable, none of this is mentioned on the Flathub page itself. This means that many people who stumble upon this page will install this flatpak without being aware of these risks. At the moment of writing, Adobe Reader is listed as the third application on the Flathub homepage, because it’s a new package, and after a couple of days it already had 1666 installations. I wonder how many of these people are aware of the fact that they are installing a no longer supported application with known security bugs.
Unfortunately, Adobe Reader is not the only example. Let’s take a look at Visual Studio Code. I see three different variants on Flathub: two open-source builds, Code – OSS and VSCodium, and the proprietary Microsoft build, Visual Studio Code. Of these three, only one is up to date at the time of writing: VSCodium. Version 1.77.2 fixed a security problem, but neither the Code – OSS nor the Visual Studio Code flatpak has this version. The latter is the most popular one, with 1.3 million installations.
Fortunately, security-sensitive flatpaks like Firefox, Chromium, Brave and Thunderbird are up to date, so it does not look like this is a bigger, more general problem. Still, I think it’s unacceptable that several packages of vulnerable software are offered in the default Flathub repository.
But flatpak packages run in a sandbox, so the security risk is only theoretical, isn’t it? Sorry, that’s not a serious way of dealing with security. You just need a security vulnerability in flatpak or in the Linux kernel and your software can escape the sandbox. At least two sandbox escape bugs have been found in flatpak in the past (CVE-2021-21261 and CVE-2019-10063). For sure more of these bugs will be discovered in the future, especially if flatpak becomes more popular. Combine this with a vulnerability in the packaged software, such as Adobe Reader or Visual Studio Code, and opening a file downloaded from the Internet can be enough to get your system compromised.
In practice, we see such sandbox escape bugs being exploited in Chromium/Google Chrome: it has a built-in sandbox to protect the system from security vulnerabilities, yet it regularly receives updates for zero-day vulnerabilities. So far in 2023, two security fixes have already been published for vulnerabilities that were being exploited in the wild, despite the sandbox. Sandbox escape is explicitly mentioned in the security advisory from a few days ago. Not relying on a single layer of defense against security breaches is called defense in depth, and this is simply an essential practice if you care about security.
A PDF viewer is definitely at risk, because you often open files downloaded from the Internet with it. But even though a programming editor/lightweight IDE like Visual Studio Code does not appear to be the most security-sensitive application, make no mistake: such applications can also be targeted by people with bad intentions. I’m thinking of the case uncovered two years ago, where security researchers (!) were successfully targeted by North Korean hackers who abused a feature in Microsoft’s fully fledged Visual Studio IDE. A security vulnerability in your IDE will only make such abuse easier. Think also of teachers who need to open (untrusted) code from students, and who are at risk when their IDE has known security vulnerabilities.
One of the new features of the new website is that flatpaks built by the original developers of the software are now marked as verified. But I don’t think that’s very useful, because it says nothing about how well the flatpak is maintained and whether there are known security problems. Software which was not packaged by the original author but which is well maintained is by far preferable to software which was packaged by its original developer but whose maintenance has since been abandoned. Compare this to Linux distributions: the software is usually not packaged by the original developer, but by the distribution’s maintainers. That does not make these packages unreliable.
Windows actually has many more security features enabled by default than Linux: files which originate from the Internet are marked as such (mark-of-the-web) and then undergo extra security protections by the OS and by applications (Protected View in Office, for example); there is an integrated malware scanner (Defender); Windows has a firewall enabled by default; and it does automatic updates. Many of these things are not the case in Linux. Yet we hear of ransomware attacks on Windows users on a daily basis. It should make us realize that Linux will not be immune to these problems. The first thing we should do is at least not run software with known security problems.
At the very least, the description on Flathub should contain a warning in bold explaining that the software has known security vulnerabilities, and it should clearly discourage users from installing it. But I think that is not enough. People will just search for PDF, will recognize Adobe, and won’t even read the description because they know the Adobe PDF reader. And then they will be surprised to discover during usage that the software is unstable and insecure. The same is true for Visual Studio Code: most people installing the flatpak simply won’t be aware that the packaged version has known vulnerabilities.
I think there is only one reasonable solution: these software packages should be moved to a separate repository which is not enabled by default. This repository should be called “unsupported”. If people make the effort to enable this repository, they should clearly get an extra warning that the software can be unstable and insecure and that they cannot expect any support. When searching, people should not get such software at the top between other well-maintained software; it should be shown in a separate unsupported category at the bottom. If we don’t do these things, I’m afraid security incidents will happen one day, possibly destroying all trust in Flathub and Linux in general. And that is something we should really avoid.
Many modern distributions, like for example the upcoming Debian 12 Bookworm, do not install the package net-tools by default. This package contains popular utilities like ifconfig, route, netstat, arp and mii-tool. In this post I give alternatives for these utilities. You can of course just install the net-tools package if you prefer to keep using these commands.
ifconfig
To see the current network configuration:
$ ip addr
To see the current configuration for one specific interface, for example enp25s0:
$ ip addr show enp25s0
To add a static IP address to a network interface:
$ ip addr add 192.168.10.2/24 dev enp25s0
Replace add by del to remove an IP address.
route
To see the current route table:
$ ip route
To set the default gateway:
$ ip route add default via 192.168.10.1 dev enp25s0
netstat
The ss command lists all open sockets. Some interesting options:
-a: show both open and listening sockets
-l: show only listening sockets
-p: show the process using the socket
-t: show only TCP sockets
-u: show only UDP sockets
-r: resolve all IP addresses
To see all open and listening sockets on the system:
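$ ss -a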
In a previous article, I discussed how to set up ModSecurity with the Core Rule Set on Debian. This can be considered a first line of defense against malicious HTTP traffic. In a defense in depth strategy, we of course want to add additional layers of protection to our web servers. One such layer is Snuffleupagus. Snuffleupagus is a PHP module which protects your web applications against various attacks. Some of the hardening features it offers are encryption of cookies, disabling XML External Entity (XXE) processing, a whitelist or blacklist of functions which may be called in eval(), and the possibility to selectively disable PHP functions with specific arguments (virtual patching).
Installing Snuffleupagus on Debian
Unfortunately there is no package for Snuffleupagus included in Debian, but it is not too difficult to build one yourself:
$ apt install php-dev
$ mkdir snuffleupagus
$ cd snuffleupagus
$ git clone https://github.com/jvoisin/snuffleupagus
$ cd snuffleupagus
$ make debian
This will build the latest development code from the master branch. If you want to build the latest stable release instead, use these commands before running make debian to view all tags and to check out the latest stable tag, which in this case was v0.8.2:
$ git tag
$ git checkout v0.8.2
If all went well, you should now have a file snuffleupagus_0.8.2_amd64.deb in the above directory, which you can install:
$ cd ..
$ apt install ./snuffleupagus_0.8.2_amd64.deb
Configuring Snuffleupagus
First we take the example configuration file and put it in PHP’s configuration directory. For example for PHP 7.4:
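A sketch of what this can look like, assuming PHP-FPM 7.4 and the source tree checked out as above (adjust the paths to your setup):

# cp snuffleupagus/config/default.rules /etc/php/7.4/fpm/snuffleupagus.rules

Then point the module to this file in an ini file such as /etc/php/7.4/fpm/conf.d/20-snuffleupagus.ini:

extension=snuffleupagus.so
sp.configuration_file=/etc/php/7.4/fpm/snuffleupagus.rules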
Snuffleupagus can run rules in simulation mode. In this mode, a rule will not block further execution of the PHP file, but will just output a warning message in your log. Unfortunately there is no global simulation mode; it has to be set per rule. You can run a rule in simulation mode by appending .simulation() to it. For example, to run INI protection in simulation mode:
sp.ini_protection.simulation();
INI protection
To prevent PHP applications from modifying php.ini settings, you can set this in snuffleupagus.rules:
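For example, this enables the INI protection feature (a minimal sketch; see the Snuffleupagus documentation for the more fine-grained ini_protection options):

sp.ini_protection.enable();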
The following configuration options set the SameSite attribute to Lax on session cookies, which offers protection against CSRF via this cookie. We enforce setting the secure option on cookies, which instructs the web browser to only send them over an encrypted HTTPS connection, and we also enable encryption of the content of the session cookie on the server. The encryption key being used is derived from the value of the global secret key you have set, the client’s user agent and the environment variable SSL_SESSION_ID.
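A sketch of such rules (double-check the exact option names against the Snuffleupagus documentation; the secret key must be a long random string of your own):

sp.global.secret_key("CHANGE_THIS_TO_A_LONG_RANDOM_STRING");
sp.cookie.name("PHPSESSID").samesite("lax");
sp.auto_cookie_secure.enable();
sp.cookie.name("PHPSESSID").encrypt();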
eval() is used to evaluate PHP content, for example in a variable. This is very dangerous if the PHP code to be evaluated can contain user-provided data. Therefore it is strongly recommended to create a whitelist of functions which may be called by code evaluated by eval().
Start by putting this in snuffleupagus.rules and restart PHP:
sp.eval_whitelist.list().simulation();
Then test your websites and check which errors you get in the logs, and add the functions, separated by commas, to eval_whitelist.list(). After that you need to remove .simulation() and restart PHP in order to activate this protection. For example:
sp.eval_whitelist.list("array_pop,array_push");
You can also use a blacklist, which only blocks certain functions. For example:
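sp.eval_blacklist.list("exec,system,shell_exec,passthru");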
The read_only_exec() feature of Snuffleupagus will prevent PHP from executing PHP files on which the PHP process has write permissions. This will block attacks where an attacker manages to upload a malicious PHP file via a bug in your website and then attempts to execute this malicious PHP script.
It is a good practice to let your PHP scripts be owned by a different user than the PHP user, and give PHP only read-only permissions on your PHP files.
To test this feature, add this to snuffleupagus.rules:
sp.readonly_exec.simulation();
If you are sure all goes well, enable it:
sp.readonly_exec.enable();
Virtual patching
One of the main features of Snuffleupagus is virtual patching. This feature will disable functions depending on the parameters and values they are given. The example rules file contains a good set of generic rules which block all kinds of dangerous behaviour. You might need to fine-tune the rules if your PHP applications hit certain rules.
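For example, the default rule set contains two rules along these lines for include_once (reproduced approximately; check the example rules file for the exact form):

sp.disable_function.function("include_once").value_r("\\.(inc|phtml|php)$").allow();
sp.disable_function.function("include_once").drop();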
These two rules will only allow the include_once function to include files whose file names end in inc, phtml or php. All other include_once calls will be dropped.
Using generate_rules.php to automatically generate site-specific hardening rules
In the scripts subdirectory of the Snuffleupagus source tree, there is a file named generate_rules.php. You can run this script from the command line, giving it a path to a directory with PHP files, and it will automatically generate rules which specifically allow all needed dangerous function calls and then disable them globally. For example, to generate rules for the /usr/share/tt-rss/www and /var/www directories:
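$ php generate_rules.php /usr/share/tt-rss/www /var/www

The generated output looks roughly like this (hypothetical file names and abbreviated hashes, purely for illustration):

sp.disable_function.function("function_exists").filename("/usr/share/tt-rss/www/include/functions.php").hash("d8e8fca2...").allow();
sp.disable_function.function("function_exists").filename("/var/www/example/index.php").hash("c157a790...").allow();
sp.disable_function.function("function_exists").drop();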
The first two rules allow these two files to call function_exists, and the last rule drops all calls to function_exists from any other files. Note that the first two rules not only limit the rules to the specified file names, but also pin the SHA256 hash of each file. This way, if the file is changed, the function call will be dropped. This is the safest approach, but it can be annoying if the files are frequently or automatically updated, because it will break the site. In that case, you can call generate_rules.php with the --without-hash option:
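$ php generate_rules.php --without-hash /usr/share/tt-rss/www /var/www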
After you have generated the rules, you will have to add them to your snuffleupagus.rules file and restart PHP-FPM.
File Upload protection
The default Snuffleupagus rule file contains two rules which will block any attempt to upload an HTML or PHP file. However, I noticed that they were not working with PHP 7.4, and these rules would cause this error message:
PHP Warning: [snuffleupagus][0.0.0.0][config][log] It seems that you are filtering on a parameter 'destination' of the function 'move_uploaded_file', but the parameter does not exists. in /var/www/html/foobar.php on line 15
PHP Warning: [snuffleupagus][0.0.0.0][config][log] - 0 parameter's name: 'path' in /var/www/html/foobar.php on line 15
PHP Warning: [snuffleupagus][0.0.0.0][config][log] - 1 parameter's name: 'new_path' in /var/www/html/foobar.php on line 15
The Snuffleupagus rules use the parameter name destination for move_uploaded_file instead of the parameter new_path. You will have to change the rules like this:
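Presumably the corrected rules look like this (a sketch based on the default upload rules, with destination replaced by new_path):

sp.disable_function.function("move_uploaded_file").param("new_path").value_r("\\.ph").drop();
sp.disable_function.function("move_uploaded_file").param("new_path").value_r("\\.ht").drop();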
After restarting PHP-FPM, always check the logs to see whether Snuffleupagus gives any warnings or errors, for example because of a syntax error in your configuration:
# journalctl -u php7.4-fpm -n 50
Snuffleupagus logs
By default Snuffleupagus logs via PHP. If you are using Apache with PHP-FPM, you will then find the Snuffleupagus logs, just like any PHP warnings and errors, in the Apache error log, for example /var/log/apache2/error.log. If you encounter any problems with your website, check this log to see what is wrong.
Snuffleupagus can also be configured to log via syslog, and its developers actually even recommend this, because PHP’s logging system can be manipulated at runtime by malicious scripts. To log via syslog, add this to snuffleupagus.rules:
sp.log_media("syslog");
Here are a few examples of errors you can encounter in the logs, and how to fix them:
[snuffleupagus][0.0.0.0][xxe][log] A call to libxml_disable_entity_loader was tried and nopped in /usr/share/tt-rss/www/include/functions.php on line 22
tt-rss calls the function libxml_disable_entity_loader, but this is blocked by the XXE protection. Commenting out this line in snuffleupagus.rules should fix this:
sp.xxe_protection.enable();
Another example:
[snuffleupagus][0.0.0.0][disabled_function][drop] Aborted execution on call of the function 'ini_set', because its argument '$varname' content (display_errors) matched a rule in /usr/share/tt-rss/www/include/functions.php on line 37'
Modifying the PHP INI option display_errors is not allowed by this rule:
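The rule in question looks something like this (an approximation of the default rule):

sp.disable_function.function("ini_set").param("varname").value_r("display_errors").drop();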
You can completely remove (or comment out) this rule in order to disable it. But a better way is to add a rule before it which allows this specifically for that PHP file. So add this rule before it:
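sp.disable_function.function("ini_set").filename("/usr/share/tt-rss/www/include/functions.php").param("varname").value_r("display_errors").allow();

This rule, a sketch based on the log message above, allows only that specific file to change display_errors.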
One more example:
[snuffleupagus][0.0.0.0][disabled_function][drop] Aborted execution on call of the function 'function_exists', because its argument '$function_name' content (exec) matched a rule in /var/www/wordpress/wp-content/plugins/webp-express/vendor/rosell-dk/exec-with-fallback/src/ExecWithFallback.php on line 35', referer: wp-admin/media-new.php
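This can be fixed the same way, with an allow rule for that specific file before the generic drop rule (again a sketch based on the log message):

sp.disable_function.function("function_exists").filename("/var/www/wordpress/wp-content/plugins/webp-express/vendor/rosell-dk/exec-with-fallback/src/ExecWithFallback.php").param("function_name").value("exec").allow();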
Last year I supported the Solo V2 Kickstarter campaign. Solo is a completely open source FIDO2 security key. You can use it for Two-Factor Authentication (2FA) on web sites, for protecting your private SSH keys and other things. The Solo2 is similar to keys such as the Yubikey from Yubico, the Google Titan Security Key, the Kensington VeriMark or the Nitrokey. Because all these keys implement the standards of the FIDO2 project, many of the examples here work with those keys too.
The Kickstarter campaign has ended; however, you can now buy Solo V2 security keys via their Indiegogo campaign. If you decide to buy a security key, I strongly recommend buying at least two of them, so that you can use the second key as a back-up in case the first key breaks or gets lost.
There is also little documentation, which can make it a bit difficult to get started if you are new to FIDO2 security keys. That’s why I decided to create this guide, to serve as a tutorial explaining how to use the Solo2.
Installing software
For basic usage of the key, you actually do not need to install any software. However there are some utilities available which allow you to update the firmware, set a PIN, view all credentials stored on the key, etc. First of all, we will install the solo2 CLI. It’s not yet packaged in Debian, so we need to download it from Github. I check the sha256sum to ensure I get the right files. This utility written in Rust does not support all functionality yet and for that reason I also install the Python based Solo1 CLI, which is packaged in Debian. The fido2-tools package finally contains some utilities which work on all FIDO2 keys.
When authenticating to websites or doing other operations, you will be asked to tap or touch the security key. The Solo2, unlike the Solo1 or Yubikey, does not have a physical button which needs to be depressed, but has 3 touch areas. These are the 3 gold coloured areas at both sides and at the back of the key. You do not need to press them, gently touching one of them is enough. In practice, I had most success touching the two touch zones at both sides of the key simultaneously with 2 fingers.
Updating the Solo V2 firmware version
You can check which version of the firmware is currently installed on your key with this command:
$ solo2 app admin version
At the moment of writing, the most recent version is 1:20200101.9 2:20220822.0.
To update the firmware version, run this command:
$ solo2 update
Setting a PIN on the Solo V2 key
I strongly recommend setting up a PIN on your FIDO2 key. It will be required for any administrative tasks on your key, such as adding or removing credentials (SSH keys, for example).
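The fido2-tools package provides fido2-token, which can list the connected FIDO2 devices so that you can find the device path:

$ fido2-token -L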
So in my case it’s /dev/hidraw4. Then change the PIN like this:
$ fido2-token -C /dev/hidraw4
Do not forget your PIN, otherwise you cannot use your key any more to authenticate to registered sites!
In case you forgot your FIDO2 PIN, you will need to completely reset your key. This will erase all keys and generate new ones, so you will need to have an alternative way to authenticate to websites where you registered this key.
$ solo key reset
FIDO2 Two-Factor Authentication
Usually you go to the security settings on the website, and there you can enable 2FA. For some sites, you will be required to set up TOTP first before you can register a security key. So make sure you have a TOTP application, such as FreeOTP+ for Android or Raivo OTP on iOS. TOTP then serves as a back-up method for 2FA in case you lose access to your key. If you have multiple FIDO2 keys, don’t forget to register them all.
A side note: don’t use SMS as a second factor for authentication. SMS 2FA is insecure because these messages are transferred in clear text and there are various ways they can be intercepted.
2FA with the Solo2 on Android
You can connect the Solo2 to your Android device by USB, or you can use NFC. When a web application tries to authenticate your key, you will get a pop-up message where you can choose whether you want to connect it via USB or use NFC. In the case of USB, connect your key to the USB port and tap it, just like you would do on your PC. If you chose NFC, just bring your Solo2 key to the back of your phone and it should authenticate.
This all works fine in Chromium based browsers, however I was not able to successfully authenticate with the Solo2 in Firefox. I managed to get it working with Firefox Nightly though. You will need to go to about:config and set security.webauthn.ctap2 and security.webauth.webauthn_enable_usb_token both to true in order to get it working.
Enabling 2FA on well-known websites
Google
Go to https://myaccount.google.com/ and in the left menu click on Security. Under Signing in to Google click on 2-Step Verification. There click on Enable two-factor authentication. In the wizard that appears, you will have to click on Security Key and follow the instructions to add your key.
Github
In the right top corner, click on your avatar and choose Settings. Then in the left menu click on Password & Authentication where you can enable Two-Factor Authentication. You will have to set up TOTP first, and after that, you can register your security key.
Gitlab
In the right top corner, click on your avatar and choose Preferences. Then in the left menu click on Account where you can enable Two-Factor Authentication. You will have to register a Two-Factor Authenticator (TOTP) first, and after that, you can register your WebAuthn devices.
Mastodon
Click on the Preferences icon then choose Account – Two-Factor Auth. You will need to set up TOTP first, and after that you can add a security key.
Nextcloud
The app Two-Factor WebAuthn needs to be installed on your Nextcloud instance.
Click on your avatar in the top right corner and choose Settings. Then choose Security in the left menu, and there you can add WebAuthn devices.
Microsoft personal account
Make sure you are using the newest firmware because this is not working with older firmware versions.
You need to go with a Chromium based browser to https://account.microsoft.com/ (Firefox does not work at the moment). There click on Security – Security Dashboard – Advanced security options – Add a new way to sign in or verify – Use a security key.
Microsoft Azure Active Directory account
If you have a Microsoft Azure Active Directory account, for example if you are using Office 365 in your organization, it might be possible to use your Solo2; however, this depends on the settings made by your administrator. If Enforce key restrictions is enabled, certain keys can be blocked, or only specific keys are allowed. Also, the option Enforce attestation needs to be disabled, because otherwise only keys which have been tested by Microsoft are allowed. Unfortunately, Solo keys have not been validated by Microsoft (Google’s Titan security keys are in the same situation). Note that this attestation does not include an evaluation of the security of the key.
Twitter
On the website, in the left menu, click on More – Settings and privacy and then on Security and account access – Security – Two-factor authentication. There choose Security key.
LinkedIn
It appears that at the moment of writing, LinkedIn does not support 2FA with FIDO2. You can set up TOTP though, which I recommend doing. Click on Me in the top menu and choose Settings & Privacy. Then in the left menu choose Sign in & security and click on Two-step verification.
WordPress
To enable FIDO2 two-factor authentication in WordPress, install the plugins two-factor and two-factor-provider-webauthn. Enable both plugins and then, in the WordPress administration menu, go to Settings – TwoFactor WebAuthn and use the option Disable old U2F provider. The two-factor plugin includes U2F by default, but this is not supported any more by Chromium based browsers, so you want to use the more modern WebAuthn instead. Then you can set up 2FA in the menu Users – Profile: enable WebAuthn, and under Security Keys (WebAuthn) click on Register New Key, tap your key and give it a unique name. Do this for both your security keys.
If you have set up ModSecurity with the Core Rule Set, you will end up with an HTTP 403 Forbidden error when trying to register your key or authenticate with it. Create /etc/modsecurity/99-wordpress-webauthn.conf with this content:
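A sketch of such an exclusion (the rule ID to exclude and the URI to match depend on what your modsec_audit.log shows; 942450 is just an example of a CRS rule that base64-encoded WebAuthn data can trigger):

SecRule REQUEST_URI "@contains /wp-admin/" \
    "id:1000001,phase:1,pass,nolog,ctl:ruleRemoveById=942450"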
Then reload your Apache configuration. It should now work.
What if it does not work?
If registering your key or authenticating with your key fails on a website, try with a Chromium based browser. Firefox’s CTAP2 support is very recent and still disabled by default, which can cause trouble on sites which require verification of a PIN. Make sure you use the latest version of Firefox (109 at the time of writing) and activate CTAP2 support by going to about:config and setting security.webauthn.ctap2 to true.
OpenSSH
To use your Solo2 key for OpenSSH authentication, you need at least version 8.2p1 on both server and client. OpenSSH 8.3p1 adds support for discoverable credentials, or resident keys: with discoverable credentials, the FIDO2 security key itself is enough to do SSH public key authentication. This carries a slight security risk though if people get access to your Solo2 key, because then the only protection is the PIN you have set on the key. Non-discoverable keys don’t have this risk, because you also need the private key file stored on your computer to authenticate.
SSHD configuration for FIDO2 keys
As written before, you need at least OpenSSH version 8.2p1 (or 8.3p1 for discoverable credentials). The default settings as provided by Debian should be OK, but I strongly recommend adding this option to sshd_config if you only use FIDO2 keys for interactive login:
PubkeyAuthOptions verify-required
I prefer doing this by creating a file /etc/ssh/sshd_config.d/fido2.conf with this line.
This option ensures that only keys which require a PIN can be used, adding at least some protection against theft of a FIDO2 key containing discoverable credentials.
You can also add the option touch-required to PubkeyAuthOptions in order to require touching the key when authenticating. This will make it impossible to authenticate with keys which were created with the no-touch-required option.
Setting up FIDO2 credentials for SSH
To generate credentials for SSH with your FIDO2 key, you basically use this command:
$ ssh-keygen -t ed25519-sk
There are different options available which you can add:
-O resident: You want to create discoverable credentials.
-O no-touch-required: You want to disable the requirement of touching the key for authenticating.
-O verify-required: You require that the PIN is entered when authenticating. I strongly recommend this option.
-O application=ssh:SomeUniqueName: In case you want to store different SSH keys on your Solo2, you will have to give each of them a different application name starting with ssh:
-f ~/.ssh/id_ed25519_sk_solo2_blue: If you use multiple FIDO2 keys, you may want to store the key in a unique file for every FIDO2 key. Replace the file name of this example by the name of your choice.
You can verify that the credentials are correctly stored on your Solo2 using this command:
$ solo key credential ls
In case you would want to remove the credentials stored on your key, you can do so by using this command:
$ solo key credential rm CREDENTIALID
Replace CREDENTIALID by the value you found with the previous command.
After creating the key, you need to copy the public key to the authorized_keys file on your server. You can use ssh-copy-id for that:
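$ ssh-copy-id -i ~/.ssh/id_ed25519_sk_solo2_blue.pub myusername@server.example.org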
Of course use the correct file name for the public key.
If you used the option no-touch-required when generating the key, you will have to edit the ~/.ssh/authorized_keys file on your server so that this option precedes the key. For example, if authorized_keys contains this:
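sk-ssh-ed25519@openssh.com AAAA... user@host

you change it to this:

no-touch-required sk-ssh-ed25519@openssh.com AAAA... user@host

Then connect to the server, specifying your private key (host and file names follow the earlier examples; substitute your own):

$ ssh -o IdentitiesOnly=yes -i ~/.ssh/id_ed25519_sk_solo2_blue myusername@server.example.org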
You will be asked to enter the PIN of your key and to touch it, depending on the options you used when creating the key. I add -o IdentitiesOnly=yes because otherwise ssh will first try to authenticate using the keys loaded in your SSH agent. With this option we force ssh to use only the private key we have specified with the -i parameter.
You can make this default by editing ~/.ssh/config, so that you don’t need to repeat the -i and -o parameters every time when connecting:
Host server.example.org
User myusername
IdentityFile ~/.ssh/id_ed25519_sk_solo2_blue
IdentitiesOnly yes
Importing discoverable credentials on another system
When you use discoverable credentials, all information needed for authentication is stored on the key itself, in contrast to non-discoverable credentials, where part of that information is also stored in the private key file on the computer. For this reason, with discoverable credentials, it is easy to import them on any computer.
$ cd ~/.ssh/
$ ssh-keygen -K
The public and private key will be written to the .ssh directory, and then you can authenticate again using the ssh -o IdentitiesOnly=yes -i command, just like on the system where you generated the key.
Troubleshooting FIDO2 SSH authentication
On the server check the sshd logs, which can be found in /var/log/auth.log or in the ssh journal:
# journalctl -u ssh
Successful authentication with your FIDO2 key, should be logged like this:
Accepted publickey for username from xxx.xxx.xxx.xxx port zzzzzz ssh2: ED25519-SK SHA256:....
Notice the ED25519-SK part which indicates that the credentials on your FIDO2 key were used.
If you see this:
error: public key ED25519-SK SHA256:... signature for username from xxx.xxx.xxx.xxx port zzzzz rejected: user presence (authenticator touch) requirement not met
This means that you created the key with the no-touch-required option. Try adding no-touch-required to the key’s line in authorized_keys on the server, as noted above, at least if your server does not have PubkeyAuthOptions touch-required set.
On the client-side, you can add the -v parameter to debug what happens:
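$ ssh -v -o IdentitiesOnly=yes -i ~/.ssh/id_ed25519_sk_solo2_blue myusername@server.example.org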
Here is a short reference sheet for resizing block devices and file systems.
Resizing block devices
Logical volumes
Add 100 GiB to the logical volume with name logicalvolume in volume group volumegroup.
# lvextend -L +100G volumegroup/logicalvolume
QEMU block device
If you resized a block device which is used to store a virtual disk for a QEMU VM, you will need to expand the virtual disk itself. First we need to know the name of the virtual disk. If you are managing your QEMU VMs via libvirt, you can use this command to see all virtual disks:
# virsh qemu-monitor-command VMname --hmp "info block"
drive-virtio-disk0 (#block108): /dev/vm/web-www (raw)
Attached to: /machine/peripheral/virtio-disk0/virtio-backend
Cache mode: writeback, direct
drive-virtio-disk1 (#block302): /dev/vm/web-logs (raw)
Attached to: /machine/peripheral/virtio-disk1/virtio-backend
Cache mode: writeback, direct
Then if you resized the logical volume /dev/vm/web-www from 100 to 200 GiB using the command mentioned before, you can resize the corresponding QEMU virtual disk drive-virtio-disk0 using this command:
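For example (a sketch; the drive name comes from the info block output above):

# virsh qemu-monitor-command VMname --hmp "block_resize drive-virtio-disk0 200G"

This uses the QEMU monitor’s block_resize command via virsh; alternatively, virsh blockresize VMname /dev/vm/web-www 200G should achieve the same.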
I wanted to set up Wireguard on a VPS, not only to tunnel IPv4 traffic, but also to allow me to tunnel IPv6 traffic. As this is IPv6, I preferred not to use NAT, but to assign a public IP address to the client. I read some documentation and blog posts, but I struggled to get it working. Most tutorials I found on the Internet create a separate IPv6 subnet for the VPN, but I could not get this to work: IPv6 traffic successfully went through the VPN tunnel and exited the VPN gateway, but the responses never reached the VPN gateway, and hence also not the client.
I decided to try another way: using an NDP proxy. NDP, the Neighbour Discovery Protocol, is the IPv6 counterpart of ARP in IPv4. Using this protocol, network devices can discover where on the network a certain IP address is located. By letting the VPN gateway answer NDP requests on behalf of the VPN client, the upstream router correctly sends all responses for the client to the VPN gateway, which then forwards them to the VPN client.
Configuring the network on the VPN gateway
I use systemd-networkd to set up the network. It’s the most modern way of network configuration and works the same on all distributions using systemd, but of course you can make the same settings in /etc/network/interfaces or whatever your distribution uses. Of course when making changes to a remote server, make sure you can access a console without needing a working network connection on the server, in case things go wrong and the network connection breaks.
On my VPN server, the public network interface is named ens192 (use the command ip addr to find it on your system). My public IPv4 address is www.xxx.yyy.zzz with subnet mask 255.255.255.0 and gateway ww.xx.yy.1. I have the 64-bit IPv6 prefix aaaa:bbbb:cccc:dddd::/64 and the IPv6 gateway is fe80::1.
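The gateway’s /etc/wireguard/wg0.conf then looks roughly like this (a sketch: the 192.168.7.0/24 VPN subnet matches the snat rule used further down, and the keys are placeholders you generate with wg genkey):

[Interface]
Address = 192.168.7.1/24, aaaa:bbbb:cccc:dddd::1/64
ListenPort = 51820
PrivateKey = <server private key>

[Peer]
PublicKey = <client public key>
AllowedIPs = 192.168.7.2/32, aaaa:bbbb:cccc:dddd::2/128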
Add a [Peer] section for every client, and change both the IPv4 and the IPv6 address in AllowedIPs so that they are unique (replace 2 with 3, and so on).
On the clients, create /etc/wireguard/wg0.conf with these contents:
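A sketch (the DNS servers and the endpoint host name are examples to substitute with your own):

[Interface]
Address = 192.168.7.2/24, aaaa:bbbb:cccc:dddd::2/64
PrivateKey = <client private key>
DNS = 9.9.9.9, 2620:fe::fe

[Peer]
PublicKey = <server public key>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = vpn.example.org:51820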
In the [Interface] section make sure to use the same IP addresses as the ones you have set in the corresponding [Peer] section on the VPN gateway. Set the DNS name (or IP address) of the VPN gateway as Endpoint in the [Peer] section. The hostname’s DNS entry can have both an A and AAAA record. You can replace your DNS servers by your preferred ones. You can also consider running your own DNS server on the VPN gateway.
Make sure that all wg*.conf files on client and server are only readable by root, because they contain private keys.
Make sure you have shorewall and shorewall6 installed:
# apt install shorewall shorewall6
Shorewall6
First we create a separate zone for our VPN in /etc/shorewall6/zones:
fw firewall
net ipv6
vpn ipv6
Then we configure the network interfaces and assign it to the right zone in /etc/shorewall6/interfaces:
net NET_IF tcpflags,routeback,proxyndp,physical=ens192
vpn wg0 tcpflags,routeback,optional
Then we allow connections from the VPN to the firewall and to the Internet in /etc/shorewall6/policy:
$FW net ACCEPT
vpn net ACCEPT
vpn $FW ACCEPT
net all DROP $LOG_LEVEL
# The FOLLOWING POLICY MUST BE LAST
all all REJECT $LOG_LEVEL
Keep in mind that your VPN client will have a public IPv6 address, which is accessible from the Internet. The rule net all DROP protects your VPN clients against access from the Internet.
Then we create some rules which allow access to the SSH server and the Wireguard VPN server from the Internet in /etc/shorewall6/rules:
Invalid(DROP) net $FW tcp
Ping(DROP) net $FW
ACCEPT $FW net ipv6-icmp
AllowICMPs(ACCEPT) all all
ACCEPT all all ipv6-icmp echo-request
SSH(ACCEPT) net $FW
ACCEPT net $FW udp 51820 # Wireguard
For IPv4 we configure Shorewall to use NAT to provide Internet access to the VPN clients.
/etc/shorewall/zones:
fw firewall
net ipv4
vpn ipv4
/etc/shorewall/interfaces:
net NET_IF dhcp,tcpflags,logmartians,nosmurfs,sourceroute=0,routefilter,routeback,physical=ens192
vpn wg0 tcpflags,logmartians,nosmurfs,sourceroute=0,optional,routefilter,routeback
/etc/shorewall/policy:
$FW net ACCEPT
vpn net ACCEPT
vpn $FW ACCEPT
net all DROP $LOG_LEVEL
# The FOLLOWING POLICY MUST BE LAST
all all REJECT $LOG_LEVEL
/etc/shorewall/rules:
# Drop packets in the INVALID state
Invalid(DROP) net $FW tcp
# Drop Ping from the "bad" net zone.. and prevent your log from being flooded..
Ping(DROP) net $FW
SSH(ACCEPT) net $FW
ACCEPT net $FW udp 51820
/etc/shorewall/snat:
MASQUERADE 192.168.7.0/24 NET_IF
/etc/shorewall/shorewall.conf:
IP_FORWARDING=Yes
Compile and load the rules and enable Shorewall permanently:
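# shorewall check && shorewall restart
# shorewall6 check && shorewall6 restart
# systemctl enable shorewall shorewall6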
Then, in order to make sure that the upstream router knows that the VPN client aaaa:bbbb:cccc:dddd::2 is reachable via the VPN gateway, we need to set up NDP proxying.
In a previous version of this guide, I configured NDP proxying in Shorewall6. However, we can set this up directly with systemd-networkd, so it will also work if you don’t use Shorewall6 but another firewall like Firewalld. Furthermore, I experienced problems with NDP proxy settings being lost after some time, requiring a restart of Shorewall6 to make the IPv6 connection over Wireguard work again. I hope this is fixed by setting it up in systemd-networkd.
Edit again the file /etc/systemd/network/internet.net and add this to the [Network] section:
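IPv6ProxyNDP=yes
IPv6ProxyNDPAddress=aaaa:bbbb:cccc:dddd::2

IPv6ProxyNDPAddress= registers the given address for NDP proxying; add one such line per VPN client.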
Some time ago, I received a new laptop, the HP Elitebook 845 G8. This is a 14″ laptop with an AMD CPU of the Renoir family, in my case an AMD Ryzen 7 PRO 5850U. As always, I run Debian GNU/Linux testing (currently Bookworm) on it. In this post, I will explain how to get all hardware working. This guide probably also works for other G8 Elitebooks, such as the Elitebook 835 G8 and Elitebook 855 G8, because they are all quite similar.
You can find detailed logs and reports of people running Linux on the Elitebook 845 G8 in the Hardware for Linux database.
Installation
I used a USB disk to boot the installer and a USB-C dock with an Ethernet interface to do a network installation. If you use the Debian installer with non-free firmware, you can also do the installation over wifi. I have not tried the current stable release Debian 11 Bullseye on this system. For best compatibility I strongly recommend testing because it has a more recent kernel and drivers.
Now we need to write the installer ISO image to a USB disk. Make sure there is no data on the drive that you want to keep, because this process will completely wipe the disk.
To write the ISO image to a USB disk, Windows users can use the application Rufus, MacOS users can use Balena Etcher. If you are using Linux, you can dd the ISO image on your USB disk, or use a GUI like Fedora Media Writer.
Reboot the system and press F10 when the HP logo appears to load the BIOS/UEFI setup. Go to the Advanced page and select Boot Options. There make sure that USB Storage Boot is enabled. If you want to work with custom kernels, it can be handy to disable Secure Boot in Security – Secure Boot Configuration, but it’s not needed to install and use Debian.
Save the changes you made (if any) and reboot the system and press F9 at the HP logo to get the boot menu. In the boot menu, select your USB drive to start the Debian installer.
Enabling non-free repositories
We will need to configure the non-free repositories for apt, because we need several firmware packages from non-free. Edit /etc/apt/sources.list and check whether every deb line has main, contrib and non-free at the end. If not, add them, and then run:
# apt update
Updating the BIOS/UEFI firmware
If you have a Windows installation, you can update the firmware from there, even before you install Linux. But you can also update the firmware without Windows. Follow the instructions in that blog post. It’s important to do this, not only because this gives you essential security fixes, but also bug fixes, some of which specifically for Linux compatibility.
Updating CPU firmware
Install the package amd64-microcode to ensure your AMD CPU is always running the latest microcode, which includes security fixes:
# apt-get install amd64-microcode
Flashing other firmware
The fwupd utility can download and install firmware from the LVFS. The firmware of the fingerprint reader of the Elitebook 845 G8 can be updated like this, and doing so may be necessary to get the fingerprint reader working in Linux. First make sure fwupd is installed:
# apt install fwupd fwupd-amd64-signed
Now update all available firmware:
# fwupdtool update
If you have a HP USB-C Dock G5, then new firmware is also available in the LVFS, but it’s in the testing repository. To enable this repository, run this command:
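# fwupdmgr enable-remote lvfs-testing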
Now add this to /etc/environment so that it uses the correct VDPAU driver:
VDPAU_DRIVER=radeonsi
In order to get GStreamer based players to use VA-API, you need to install this package:
# apt install gstreamer1.0-vaapi
After installing the firmware and editing /etc/environment you will need to reboot your system.
Unfortunately most video players and web browsers still don’t use VA-API hardware acceleration by default, but this needs to be configured manually. I will write a separate article about that later.
Realtek wifi adapter
The wifi adapter is a Realtek RTL8822CE according to lspci. The rtw88 driver needs firmware from the non-free firmware-realtek package:
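# apt install firmware-realtek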
This laptop can also be delivered with an Intel AX200 Wi-Fi 6 adapter (which is actually a better option than this one from Realtek). If you have this one, you will need to install the firmware-iwlwifi package instead.
Smartcard reader
lsusb identifies this smartcard reader as an Alcor AU9540:
Bus 005 Device 004: ID 058f:9540 Alcor Micro Corp. AU9540 Smartcard Reader
Note that lsusb only sees the smartcard reader when a card has been inserted.
You will need pcscd with the CCID driver to use this smartcard reader:
# apt install pcscd
Fingerprint reader
The fingerprint reader can be seen like this in lsusb:
Bus 003 Device 003: ID 06cb:00df Synaptics, Inc.
Make sure you have installed all firmware updates with fwupd and then you need to install these packages:
# apt install fprintd libpam-fprintd
In GNOME, under Settings – Users you can enable login on fingerprint and add your fingerprints.
Sound
lspci:
04:00.5 Multimedia controller: Advanced Micro Devices, Inc. [AMD] ACP/ACP3X/ACP6x Audio Coprocessor (rev 01)
04:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h/19h HD Audio Controller
Webcam
lsusb:
Bus 001 Device 002: ID 0408:5348 Quanta Computer, Inc. HP HD Camera
The webcam works out of the box. Many applications will see the IR camera as a second camera.
Bluetooth
lsusb:
Bus 003 Device 002: ID 0bda:b00c Realtek Semiconductor Corp. 802.11ac WLAN Adapter
If you use lsusb -v you will see that this is actually the Bluetooth Radio adapter. It is combined with the wifi adapter, hence the confusion.
Suspend/resume
HP does not support S3 (traditional suspend-to-ram/standby) in its recent Elitebooks any more, but instead uses s0ix (s2idle/suspend-to-idle/modern standby). S2idle support for AMD CPUs was only added in Linux 5.11 with the amd_pmc driver. I recommend a very recent kernel, because later kernel versions had bug fixes in this regard too. However, suspend regressed in stable update 5.17.3 (and others), a bug which was fixed in 5.17.5. I’m using a custom-built 5.17.5 kernel, but a fixed kernel will appear soon in Debian.
If you have HP Drivelock enabled, then your system will fail to resume. Drivelock is a security feature which can be set up in the BIOS and requires you to enter a password when starting up the system in order to access the contents of the disk. When trying to resume the system, fans start running, the keyboard backlight reacts to key presses, but the screen remains blank, nothing is written to logs and also network does not come up. Apparently this is a bug in HP’s BIOS/UEFI firmware which can be worked around by adding iommu=pt to the kernel command line. To do so, edit /etc/default/grub and add this to the variable GRUB_CMDLINE_LINUX_DEFAULT. For example:
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
Then update the GRUB configuration:
# update-grub
Install isenkram to help install drivers when plugging in hardware
Isenkram is a utility which will show a message when you connect hardware to your system and extra software or firmware is available for it.
# apt install isenkram
Enabling trimming of the NVME SSD
Enable the fstrim timer to make sure the SSD is trimmed on regular intervals:
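# systemctl enable --now fstrim.timer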
If you are using Linux 6.3 you don’t need to do this, but you will have to add this to the GRUB_CMDLINE_LINUX_DEFAULT options in /etc/default/grub:
amd_pstate=active
and run update-grub.
Set up TLP
TLP is a tool which optimizes power consumption of your system in order to increase battery life. TLP also has a Radio Device Wizard option, which I will use here to automatically disable wifi when the system is connected via an Ethernet cable.
# apt install tlp tlp-rdw
Configure the Radio Device Wizard by creating the file /etc/tlp.d/10-tlp-rdw.conf:
# tlp-rdw - Parameters for the radio device wizard
# Possible devices: bluetooth, wifi, wwan.
# Separate multiple radio devices with spaces.
# Default: <none> (for all parameters below)
DEVICES_TO_DISABLE_ON_LAN_CONNECT="wifi wwan"
DEVICES_TO_DISABLE_ON_WIFI_CONNECT="wwan"
DEVICES_TO_DISABLE_ON_WWAN_CONNECT="wifi"
# Radio devices to enable on disconnect.
DEVICES_TO_ENABLE_ON_LAN_DISCONNECT="wifi wwan"
DEVICES_TO_ENABLE_ON_WIFI_DISCONNECT=""
DEVICES_TO_ENABLE_ON_WWAN_DISCONNECT=""
# Radio devices to enable/disable when docked.
DEVICES_TO_ENABLE_ON_DOCK=""
DEVICES_TO_DISABLE_ON_DOCK=""
# Radio devices to enable/disable when undocked.
DEVICES_TO_ENABLE_ON_UNDOCK="wifi"
DEVICES_TO_DISABLE_ON_UNDOCK=""
To enable best ASPM power saving features when on battery, create /etc/tlp.d/20-aspm.conf:
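PCIE_ASPM_ON_BAT=powersupersave

powersupersave is the most aggressive ASPM policy TLP supports; check the TLP documentation if your kernel does not expose it.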
Submitting your system to the Linux Hardware database
The Linux Hardware database is a useful tool where users searching for hardware, can check the compatibility of systems with Linux. I recommend running this tool on all your Linux systems. After submission, you will get a link where you can view the data and indicate whether all hardware works and which work-arounds you had to apply. Click on the Review button on the page to do so.
# apt install hw-probe
# hw-probe -all -upload
Conclusion
Linux compatibility of the HP Elitebook 845 G8 is actually in good shape. It’s not perfect, but all hardware can be made to work. On distros like Ubuntu, which install non-free firmware by default, it should be even easier to make everything work. Still, HP lags behind Dell and Lenovo in Linux support, because they don’t make it possible to flash the BIOS/UEFI firmware using fwupd, while all recent Dell and Lenovo business laptops have their firmware available in the LVFS. Also, the problem that iommu=pt is needed to successfully resume the laptop when Drivelock is enabled is something HP should address in a BIOS update.
A web application firewall (WAF) filters HTTP traffic. By integrating a WAF in your web server, you can make sure potentially dangerous requests are blocked before they reach your web application, and that sensitive data does not leak out of your web server. This adds an extra defensive layer, potentially offering protection against zero-day vulnerabilities in your web server or web applications. In this blog post, I give a tutorial on how to install and configure the ModSecurity web application firewall and the Core Rule Set on Debian. With some minor adaptations, you can also use this guide for setting up ModSecurity on Ubuntu or other distributions.
ModSecurity is the most well-known open source web application firewall. The future of ModSecurity does not look too bright, but fortunately Coraza WAF, an alternative which is completely compatible with ModSecurity, is in development. At this moment Coraza only integrates with the Caddy web server and does not have a connector for Apache or Nginx, so for that reason it is not yet usable as a replacement for ModSecurity.
While ModSecurity provides the framework for filtering HTTP traffic, you also need rules which define what to block, and that’s where the Core Rule Set (CRS) comes in. CRS is a set of generic rules which offer protection against a wide range of common attacks via HTTP, such as SQL injection, code injection and cross-site scripting (XSS) attacks.
Install ModSecurity and the Core Rule Set on Debian
I install the Apache module for ModSecurity, the geoip-database package, which can be used for blocking all requests from certain countries, and modsecurity-crs, which contains the Core Rule Set. I take the latter package from testing, because it has a newer version (3.3.2 at the time of writing). There is no risk in taking this package from testing, because it only contains the rules and does not depend on any other packages from testing/unstable. If you prefer faster updates, you can also use unstable.
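Assuming you have a testing (or unstable) entry in your apt sources, the installation commands look like this:
# apt install libapache2-mod-security2 geoip-database
# apt install -t testing modsecurity-crs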
Now edit /etc/modsecurity/modsecurity.conf. I highlight some of the options:
SecRuleEngine On
SecRequestBodyLimit 536870912
SecRequestBodyNoFilesLimit 131072
SecAuditLog /var/log/apache2/modsec_audit.log
#SecRule MULTIPART_UNMATCHED_BOUNDARY "!@eq 0" \
#"id:'200004',phase:2,t:none,log,deny,msg:'Multipart parser detected a possible unmatched boundary.'"
SecPcreMatchLimit 500000
SecPcreMatchLimitRecursion 500000
SecStatusEngine Off
The SecRuleEngine option controls whether rules should be processed. If set to Off, you completely disable all rules; with On you enable them, and ModSecurity will block malicious requests. If set to DetectionOnly, ModSecurity will only log potential malicious activity flagged by your rules, but will not block it. DetectionOnly can be useful for temporarily trying out the rules in order to find false positives before you really start blocking potentially malicious activity.
The SecAuditLog option defines a file which contains audit logs. This file will contain detailed logs about every request triggering a ModSecurity rule.
The SecPcreMatchLimit and SecPcreMatchLimitRecursion options set the match limit and match limit recursion for the regular expression library PCRE. Setting these high enough will prevent errors about the PCRE limits being exceeded while analyzing data, but setting them too high can make ModSecurity vulnerable to a denial-of-service (DoS) attack. A Core Rule Set developer recommends a value of 500000, so that’s what I use here.
Set SecStatusEngine to Off to prevent ModSecurity from sending version information back to its developers.
After changing any configuration related to ModSecurity or the Core Rule Set, reload your Apache web server:
# systemctl reload apache2
Configuring the Core Rule Set
The Core Rule Set can be configured via the file /etc/modsecurity/crs/crs-setup.conf.
Anomaly Scoring
By default the Core Rule Set is using anomaly scoring mode. This means that individual rules add to a so-called anomaly score, which is evaluated at the end. If the anomaly score exceeds a certain threshold, the traffic is blocked. You can read more about this configuration in crs-setup.conf, but the default configuration should be fine for most people.
Setting the paranoia level
The paranoia level is a number from 1 to 4 which determines which rules are active and contribute to the anomaly scoring. The higher the paranoia level, the more rules are activated and hence the more aggressive the Core Rule Set is, offering more protection but potentially also causing more false positives. By default the paranoia level is set to 1. If you work with sensitive data, it is recommended to increase the paranoia level.
The executing paranoia level defines which rules will be executed without their score being added to the anomaly score: when HTTP traffic hits rules of a paranoia level above the blocking one, up to the executing paranoia level, this traffic will only be logged, not blocked. It is especially useful to prepare for increasing the paranoia level: you can find false positives on the higher level without causing any disruption for your users.
To set the paranoia level to 1 and the executing paranoia level to 2, make sure you have these rules set in crs-setup.conf:
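In CRS 3.3 these are the SecAction rules with IDs 900000 and 900001, which are commented out by default; uncommented and adapted, they look like this:
SecAction \
"id:900000,\
phase:1,\
nolog,\
pass,\
t:none,\
setvar:tx.paranoia_level=1"
SecAction \
"id:900001,\
phase:1,\
nolog,\
pass,\
t:none,\
setvar:tx.executing_paranoia_level=2"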
Once you have fixed all false positives, you can raise the paranoia level to 2 to increase security.
Defining the allowed HTTP methods
By default the Core Rule Set only allows the GET, HEAD, POST and OPTIONS HTTP methods. For many standard sites this will be enough, but if your web applications also use RESTful APIs or WebDAV, then you will need to add the required methods. Change rule 900200 and add the HTTP methods mentioned in the comments in crs-setup.conf. The default looks like this:
SecAction \
"id:900200,\
phase:1,\
nolog,\
pass,\
t:none,\
setvar:'tx.allowed_methods=GET HEAD POST OPTIONS'"
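For example, to also allow the methods used by WebDAV, you could extend the list like this (limit the list to the methods your applications actually need):
setvar:'tx.allowed_methods=GET HEAD POST OPTIONS PUT DELETE MKCOL COPY MOVE PROPFIND PROPPATCH LOCK UNLOCK'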
Disallowing old HTTP versions
There is a rule which determines which HTTP versions are allowed in HTTP requests. I uncomment it and modify it to only allow HTTP versions 1.1 and 2.0. Legitimate browsers and bots always use one of these modern HTTP versions; older versions are usually a sign of malicious activity.
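In CRS 3.3 this is rule 900230; uncommented and adapted to allow only modern versions, it looks like this:
SecAction \
"id:900230,\
phase:1,\
nolog,\
pass,\
t:none,\
setvar:'tx.allowed_http_versions=HTTP/1.1 HTTP/2 HTTP/2.0'"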
Personally I’m not a fan of completely blocking all traffic from a whole country, because you will also block legitimate visitors to your site, but in case you want to do this, you can configure it in crs-setup.conf:
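You point SecGeoLookupDB at a GeoIP database and uncomment rule 900600; the path below is where Debian’s geoip-database package installs the legacy GeoIP country database:
SecGeoLookupDB /usr/share/GeoIP/GeoIP.dat
SecAction \
"id:900600,\
phase:1,\
nolog,\
pass,\
t:none,\
setvar:'tx.high_risk_country_codes='"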
Add the two-letter country codes you want to block to the last line (before the two quotes), separating multiple country codes with spaces.
Make sure you have the package geoip-database installed.
Core Rule Set Exclusion rules for well-known web applications
The Core Rule Set contains rule exclusions for some well-known web applications like WordPress, Drupal and Nextcloud, which reduce the number of false positives. I add a section to crs-setup.conf which allows me to enable these exclusions by setting the WEBAPPID variable in the Apache configuration whenever I need them, as sketched below.
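A sketch of such a section; the rule IDs are illustrative, but the tx.crs_exclusions_* variables are the ones checked by the CRS exclusion rules:
SecRule WEBAPPID "@beginsWith wordpress" \
"id:10900,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_wordpress=1"
SecRule WEBAPPID "@streq nextcloud" \
"id:10901,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_nextcloud=1"
SecRule WEBAPPID "@streq drupal" \
"id:10902,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_drupal=1"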
Adding rules for Log4Shell and Spring4Shell detection
At the end of 2021 a critical vulnerability, CVE-2021-44228, named Log4Shell, was detected in Log4j, which allows remote attackers to run code on a server with a vulnerable Log4j version. While the Core Rule Set offered some mitigation of this vulnerability out of the box, this protection was not complete. New, improved detection rules against Log4Shell were developed. Because of the severity of this bug and the fact that it’s being exploited in the wild, I strongly recommend adding this protection manually when using Core Rule Set version 3.3.2 (or older). Newer, not yet released versions should have complete protection out of the box.
First modify /etc/apache2/mods-enabled/security2.conf so that it looks like this:
<IfModule security2_module>
# Default Debian dir for modsecurity's persistent data
SecDataDir /var/cache/modsecurity
# Include all the *.conf files in /etc/modsecurity.
# Keeping your local configuration in that directory
# will allow for an easy upgrade of THIS file and
# make your life easier
IncludeOptional /etc/modsecurity/*.conf
# Include OWASP ModSecurity CRS rules if installed
IncludeOptional /usr/share/modsecurity-crs/*.load
SecRuleUpdateTargetById 932130 "REQUEST_HEADERS"
</IfModule>
Then create the file /etc/modsecurity/99-CVE-2021-44228.conf with this content:
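The rules below are a simplified sketch based on the detection rules published by the Core Rule Set project; the rule ID is illustrative, and you should take the exact rules from the CRS project’s Log4Shell announcement:
# Simplified Log4Shell detection rule (sketch; use the official CRS rules).
# Looks for ${jndi: and obfuscated variants such as ${${lower:j}ndi in the
# request line, arguments, cookies and headers.
SecRule REQUEST_LINE|ARGS|ARGS_NAMES|REQUEST_COOKIES|REQUEST_COOKIES_NAMES|REQUEST_HEADERS|XML:/* \
"@rx (?:\$\{[^}]{0,4}\$\{|\$\{(?:jndi|ctx))" \
"id:9999101,\
phase:2,\
deny,\
status:403,\
t:none,t:urlDecodeUni,t:cmdline,\
log,\
msg:'Potential Log4Shell (CVE-2021-44228) attack'"
After reloading Apache, you can check that ModSecurity is blocking malicious requests by sending a request with an obvious attack payload, for example this hypothetical request with a cross-site scripting payload (replace the host name by your own server):
$ curl -i 'https://yoursite.example.org/?search=<script>alert(1)</script>'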
Whenever a request hits your ModSecurity rules, this is logged in your Apache error log. The request above triggered several messages in the error log.
First we hit different filters which check for XSS vulnerabilities, more specifically rules 941100, 941110 and 941160, all of them having the tag paranoia-level/1.
Then we hit rule 949110, which caused the web server to return the HTTP 403 Forbidden response because the inbound anomaly score, 15, is higher than 5. Finally, rule 980130 gives us some more information about the scoring: we hit a score of 15 at paranoia level 1, while rules at the other paranoia levels contributed 0 to the total score. We also see the scores for the individual types of attack: in this case all 15 points were scored by rules detecting XSS attacks. This is the meaning of the different abbreviations used:
SQLI: SQL injection
XSS: cross-site scripting
RFI: remote file inclusion
LFI: local file inclusion
RCE: remote code execution
PHPI: PHP injection
HTTP: HTTP violation
SESS: session fixation
More detailed logs about the traffic hitting the rules can be found in the file /var/log/apache2/modsec_audit.log.
Fixing false positives
First of all, in order to minimize the number of false positives, you should set the WEBAPPID variable if you are using one of the known web applications for which the Core Rule Set has a default exclusion set. These web applications are currently WordPress, Drupal, Dokuwiki, Nextcloud, Xenforo and cPanel. You can do so by using the SecWebAppId option (https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual-(v2.x)#SecWebAppId) in a VirtualHost or Location definition in the Apache configuration. For example, if you have a VirtualHost which is used by Nextcloud, set this within the VirtualHost definition:
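SecWebAppId "nextcloud"
The CRS can then activate the Nextcloud-specific exclusions for all requests handled by this VirtualHost (see the crs-setup.conf section above).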
If you have multiple WordPress sites, give each of them a unique WEBAPPID whose name starts with wordpress. Add a different suffix for every instance, so that each one runs in its own application namespace in ModSecurity.
If you still encounter false positives, you can completely disable rules by using the configuration directive SecRuleRemoveById. I strongly recommend not disabling rules globally, but limiting the removal to the specific locations where you need it, for example by putting it within <Location> or <LocationMatch> tags in the Apache configuration. For example:
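A hypothetical example, disabling rule 941160 only for URLs under /wp-admin (both the path and the rule ID are illustrative):
<LocationMatch "^/wp-admin/">
SecRuleRemoveById 941160
</LocationMatch>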
Pay attention not to disable any of the 949*, 959*, and 980* rules: disabling the 949* and 959* rules would disable all the blocking rules, while disabling the 980* rules would give you less information about what is happening in the logs.
Conclusion
ModSecurity and the Core Rule Set offer an additional security layer for web servers in your defence in depth strategy. I strongly recommend implementing this on your servers because it makes it harder to abuse security vulnerabilities.
Keep an eye on the Core Rule Set blog and Twitter account: sometimes they post new rules for specific new critical vulnerabilities, which can be worthwhile to add to your configuration.