Protecting your server from known bad IPs with Foomuuri iplists

On the Internet we can find (usually crowdsourced) lists of malicious IP addresses responsible for attacks. We can easily integrate them into Foomuuri in order to block connections from these bad hosts. Not only does this improve security, it is also a performance win, because our daemons no longer have to waste time dealing with these malicious connections.

The blocklists

  • blocklist.de: a crowdsourced list of IP addresses involved in all kinds of attacks
  • TechMDW blacklist: this list is compiled by the Swedish company TechMDW AB and is based on Crowdsec with their own additions
  • GreenSnow: a list of IP addresses caught attacking servers
  • Spamhaus DROP: The Don’t Route or Peer List by Spamhaus contains netblocks which you should never interact with because they are leased or stolen by criminal organisations
  • Emerging Threats: compilation of Spamhaus DROP list, the top attackers list by DShield and lists by abuse.ch
  • Interserver: list consisting of IP addresses attacking servers of the web hosting company Interserver
  • Stopforumspam: list of toxic IP addresses which are believed to be used only for spamming websites

Blocking incoming connections from malicious IPs

Create the file /etc/foomuuri/iplist.conf with these contents:

iplist {
	@blocklist_de   https://lists.blocklist.de/lists/all.txt refresh=15m
	@techmdw        https://blacklist.techmdw.com/ refresh=30m
	@greensnow      https://blocklist.greensnow.co/greensnow.txt refresh=30m
	@et             https://rules.emergingthreats.net/fwrules/emerging-Block-IPs.txt refresh=24h
	@interserver    https://rbldata.interserver.net/ip.txt refresh=15m
	@stopforumspam  https://www.stopforumspam.com/downloads/toxic_ip_cidr.txt refresh=2h
	@spamhausdrop   https://www.spamhaus.org/drop/drop.txt refresh=24h
	@spamhausdropv6 https://www.spamhaus.org/drop/dropv6.txt refresh=24h
}

Then in the public-localhost section (which I usually put in /etc/foomuuri/public-localhost.conf), add this at the top:

        saddr @blocklist_de drop counter "blocklist.de"
        saddr @techmdw drop counter "techmdw" 
        saddr @greensnow drop counter "greensnow"
        saddr @spamhausdrop drop counter "spamhausdrop"
        saddr @spamhausdropv6 drop counter "spamhausdropv6"
        saddr @et drop counter "et"
        saddr @interserver drop counter "interserver"
        saddr @stopforumspam drop counter "stopforumspam"

To avoid locking yourself out of your own system because of a false positive or an error in one of the lists, I recommend adding a rule before these which always allows you access. For example, to make sure you can always connect over SSH from the IP address xxx.xxx.xxx.xxx, put this as the first rule in the public-localhost section:

        ssh saddr xxx.xxx.xxx.xxx accept

In the rules above I’m not logging all the details of the dropped connections, but I’m keeping a counter so that I can see how many times such a rule has been hit. You can use the command

# foomuuri list counter

to see how many packets and bytes have been dropped by these rules.

If you want to log all individual dropped connections, you can add this at the end of every line:

log "name_of_the_blocklsit"

They will then be logged with the prefix name_of_the_blocklist.
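
For example, to both count and log drops from the GreenSnow list:

        saddr @greensnow drop counter "greensnow" log "greensnow"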

Rejecting outgoing connections to malicious IPs

We can also block outgoing connections. You should especially do this for the Spamhaus DROP lists. Add this to the localhost-public section:

        daddr @blocklist_de reject counter "out_blocklist.de" log "out_blocklist.de"
        daddr @techmdw reject counter "out_techmdw" log "out_techmdw"
        daddr @greensnow reject counter "out_greensnow" log "out_greensnow"
        daddr @spamhausdrop reject counter "out_spamhausdrop" log "out_spamhausdrop"
        daddr @spamhausdropv6 reject counter "out_spamhausdropv6" log "out_spamhausdropv6"
        daddr @et reject counter "out_et" log "out_et"
        daddr @interserver reject counter "out_interserver" log "out_interserver"
        daddr @stopforumspam reject counter "out_stopforumspam" log "out_stopforumspam"

Notice that I’m rejecting these connections instead of dropping them, so that applications don’t keep waiting until the connection attempt times out, and I’m logging them. Normally these rules should only very rarely get triggered, but if they do, you want detailed logs so you can easily investigate what’s going on.

Dropping or allowing incoming connections by country of origin

Another very effective method to prevent abuse is to limit connections to services like SSH and your mail server to certain countries of origin. You can find lists of IP addresses (both IPv4 and IPv6) per country on https://github.com/ipverse/rir-ip/tree/master/country . You can add them to an iplist in Foomuuri and then use these in the public-localhost section. Note that these lists are not perfect, and sometimes connections can come from a different country than the one the IP address is registered to in this database. Public VPN services in particular sometimes suffer from this problem, so be careful if you are using these.
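
For example, you could add per-country lists from that repository to the iplist section. A minimal sketch for Belgium; the raw-file URLs (ipv4-aggregated.txt / ipv6-aggregated.txt under each country directory) are my assumption of the repository layout, so verify them first:

iplist {
	@country_be   https://raw.githubusercontent.com/ipverse/rir-ip/master/country/be/ipv4-aggregated.txt refresh=24h
	@country_be6  https://raw.githubusercontent.com/ipverse/rir-ip/master/country/be/ipv6-aggregated.txt refresh=24h
}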

You can also use this aggregated list of all European IP addresses, but unfortunately that list only exists for IPv4 addresses.

To use this aggregated list, add this to the iplist section:

        @europe https://ipv4.fetus.jp/krfilter.4.txt refresh=24h 

Then with these rules in the public-localhost section, I only allow IPv4 connections to SSH and to port 143 (IMAP), port 993 (IMAPS), port 587 (Submission) and port 465 (Submissions) from European IPs. Note that I allow IPv6 from the whole world because this aggregated list only contains IPv4 addresses.

        ssh ipv4 saddr @europe
        ssh ipv6
        ssh drop counter "non-europe" log "non-europe"

        imap ipv4 saddr @europe
        imap ipv6
        imap drop counter "non-europe" log "non-europe"
        imaps ipv4 saddr @europe
        imaps ipv6
        imaps drop counter "non-europe" log "non-europe"

        submission ipv4 saddr @europe
        submission ipv6
        submission drop counter "non-europe" log "non-europe"
        submissions ipv4 saddr @europe
        submissions ipv6
        submissions drop counter "non-europe" log "non-europe"

More information

https://github.com/FoobarOy/foomuuri/wiki/Configuration#iplist

Securing PHP-FPM with AppArmor

PHP-FPM is an ideal candidate to secure with AppArmor. Not only can the security of a web server be endangered by security bugs in PHP itself, it can also be affected by security holes in PHP applications. By confining PHP-FPM with AppArmor, we can limit the damage when a security hole is exploited, for example by preventing PHP-FPM from reading arbitrary files on your system or executing random binaries, which may contain a Linux backdoor or crypto-miner malware.

Preparation

First it is important that you run different PHP web applications as different users by running them in different pools. Each pool can then be confined with its own separate AppArmor subprofile or hat, so that the different PHP applications are protected from each other.

Then on Debian 12 Bookworm, I recommend that you upgrade to the AppArmor packages from my own repository, because they contain some important bug fixes, notably aa-logprof and aa-genprof supporting exec events in hats. Set up the bookworm-frehi repository in apt and upgrade AppArmor to the version available in this repository:

# apt install -t bookworm-frehi apparmor

Now we are ready to create our AppArmor profile for PHP-FPM. Debian already ships a basic profile /etc/apparmor.d/php-fpm as part of the package apparmor-profiles. However, aa-logprof and aa-genprof want to write everything to one single file, while /etc/apparmor.d/php-fpm heavily relies on including other files; the comments in that file would also be lost. For that reason I chose to create my own profile from scratch with aa-genprof. So first I disable the php-fpm profile and remove it:

# aa-disable php-fpm
# rm /etc/apparmor.d/disable/php-fpm
# rmdir /etc/apparmor.d/php-fpm.d/

Generating a profile for /usr/sbin/php-fpm8.2

I start aa-genprof:

# aa-genprof /usr/sbin/php-fpm8.2

In another shell, I stop PHP-FPM:

# systemctl stop php8.2-fpm

and now I modify all pools in /etc/php/8.2/fpm/pool.d/*.conf and I add the apparmor_hat value to the name of the pool. For example:

apparmor_hat = myblog

This instructs PHP-FPM to switch to this subprofile or hat for this pool. If every pool uses a different hat, we can isolate the different pools running different web applications from each other.
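
To illustrate, a pool file such as /etc/php/8.2/fpm/pool.d/myblog.conf could then look roughly like this; the user, group, socket path and process manager values are assumptions for this example:

[myblog]
user = myblog
group = myblog
listen = /run/php/php8.2-fpm-myblog.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
; switch to the myblog hat of the /usr/sbin/php-fpm8.2 AppArmor profile
apparmor_hat = myblog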

Now verify with aa-status whether /usr/sbin/php-fpm8.2 is in complain mode and if not, put it in complain mode manually with the aa-complain command:

# aa-complain /etc/apparmor.d/usr.sbin.php-fpm8.2

Now we start PHP-FPM:

# systemctl start php8.2-fpm

It’s a good idea to immediately press S to scan for the first events. Process them like I showed in my previous AppArmor tutorial. As usual, in case of doubt choose the more fine-grained option instead of broad abstractions.

I will discuss some specific things you will encounter.

You will see that PHP-FPM requests read access to /etc/passwd and /etc/group. It needs this in order to look up the UID and GID of the user and group value you have set in your FPM pools. As we are running our different pools in their own subprofiles, which don’t have access to these files, this is not a security problem: your PHP applications won’t have access to these files if you have properly set apparmor_hat in every pool.

At some point our PHP-FPM process switches to the profile /usr/sbin/php-fpm8.2//null-/usr/sbin/php-fpm8.2//myblog. This is because we have set apparmor_hat = myblog in the myblog pool. The null part in the profile name indicates that this profile does not exist yet and so AppArmor uses this as a temporary name. We will have to fix this manually later. Now accept this rule.

Profile:        /usr/sbin/php-fpm8.2
Exec Condition: ALL
Target Profile: /usr/sbin/php-fpm8.2//null-/usr/sbin/php-fpm8.2//myblog

 [1 - change_profile -> /usr/sbin/php-fpm8.2//null-/usr/sbin/php-fpm8.2//myblog,]
(A)llow / [(D)eny] / (I)gnore / Audi(t) / Abo(r)t / (F)inish

Continue processing all events, and when you are done save the profile and quit aa-genprof.

Check with aa-status in which mode the /usr/sbin/php-fpm8.2 profile is and switch it back to complain mode if necessary. Otherwise if it’s in enforce mode, your PHP applications will probably not work correctly any more.

We will now edit the file /etc/apparmor.d/usr.sbin.php-fpm8.2 in a text editor. Find the change_profile rules in that file, and replace them by this line:

change_profile -> /usr/sbin/php-fpm8.2//*,

This allows PHP-FPM to switch to all PHP-FPM hats.

In the file, you will also find some signal rules which refer to the temporary null profile. We need to fix these too like this:

  signal send set=kill peer=/usr/sbin/php-fpm8.2//*,
  signal send set=quit peer=/usr/sbin/php-fpm8.2//*,
  signal send set=term peer=/usr/sbin/php-fpm8.2//*,

Then, inside the profile block that starts with /usr/sbin/php-fpm8.2 flags=(complain) {, create an empty myblog hat in complain mode:

  ^myblog flags=(complain) {
  }

Now reload the AppArmor profile:

# apparmor_parser -r /etc/apparmor.d/usr.sbin.php-fpm8.2

Restart aa-genprof:

# aa-genprof /usr/sbin/php-fpm8.2

and restart PHP-FPM in another shell:

# systemctl restart php8.2-fpm

Now exercise your web applications and process all generated events. Particularly test posting new items on your website and uploading pictures and other files. You will notice that the events triggered by the myblog pool are now in the hat myblog, which is shown by aa-genprof as /usr/sbin/php-fpm8.2^myblog.

In WordPress I am using the WebP Express plugin, which automatically creates a webp file for every uploaded image by calling the command cwebp.

aa-genprof will show this event:

Profile:  /usr/sbin/php-fpm8.2^myblog
Execute:  /usr/bin/dash
Severity: unknown

(I)nherit / (N)amed / (U)nconfined / (X) ix On / (D)eny / Abo(r)t / (F)inish

So the myblog pool tries to launch the dash shell as part of this process. By pressing I you can let it inherit the myblog hat's permissions, so it will have exactly the same permissions and cannot read or modify any files from other web applications running in a different PHP-FPM pool, or other files on the system. I also had to do this for the cwebp executable and a few others which are used by WebP Express. It is important that you never choose unconfined, because then you allow these executables to do anything on your system without any AppArmor restrictions.

When WordPress sends e-mail, it will call sendmail:

[(S)can system log for AppArmor events] / (F)inish
Reading log entries from /var/log/audit/audit.log.
Target profile exists: /etc/apparmor.d/usr.sbin.sendmail

Profile:  /usr/sbin/php-fpm8.2^myblog
Execute:  /usr/sbin/sendmail
Severity: unknown

(I)nherit / (P)rofile / (N)amed / (U)nconfined / (X) ix On / (D)eny / Abo(r)t / (F)inish

Sendmail should run in its own profile, so you can choose P here.

Creating a child profile for dash

In our example our PHP-FPM worker calls dash and cwebp to process images. We have chosen to let dash and cwebp inherit all permissions of the worker. While this already is a great security improvement, because it prevents dash from executing random binaries and reading files unrelated to this web application, it is still too broad. There is no reason why dash and cwebp should be able to read our PHP files, for example, or any files other than image files. So we can improve our security even more by running these commands in a separate child profile.

To do so, I edit /etc/apparmor.d/usr.sbin.php-fpm8.2 in a text editor, and I remove all these lines from the myblog hat:

    /usr/bin/cwebp mrix,
    /usr/bin/dash mrix,
    /usr/bin/nice mrix,
    /usr/bin/which.debianutils mrix,

and I replace them with this rule, which forces dash to run in a child profile named myblogdash:

/usr/bin/dash Px -> /usr/sbin/php-fpm8.2//myblogdash,

Then in the /usr/sbin/php-fpm8.2 profile, but outside the myblog hat, I create this empty child profile:

  profile myblogdash flags=(complain) {
  }

reload the profile, restart PHP-FPM and start aa-genprof again:

# apparmor_parser -r /etc/apparmor.d/usr.sbin.php-fpm8.2
# systemctl restart php8.2-fpm
# aa-genprof /usr/sbin/php-fpm8.2

Now I upload an image again in order to trigger the execution of dash, and I process the generated events with aa-genprof. Finally I end up with this child profile:

  profile myblogdash {
    include <abstractions/base>
    include <abstractions/bash>

    deny /var/www/example.com/wordpress/wp-admin/admin-ajax.php r,
    deny /var/www/example.com/wordpress/wp-admin/async-upload.php r,
    deny /var/www/example.com/wordpress/wp-admin/options-general.php r,
    deny /var/www/example.com/wordpress/xmlrpc.php r,

    /usr/bin/cwebp mrix,
    /usr/bin/dash mr,
    /usr/bin/nice mrix,
    /usr/bin/which.debianutils mrix,
    owner /var/www/example.com/wordpress/**.{jpg,jpeg,png} r,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossless.webp w,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossy.webp w,
    owner /var/www/example.com/wordpress/wp-content/webp-express/webp-images/doc-root/**.webp w,

  }

As you can see, our dash process and all other processes which inherit this profile, including cwebp, now have much more limited permissions and can only read JPEG and PNG files from our website and write WebP images. I had some events of the sh process reading files like wp-admin/options-general.php, but I have no idea why this process would do that. I chose to deny this, and even with this child profile in enforce mode, everything appeared to be working fine. Anyway, these files don’t contain anything sensitive, so it would not be a real problem if you accepted this.

Write permissions on PHP files

A topic which is worth thinking about: which permissions are you giving your PHP worker process on the PHP files? Obviously PHP-FPM will need read access to these files in order to execute them, but do you also want to give it write access? There is a strong argument not to: it will prevent hackers who manage to exploit your PHP application from writing a new PHP file with their own code (such as a PHP backdoor) and then executing it, or from modifying your existing PHP files and inserting malicious code into them. So confining your PHP process to prevent it from writing PHP files is a huge advantage for security.

However, applications like WordPress will try to update themselves, and in order to do so, they need write access to their own files. Installing new plug-ins via the web interface also requires permission to write PHP files. Automatic updates are also a big advantage for security. As an alternative to WordPress updating itself automatically, you could easily write a shell script which uses wp-cli to update WordPress and all modules and themes, and call it regularly from a cron job or systemd timer.
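
A minimal sketch of such a script; the WordPress path is the one used elsewhere in this article, and it is assumed to run as the pool's own user (not root):

#!/bin/sh
# Hypothetical updater using wp-cli, to be run from a cron job or
# systemd timer as the user the PHP-FPM pool runs as.
WP="wp --path=/var/www/example.com/wordpress"
$WP core update
$WP plugin update --all
$WP theme update --all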

On my system, I noticed that the wp-supercache module was writing some PHP files. If I understand this correctly, it does this in order to deal with pages which contain dynamic content. So in order not to break this, I need to allow writing PHP files in wp-content/cache/supercache anyway.

You will have to decide for each web application whether you want to allow it to write PHP files. At the very least, run your different PHP applications in different PHP-FPM workers with different AppArmor hats, so that it’s impossible for one PHP application to affect other PHP applications, but also strongly consider not giving an application write access to its own PHP files, because that is a huge win security-wise.

If you decide to give a web application write access to its own PHP files, I recommend all the more setting up ModSecurity with the Core Rule Set, blocking outgoing network connections by default for that user account with your firewall (such as Foomuuri), and installing Snuffleupagus to further limit the things PHP applications can do at the PHP level.

The usr.sbin.php-fpm8.2 profile

Finally I ended up with the following profile.

abi <abi/3.0>,

include <tunables/global>

/usr/sbin/php-fpm8.2 {
  include <abstractions/base>

  capability chown,
  capability dac_override,
  capability kill,
  capability net_admin,
  capability setgid,
  capability setuid,

  signal send set=kill peer=/usr/sbin/php-fpm8.2//*,
  signal send set=quit peer=/usr/sbin/php-fpm8.2//*,
  signal send set=term peer=/usr/sbin/php-fpm8.2//*,

  /run/php/*.sock w,
  /usr/sbin/php-fpm8.2 mr,
  @{PROC}/@{pid}/attr/{apparmor/,}current rw,
  owner /etc/group r,
  owner /etc/host.conf r,
  owner /etc/hosts r,
  owner /etc/nsswitch.conf r,
  owner /etc/passwd r,
  owner /etc/php/8.2/fpm/conf.d/ r,
  owner /etc/php/8.2/fpm/php-fpm.conf r,
  owner /etc/php/8.2/fpm/php.ini r,
  owner /etc/php/8.2/fpm/pool.d/ r,
  owner /etc/php/8.2/fpm/pool.d/*.conf r,
  owner /etc/php/8.2/mods-available/*.ini r,
  owner /etc/resolv.conf r,
  owner /etc/ssl/openssl.cnf r,
  owner /proc/sys/kernel/random/boot_id r,
  owner /run/php/php8.2-fpm.pid w,
  owner /run/systemd/userdb/ r,
  owner /sys/devices/system/node/ r,
  owner /sys/devices/system/node/node0/meminfo r,
  owner /tmp/.ZendSem.* rwk,
  owner /var/log/php8.2-fpm.log w,

  change_profile -> /usr/sbin/php-fpm8.2//*,


  ^myblog {
    include <abstractions/base>
    include <abstractions/ssl_certs>

    network inet dgram,
    network inet stream,
    network inet6 dgram,
    network inet6 stream,
    network netlink raw,
    network unix stream,

    signal receive set=kill peer=/usr/sbin/php-fpm8.2,
    signal receive set=quit peer=/usr/sbin/php-fpm8.2,
    signal receive set=term peer=/usr/sbin/php-fpm8.2,

    /etc/gai.conf r,
    /etc/hosts r,
    /tmp/.ZendSem.* k,
    /usr/bin/dash Px -> /usr/sbin/php-fpm8.2//myblogdash,
    /usr/sbin/postdrop Px,
    /usr/sbin/sendmail Px,
    /var/www/example.com/wordpress/**.php r,
    /var/www/example.com/wordpress/**.{css,js,svg,po} r,
    /var/www/example.com/wordpress/**/ r,
    /var/www/example.com/wordpress/.htaccess r,
    /var/www/example.com/wordpress/wp-content/uploads/wpcf7_uploads/.htaccess r,
    /var/www/example.com/wordpress/wp-includes/*.json r,
    /var/www/example.com/wordpress/wp-includes/certificates/ca-bundle.crt r,
    owner /var/www/example.com/tmp/* rw,
    owner /var/www/example.com/wordpress/.maintenance rw,
    owner /var/www/example.com/wordpress/wp-content/autoptimize_404_handler.php rw,
    owner /var/www/example.com/wordpress/wp-content/cache/**/ rw,
    owner /var/www/example.com/wordpress/wp-content/cache/**/index.html w,
    owner /var/www/example.com/wordpress/wp-content/cache/*.tmp rw,
    owner /var/www/example.com/wordpress/wp-content/cache/autoptimize/.htaccess w,
    owner /var/www/example.com/wordpress/wp-content/cache/autoptimize/css/*.css rw,
    owner /var/www/example.com/wordpress/wp-content/cache/autoptimize/js/*.js rw,
    owner /var/www/example.com/wordpress/wp-content/cache/example.com_wp_cache_gc.txt w,
    owner /var/www/example.com/wordpress/wp-content/cache/lyteCache/*.jpg rw,
    owner /var/www/example.com/wordpress/wp-content/cache/preload_permalink.txt rw,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/**.html w,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/**.html.gz rw,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/**.php rw,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/**.tmp rw,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/**.tmp.gz rw,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/*/feed/*.php rw,
    owner /var/www/example.com/wordpress/wp-content/cache/taxonomy_category.txt rw,
    owner /var/www/example.com/wordpress/wp-content/cache/taxonomy_post_tag.txt rw,
    owner /var/www/example.com/wordpress/wp-content/languages/**.{po,mo} rw,
    owner /var/www/example.com/wordpress/wp-content/languages/plugins/*.json w,
    owner /var/www/example.com/wordpress/wp-content/plugins/**.json r,
    owner /var/www/example.com/wordpress/wp-content/plugins/*/ rw,
    owner /var/www/example.com/wordpress/wp-content/plugins/webp-express/lib/options/options/**.inc r,
    owner /var/www/example.com/wordpress/wp-content/plugins/webp-express/lib/options/options/conversion-options/*.inc r,
    owner /var/www/example.com/wordpress/wp-content/plugins/webp-express/test/*.jpg r,
    owner /var/www/example.com/wordpress/wp-content/plugins/webp-express/test/*.png r,
    owner /var/www/example.com/wordpress/wp-content/themes/.htaccess rwk,
    owner /var/www/example.com/wordpress/wp-content/themes/webp-express-test-images/*.JPEG rw,
    owner /var/www/example.com/wordpress/wp-content/themes/webp-express-test-images/*.PNG rw,
    owner /var/www/example.com/wordpress/wp-content/upgrade-temp-backup/**/ rw,
    owner /var/www/example.com/wordpress/wp-content/upgrade-temp-backup/plugins/** rw,
    owner /var/www/example.com/wordpress/wp-content/upgrade/** rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/**.{jpg,jpeg,png,webp} rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/**/ rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/.htaccess rwk,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp w,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossless.webp w,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossy.webp rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-images/*.JPEG rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-images/*.PNG rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/wp-statistics/GeoLite2-Country.mmdb r,
    owner /var/www/example.com/wordpress/wp-content/webp-express/config/*.json rw,
    owner /var/www/example.com/wordpress/wp-content/webp-express/webp-images/.htaccess rwk,
    owner /var/www/example.com/wordpress/wp-content/webp-express/webp-images/doc-root/**.webp rw,

  }

  profile myblogdash {
    include <abstractions/base>
    include <abstractions/bash>

    deny /var/www/example.com/wordpress/wp-admin/admin-ajax.php r,
    deny /var/www/example.com/wordpress/wp-admin/async-upload.php r,
    deny /var/www/example.com/wordpress/wp-admin/options-general.php r,
    deny /var/www/example.com/wordpress/xmlrpc.php r,

    /usr/bin/cwebp mrix,
    /usr/bin/dash mr,
    /usr/bin/nice mrix,
    /usr/bin/which.debianutils mrix,
    owner /var/www/example.com/wordpress/**.{jpg,jpeg,png} r,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossless.webp w,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossy.webp w,
    owner /var/www/example.com/wordpress/wp-content/webp-express/webp-images/doc-root/**.webp w,

  }
}

Some advice

Creating an AppArmor profile for more complicated applications like this can be a challenge. You will need to retest different things a lot and be prepared to do some hand editing of the profile. Some useful advice that I learnt along the way:

  • Often check /var/log/audit/audit.log yourself to see what is happening. At a certain moment I ended up in a loop where my audit log was constantly being filled because the process tried to switch to a profile which did not exist yet; you want to intervene immediately if this happens, to prevent the logs from filling up your disk.
  • After making modifications to an AppArmor profile, it’s not a bad idea to restart your process. I am not sure, but I think some problems I encountered were fixed by restarting the process.
  • Stop aa-genprof/aa-logprof when you manually edit a profile, and start them again after editing and reloading it with apparmor_parser; otherwise these processes won’t be aware of the modifications you have made to the profile file.
  • When adding hats or child profiles, revise the permissions of your main profile: maybe some things can be moved to the new hat or child profile and can thus be removed completely from the main profile.
  • Keep your profile long enough in complain mode and only switch to enforce mode after enough time without events (I would say: a couple of days). If you encounter problems with your PHP applications later on, don’t forget to check your audit.log.

Protecting systemd services with AppArmor

In a previous blog post I explained how to create an AppArmor profile for a specific binary. However, it is also possible to create a profile which is not linked to a specific executable, and load that profile in any systemd service. This is useful for services which share the same binary and only differ in the arguments given on the command line, for example applications you start with uvicorn, or interpreted languages like Lisp or Rscript.

I assume you already have a systemd service file called myservice.service. In order to protect this service, create /etc/apparmor.d/myservice:

abi <abi/3.0>,
include <tunables/global>

profile myservice flags=(complain) {
  include <abstractions/base>
}

You see that I don’t use the path to a binary, but an arbitrary name instead.

Load the profile:

# apparmor_parser /etc/apparmor.d/myservice

Modify /etc/systemd/system/myservice.service or run systemctl edit myservice and in the [Service] section add:

AppArmorProfile=myservice
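
For example, a drop-in created with systemctl edit myservice ends up in /etc/systemd/system/myservice.service.d/override.conf and could simply contain:

[Service]
AppArmorProfile=myservice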

Reload the service files:

# systemctl daemon-reload

Start your service:

# systemctl start myservice

Exercise your service and process all events with aa-logprof:

# aa-logprof

Don’t forget to put it in enforce mode with aa-enforce if you are sure the profile is complete.

Protecting your Linux server against security exploits with AppArmor

AppArmor is a security feature available in the Linux kernel which adds Mandatory Access Control (MAC) to your system. Mandatory Access Control defines a security policy which the administrator of the system defines and cannot be overridden by the user. You use AppArmor to harden your system against all kinds of attacks, protecting your system against known or unknown zero-day exploits. It does so by restricting the applications on your system. You define an AppArmor profile for security sensitive applications which confines them to access only specific files, network access and other capabilities. This way it will become much harder or even impossible for an attacker to abuse a security hole in a service for which you have defined an AppArmor profile. SuSE sometimes calls this immunization.

AppArmor is similar to SELinux (Security-Enhanced Linux), the other well-known Linux security module which is used by default on Fedora and Red Hat Enterprise Linux. Debian and Ubuntu have opted for AppArmor by default. AppArmor profiles are a bit easier to manage than SELinux.

Setting up AppArmor on Debian

First we install all the AppArmor utilities and all default profiles:

# apt install apparmor apparmor-profiles apparmor-utils apparmor-profiles-extra libpam-apparmor auditd

With the aa-status command you can check the AppArmor status:

# aa-status
43 profiles are loaded.
20 profiles are in enforce mode.
   /usr/bin/freshclam
   /usr/bin/man
   /usr/bin/pidgin
   /usr/bin/pidgin//sanitized_helper
   /usr/bin/totem
   /usr/bin/totem-audio-preview
   /usr/bin/totem-video-thumbnailer
   /usr/bin/totem//sanitized_helper
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/sbin/clamd
   /{,usr/}sbin/dhclient
   apt-cacher-ng
   lsb_release
   man_filter
   man_groff
   nvidia_modprobe
   nvidia_modprobe//kmod
   tcpdump
23 profiles are in complain mode.
   /usr/bin/irssi
   /usr/sbin/sssd
   avahi-daemon
   dnsmasq
   dnsmasq//libvirt_leaseshelper
   identd
   klogd
   mdnsd
   nmbd
   nscd
   php-fpm
   ping
   samba-bgqd
   samba-dcerpcd
   samba-rpcd
   samba-rpcd-classic
   samba-rpcd-spoolss
   smbd
   smbldap-useradd
   smbldap-useradd///etc/init.d/nscd
   syslog-ng
   syslogd
   traceroute
0 profiles are in kill mode.
0 profiles are in unconfined mode.
17 processes have profiles defined.
0 processes are in enforce mode.
14 processes are in complain mode.
   /usr/sbin/php-fpm8.2 (834763) php-fpm
   /usr/sbin/php-fpm8.2 (834764) php-fpm
   /usr/sbin/php-fpm8.2 (834765) php-fpm
   /usr/sbin/php-fpm8.2 (834766) php-fpm
   /usr/sbin/php-fpm8.2 (834767) php-fpm
   /usr/sbin/php-fpm8.2 (834768) php-fpm
   /usr/sbin/php-fpm8.2 (834769) php-fpm
   /usr/sbin/php-fpm8.2 (834770) php-fpm
   /usr/sbin/php-fpm8.2 (834771) php-fpm
   /usr/sbin/php-fpm8.2 (834772) php-fpm
   /usr/sbin/php-fpm8.2 (834773) php-fpm
   /usr/sbin/php-fpm8.2 (834774) php-fpm
   /usr/sbin/php-fpm8.2 (834775) php-fpm
   /usr/sbin/php-fpm8.2 (834826) php-fpm
3 processes are unconfined but have a profile defined.
   /usr/bin/freshclam (797288)
   /usr/sbin/clamd (662585)
   /usr/sbin/sssd (647439)
0 processes are in mixed mode.
0 processes are in kill mode.

We see that AppArmor is enabled and that it has 43 profiles loaded, of which 20 are being enforced and another 23 are in complain mode, which means that a warning will be logged when they violate the policy, but they will not be blocked. Currently 14 running processes are confined in complain mode.

All the profiles are installed in /etc/apparmor.d. They are text files, usually named after the full path of the binary with the slashes replaced by dots.

To completely disable an AppArmor profile you use the aa-disable command. For example:

# aa-disable /usr/sbin/haveged

To enable a profile in complain mode, use the aa-complain command:

# aa-complain /usr/sbin/haveged

Use the aa-enforce command to put an AppArmor profile in enforce mode:

# aa-enforce /usr/sbin/haveged

If you make a modification to one of the profiles in /etc/apparmor.d, you will need to make the kernel reload the profile with the apparmor_parser command:

# apparmor_parser -r /etc/apparmor.d/usr.sbin.haveged

If you want to remove a profile currently loaded in the kernel, you can use this command:

# apparmor_parser -R /etc/apparmor.d/usr.sbin.haveged

When you have done this, you can remove the file if you don’t need it any more.

Creating AppArmor profiles with aa-genprof

I recommend creating an AppArmor profile for every service which is accessible via the network and/or deals with data from the outside world.

You can run the aa-unconfined command to find applications which listen on a TCP or UDP port and do not have an AppArmor profile loaded. These are excellent candidates to create a profile for.
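
Run it as root without any arguments:

# aa-unconfined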

Example 1: Creating an AppArmor profile for Knot Resolver

One of the unconfined processes indicated by aa-unconfined, is /usr/sbin/kresd which is Knot Resolver, the caching DNS resolver. I will generate a profile with aa-genprof. I need two shells for that: one where I run aa-genprof, and another one where I exercise the functionality of Knot Resolver. aa-genprof creates an empty profile and puts it in complain mode, so that all events generated by the process will be logged. aa-genprof will then propose rules for any of them.

So in the first shell I run:

# aa-genprof /usr/sbin/kresd
Updating AppArmor profiles in /etc/apparmor.d.

Before you begin, you may wish to check if a
profile already exists for the application you
wish to confine. See the following wiki page for
more information:
https://gitlab.com/apparmor/apparmor/wikis/Profiles

Profiling: /usr/sbin/kresd

Please start the application to be profiled in
another window and exercise its functionality now.

Once completed, select the "Scan" option below in
order to scan the system logs for AppArmor events.

For each AppArmor event, you will be given the
opportunity to choose whether the access should be
allowed or denied.

[(S)can system log for AppArmor events] / (F)inish

Now in the second shell we need to exercise the functionality of kresd. Let’s start by stopping, starting and restarting kresd:

# systemctl stop system-kresd.slice && systemctl start kresd.target && systemctl restart system-kresd.slice

I also recommend reloading the service, but the kresd service does not support this, so this is of no use here. Now we exercise kresd by doing some DNS lookups, for example with the host and the kdig commands:

$ host example.com
$ kdig -t mx debian.org

Wait for some time, then go back to the shell where aa-genprof is running and press S to scan the audit log for any generated events.

Profile:    /usr/sbin/kresd
Capability: net_bind_service
Severity:   8

 [1 - include <abstractions/nis>]
  2 - capability net_bind_service,
(A)llow / [(D)eny] / (I)gnore / Audi(t) / Abo(r)t / (F)inish

Here kresd requests the net_bind_service capability. This is needed to listen on a privileged port (port number < 1024). aa-genprof proposes 2 possible rules. The first option is to include abstractions/nis, which contains capability net_bind_service. Files in /etc/apparmor.d/abstractions contain common rules which may be included in multiple different profiles. Often these abstractions are too broad, and in case of doubt I recommend choosing the more specific option, which is option 2 here. I press 2 to select this option, then press A to allow this.

Then Knot Resolver checks whether we have transparent hugepages enabled by reading /sys/kernel/mm/transparent_hugepage/enabled. This is because Knot Resolver uses jemalloc, which reads this file.

Profile:  /usr/sbin/kresd
Path:     /sys/kernel/mm/transparent_hugepage/enabled
New Mode: r
Severity: 4

 [1 - /sys/kernel/mm/transparent_hugepage/enabled r,]
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / Abo(r)t / (F)inish

I press A to allow this.

Knot Resolver uses lua for plug-ins, and so wants to read lua code:

Profile:  /usr/sbin/kresd
Path:     /usr/share/lua/5.1/cqueues/socket.lua
New Mode: r
Severity: unknown

 [1 - include <abstractions/totem>]
  2 - /usr/share/lua/5.1/cqueues/socket.lua r,
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / Abo(r)t / (F)inish

Now I want to allow reading of all lua files in /usr/share/lua, independent of the lua version, so that the rule remains valid when Knot Resolver starts using a newer lua version in the future. Press 2 to select that rule and then press E for glob with extension multiple times, until you get option 5 - /usr/share/lua/**.lua r,. This matches all files whose names end with .lua anywhere within /usr/share/lua. Make sure this option is selected and then press A to allow this.

kresd creates a control file:

Profile:  /usr/sbin/kresd
Path:     /run/knot-resolver/control/1
New Mode: owner w
Severity: unknown

 [1 - owner /run/knot-resolver/control/1 w,]
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / (O)wner permissions off / Abo(r)t / (F)inish

The mode owner w means permission to write to the file only if the user the writing process runs as is also the owner of the file.

If you are running multiple kresd processes, each process will create its own control file, so we want to use a wildcard here. Press G for glob so that you get the option 2 – owner /run/knot-resolver/control/* w, and press A to accept this.

kresd tries to take an exclusive lock on /var/cache/knot-resolver/lock.mdb:

Profile:  /usr/sbin/kresd
Path:     /var/cache/knot-resolver/lock.mdb
New Mode: owner rwk
Severity: unknown

 [1 - owner /var/cache/knot-resolver/lock.mdb rwk,]
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / (O)wner permissions off / Abo(r)t / (F)inish

Notice the mode rwk: k is the file locking permission, and in combination with w this allows an exclusive lock. You can find the explanation of all file permissions in the AppArmor Core Policy Reference. Press A to allow this.

I’m using rpz-downloader to download RPZ files containing malicious domains I want to block. Knot Resolver wants to read these RPZ files:

Profile:  /usr/sbin/kresd
Path:     /var/lib/rpz-downloader/urlhaus.abuse.ch.rpz
New Mode: r
Severity: unknown

 [1 - /var/lib/rpz-downloader/urlhaus.abuse.ch.rpz r,]
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / Abo(r)t / (F)inish

I want to allow access to all RPZ files in /var/lib/rpz-downloader, so I press N and I change the path to /var/lib/rpz-downloader/*.rpz and press A to accept this.

Knot Resolver tries to open an IPv4 TCP socket:

Profile:        /usr/sbin/kresd
Network Family: inet
Socket Type:    stream

 [1 - include <abstractions/apache2-common>]
  2 - include <abstractions/nameservice>
  3 - network inet stream,
(A)llow / [(D)eny] / (I)gnore / Audi(t) / Abo(r)t / (F)inish

The proposed abstractions are too broad, so I select 3. You will also need to allow network inet dgram (IPv4 UDP), network inet6 stream (IPv6 TCP) and network inet6 dgram (IPv6 UDP).

When you have processed all events, you will get this:

= Changed Local Profiles =

The following local profiles were changed. Would you like to save them?

 [1 - /usr/sbin/kresd]
(S)ave Changes / Save Selec(t)ed Profile / [(V)iew Changes] / View Changes b/w (C)lean profiles / Abo(r)t

Only one profile is modified and you can view the changes by pressing V. You can save the changes by pressing S. After doing so, aa-genprof will continue running and you can press S to scan the logs again for any newly generated events. This way you can gradually improve your profile. If you are finished, you can quit aa-genprof by pressing F. You can run aa-genprof for your process again at any time, it will then modify the existing profile if it finds any new events.

When you have finished, check with aa-status whether your profile is still in complain mode, and if not, change it back to complain mode with aa-complain.

This is how my final profile /etc/apparmor.d/usr.sbin.kresd looks:

abi <abi/3.0>,

include <tunables/global>

/usr/sbin/kresd flags=(complain) {
  include <abstractions/base>

  capability net_bind_service,

  network inet dgram,
  network inet stream,
  network inet6 dgram,
  network inet6 stream,

  /etc/group r,
  /etc/knot-resolver/kresd.conf r,
  /etc/nsswitch.conf r,
  /etc/passwd r,
  /etc/ssl/certs/ca-certificates.crt r,
  /etc/ssl/openssl.cnf r,
  /sys/kernel/mm/transparent_hugepage/enabled r,
  /usr/sbin/kresd mr,
  /usr/share/dns/root.hints r,
  /usr/share/dns/root.key r,
  /usr/share/lua/**.lua r,
  /var/lib/rpz-downloader/*.rpz r,
  owner /run/knot-resolver/control/ w,
  owner /run/knot-resolver/control/* w,
  owner /var/cache/knot-resolver/data.mdb rw,
  owner /var/cache/knot-resolver/lock.mdb rwk,

}

The flags=(complain) indicates that this profile will be loaded in complain mode. Keep it like that for some time, until you are sure no new events are generated any more. You can check this by running aa-logprof. aa-logprof is similar to aa-genprof, except that it processes past events logged in /var/log/audit/audit.log for all processes for which you have defined an AppArmor profile, instead of for a single one like aa-genprof. If you are sure the profile is complete, enforce it by running aa-enforce /usr/sbin/kresd.

Example 2: creating an AppArmor profile for Postfix

Another process marked by aa-unconfined on my system is /usr/lib/postfix/sbin/master, Postfix’ master process. I create a profile with aa-genprof:

# aa-genprof /usr/lib/postfix/sbin/master

In another shell, I stop, start, restart and reload Postfix:

# systemctl stop postfix@- && systemctl start postfix@- && systemctl restart postfix@- && systemctl reload postfix@-

To further exercise Postfix, send some mails, both outgoing mail to other mail servers and incoming mail originating from an external mail server. Check the mail queue with

# mailq

I’m not going through all events generated by Postfix, but I want to point out one particular type of event which we did not see with Knot Resolver:

Profile:  /usr/lib/postfix/sbin/master
Execute:  /usr/lib/postfix/sbin/showq
Severity: unknown

(I)nherit / (C)hild / (P)rofile / (N)amed / (U)nconfined / (X) ix On / (D)eny / Abo(r)t / (F)inish

The master process tries to start another executable, in this case showq. You have several options:

  • (I)nherit: the new process inherits the profile from the parent process. This means that it will run with the same permissions as the parent process.
  • (C)hild: the new process will use a subprofile defined within this profile. This is useful if you want to run the process with a different profile, depending on how it was started.
  • (P)rofile: the new process will run in its own generic profile, in this case /etc/apparmor.d/usr.lib.postfix.sbin.showq . Use this if you want to run it in the same profile as when you would have started /usr/lib/postfix/sbin/showq directly.
  • (N)amed: the new process will run with the profile with a name of your choice.
  • (U)nconfined: the new process will run unconfined, without any AppArmor restrictions.

Here I choose P because I want showq to always run with the same profile, no matter how it is started. Not only will aa-genprof create a rule which allows master to launch showq, it will also create a profile for showq, and start logging future events.

Then you will get the question whether you want to sanitize the environment:

Should AppArmor sanitise the environment when
switching profiles?

Sanitising environment is more secure,
but some applications depend on the presence
of LD_PRELOAD or LD_LIBRARY_PATH.

[(Y)es] / (N)o

My Postfix installation does not need any environment variables like LD_LIBRARY_PATH to be set, so it can safely sanitize the environment. I press Y.

When you have finished, make sure that all profiles are running in complain mode, otherwise you risk problems with mail delivery:

# cd /etc/apparmor.d
# for i in usr.lib.postfix.sbin.*; do aa-complain $i; done

My /etc/apparmor.d/usr.lib.postfix.sbin.master profile looks like this at the moment after some manual tweaking:

abi <abi/3.0>,

include <tunables/global>

/usr/lib/postfix/sbin/master flags=(complain) {
  include <abstractions/base>
  include <abstractions/postfix-common>

  capability dac_read_search,
  capability kill,
  capability net_bind_service,

  network inet stream,
  network inet6 dgram,
  network inet6 stream,
  network netlink raw,

  signal send peer=/usr/lib/postfix/sbin/*,

  /usr/lib/postfix/sbin/anvil Px,
  /usr/lib/postfix/sbin/cleanup Px,
  /usr/lib/postfix/sbin/lmtp Px,
  /usr/lib/postfix/sbin/local Px,
  /usr/lib/postfix/sbin/master mr,
  /usr/lib/postfix/sbin/pickup Px,
  /usr/lib/postfix/sbin/postscreen Px,
  /usr/lib/postfix/sbin/proxymap Px,
  /usr/lib/postfix/sbin/qmgr Px,
  /usr/lib/postfix/sbin/scache Px,
  /usr/lib/postfix/sbin/showq Px,
  /usr/lib/postfix/sbin/smtp Px,
  /usr/lib/postfix/sbin/smtpd Px,
  /usr/lib/postfix/sbin/tlsmgr Px,
  /usr/lib/postfix/sbin/trivial-rewrite Px,
  owner /etc/gai.conf r,
  owner /etc/group r,
  owner /etc/nsswitch.conf r,
  owner /etc/passwd r,
  owner /var/lib/postfix/master.lock rwk,
  owner /var/spool/postfix/pid/master.pid rwk,
  owner /var/spool/postfix/private/anvil w,
  owner /var/spool/postfix/private/bounce w,
  owner /var/spool/postfix/private/bsmtp w,
  owner /var/spool/postfix/private/defer w,
  owner /var/spool/postfix/private/discard w,
  owner /var/spool/postfix/private/dnsblog w,
  owner /var/spool/postfix/private/error w,
  owner /var/spool/postfix/private/ifmail w,
  owner /var/spool/postfix/private/lmtp w,
  owner /var/spool/postfix/private/local w,
  owner /var/spool/postfix/private/maildrop w,
  owner /var/spool/postfix/private/mailman w,
  owner /var/spool/postfix/private/proxymap w,
  owner /var/spool/postfix/private/proxywrite w,
  owner /var/spool/postfix/private/relay w,
  owner /var/spool/postfix/private/retry w,
  owner /var/spool/postfix/private/rewrite w,
  owner /var/spool/postfix/private/scache w,
  owner /var/spool/postfix/private/scalemail-backend w,
  owner /var/spool/postfix/private/smtp w,
  owner /var/spool/postfix/private/smtp-amavis w,
  owner /var/spool/postfix/private/smtpd w,
  owner /var/spool/postfix/private/tlsmgr w,
  owner /var/spool/postfix/private/tlsproxy w,
  owner /var/spool/postfix/private/trace w,
  owner /var/spool/postfix/private/uucp w,
  owner /var/spool/postfix/private/verify w,
  owner /var/spool/postfix/private/virtual w,
  owner /var/spool/postfix/public/cleanup w,
  owner /var/spool/postfix/public/flush w,
  owner /var/spool/postfix/public/pickup rw,
  owner /var/spool/postfix/public/qmgr rw,
  owner /var/spool/postfix/public/showq w,

}

I’m not going to add all the profiles for the other Postfix processes here, but you get the idea. More rules might still be needed, which is why I keep them running in complain mode and regularly run aa-logprof.

Debugging problems caused by AppArmor

AppArmor logs events to /var/log/audit/audit.log if you have auditd running. You can grep for apparmor to see all events.
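
For example:

# grep apparmor /var/log/audit/audit.log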

To easily process all new events logged in /var/log/audit/audit.log you can use aa-logprof:

# aa-logprof


Wireguard VPN with systemd-networkd and Foomuuri

After my first successful implementation of Foomuuri on a server with an IPv4 connection, I wanted to try Foomuuri in a different environment. This time I chose to implement it on my IPv4/IPv6 dual-stack Wireguard VPN server. I originally set up this system with Shorewall, so let’s see how to configure this with Foomuuri.

While I was at it, I also moved the configuration of Wireguard to systemd-networkd, where the main network interface was already configured. This was also useful because some things which were previously configured in Shorewall, and which Foomuuri does not do by itself, can now be configured in systemd-networkd.

systemd-networkd configuration

I create /etc/systemd/network/wg0.netdev with these contents:

[NetDev]
Name = wg0
Kind = wireguard
Description = wg0 - Wireguard VPN server

[WireGuard]
PrivateKeyFile = /etc/systemd/network/wg0.privkey
ListenPort = 51820

# client 1
[WireGuardPeer]
PublicKey = publickey_of_client
AllowedIPs = 192.168.7.2/32
AllowedIPs = aaaa:bbbb:cccc:dddd:ffff::2/128

I moved the /etc/wireguard/privatekey file to /etc/systemd/network/wg0.privkey, and then gave it appropriate permissions so that the user systemd-network can read it:

# chown root:systemd-network /etc/systemd/network/wg0.privkey
# chmod 640 /etc/systemd/network/wg0.privkey

Then I create /etc/systemd/network/wg0.network:

[Match]
Name = wg0

[Network]
Address = 192.168.7.1/24
Address = fd42:42:42::1/64

[Route]
Destination = aaaa:bbbb:cccc:dddd:ffff::2/128

For IPv4, we set the address to 192.168.7.1/24 and systemd-networkd will automatically take care of adding this subnet to the routing table. As we are using public IPv6 addresses for the VPN clients, I add a [Route] section which takes care of adding these IP addresses to the routing table.

The configuration of the public network interface is stored in /etc/systemd/network/public.network:

[Match]
Name=ens192

[Network]
Address=aaaa:bbbb:cccc:dddd:0000:0000:0000:0001/64
Gateway=fe80::1
DNS=2a0f:fc80::
DNS=2a0f:fc81::
DNS=193.110.81.0
DNS=185.253.5.0
Address=www.xxx.yyy.zzz/24
Gateway=www.xxx.yyy.1
IPForward=yes
IPv6ProxyNDP=1
IPv6ProxyNDPAddress=aaaa:bbbb:cccc:dddd:ffff::2

Important here is that we enable IP forwarding and the IPv6 NDP proxy. Both were things we could configure in Shorewall before, but Foomuuri does not support setting these. This is not a problem, because they can be set up directly in systemd-networkd.

To reload the configuration for all network interfaces, I run:

networkctl reload

To bring up the Wireguard connection:

networkctl up wg0

Because of systemd issue #25547, networkctl reload is not enough if you make changes to the peer configuration in wg0.netdev. You will first have to delete the network device with the command

networkctl delete wg0

after which you can run networkctl reload and bring up the network connection. If in doubt whether all network interfaces are configured correctly, you can also completely restart the systemd-networkd service:

# systemctl restart systemd-networkd

While working on the network configuration, of course make sure you have access to a real console of the system, so that in case your system becomes inaccessible, you can still fix things through the console.

Foomuuri configuration

Now we define the zones in /etc/foomuuri/zones.conf:

zone {
  localhost
  public ens192
  vpn wg0
}

Foomuuri by default does not define a macro for the Wireguard UDP port, so I create one in /etc/foomuuri/services.conf:

macro {
	wireguard udp dport 51820
}

I adjust some logging settings in /etc/foomuuri/log.conf. In case I want to filter outgoing connections from the machine in the future, I want to log the UID of the process, and I also increase the log rate, as I had the impression that I sometimes missed valuable log messages while debugging. Adjust the values if you want to reduce log spam.

foomuuri {
  log_rate "2/second burst 20"
  log_level "level info flags skuid"
}

I set up masquerading (SNAT) in /etc/foomuuri/snat.conf:

snat {
  saddr 192.168.7.0/24 oifname ens192 masquerade
}

Then I set up these rules for traffic going through our firewall:

public-localhost {
  ssh
  wireguard
  icmpv6 1 2 3 4 128
  drop log
}

localhost-public {
  accept
}

vpn-public {
  accept
}

public-vpn {
  icmpv6 1 2 3 4 128
  drop log
}

vpn-localhost {
  accept
}

localhost-vpn {
  icmpv6 1 2 3 4 128
  reject log
}

Notice that I allow the essential ICMPv6 types (1 destination unreachable, 2 packet too big, 3 time exceeded, 4 parameter problem and 128 echo request), which should never be dropped.

As usual, check your configuration before reloading it:

# foomuuri check
# foomuuri reload

Testing and debugging

If things don’t work as expected, enable debugging in the wireguard kernel module and check the kernel logs. I refer to the previous article about this for more details.
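
For reference, a minimal sketch of enabling that debugging, assuming your kernel is built with dynamic debug support (CONFIG_DYNAMIC_DEBUG):

# modprobe wireguard
# echo module wireguard +p > /sys/kernel/debug/dynamic_debug/control
# dmesg -wT | grep -i wireguard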

Conclusion

Setting up Foomuuri was pretty easy again. The most difficult thing was getting the systemd-networkd configuration completely right. Especially with IPv6 it can take quite some time debugging before everything works as expected.

Setting up Foomuuri, an nftables based firewall

Up to now I have always been using the Shorewall firewall on all my Linux systems. I find it very easy to configure, while at the same time it’s very powerful and flexible, so that you can also use it for more complicated set-ups, such as routers with multiple network interfaces, VPNs and bridges. Unfortunately Shorewall is still based on the old xtables (iptables, ip6tables, ebtables, etc.) infrastructure. While it still works, and in reality the iptables commands are now front-ends to the more modern nftables back-end, Shorewall development has stalled and it looks very unlikely it will ever be ported to nftables.

I started using Firewalld, the firewall which is used by default on Red Hat and Fedora based systems. However, I did not like it. Configuration of Firewalld happens on the command line with firewall-cmd, which I find much more complicated than editing a configuration file, which usually contains examples and gives you an easy overview of the configuration. Firewalld saves its configuration in XML files. You could edit these files instead of using firewall-cmd, but that is obviously more complicated than editing configuration files which were designed for human editing. Furthermore, I found Firewalld to be very inflexible: unlike Shorewall, it has no support for filtering traffic on a bridge (layer 2 filtering).

Recently I discovered Foomuuri, an nftables based firewall. It’s still a very young project but it’s actively developed, already has extensive features, is packaged in Debian and is configured through human-readable configuration files. I decided to try it on a server where I wanted to filter incoming and outgoing network traffic.

Installing Foomuuri on Debian

Foomuuri is available in Debian testing and unstable, but it has also been backported to Debian 12 Bookworm. To use that package, you have to enable the bookworm-backports repository first. Then install the foomuuri package:

# apt install foomuuri

If you are using NetworkManager, also install foomuuri-firewalld, because it allows NetworkManager to set the zone a network interface belongs to.

Configuring Foomuuri

Foomuuri can be configured through files in the /etc/foomuuri directory. Foomuuri will read all files whose names end with .conf, so you can split up the configuration in as many files as you want or just put everything in a single file, as you prefer. I like the split configuration files of Shorewall, so I will do something similar here.

Before activating the configuration, always run

# foomuuri check

to validate your configuration. You can start and stop the firewall by starting and stopping the systemd service, and you can reload the configuration by running

 # foomuuri reload

You can find the documentation of Foomuuri on the Foomuuri wiki.

Defining zones

The first thing we have to do is define the zones and set which interfaces belong to which zone. I create /etc/foomuuri/zones.conf:

zone {
  localhost
  public enp1s0
}

I create the zone localhost and the zone public and add the network interface enp1s0 to it. You can add multiple interfaces to a zone by separating them by spaces. If you are using NetworkManager, you don’t have to add the interfaces here and can leave the zone empty. You can configure the firewall zone in NetworkManager and it will set it through foomuuri-firewalld.
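
If you manage the interface with NetworkManager, the zone can be assigned on the connection itself, for example (the connection name here is an assumption):

# nmcli connection modify "Wired connection 1" connection.zone public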

Using macros to alias configuration options

Macros can be used to define certain configuration options you want to use multiple times without having to write them completely every time. In practice a lot of macros are already configured which define the configuration for common services. You can see all defined macros by running

# foomuuri list macro

For example the macro imap defines the configuration tcp 143, so that you can just write imap instead of tcp 143 in the configuration. I added a few which were not defined by default in /etc/foomuuri/services.conf:

macro {
	nrpe	tcp 5666
	nmb	udp 137 138 139; tcp 139
}

Macros can be used to configure common subnets. For example I have a file named /etc/foomuuri/subnets.conf:

macro {
	mysubnet		192.168.0.1/24
	othersubnet		192.168.1.1/24
}

I also use macros to create lists of individual hosts, such as all the NFS clients which need to access this NFS server, in /etc/foomuuri/nfs_clients.conf:

macro {
	nfs_clients   192.168.0.1 # web server
	nfs_clients + 192.168.0.2 # gitlab
	nfs_clients + 192.168.0.3 # nextcloud
}

For readability, I put every host on its own line, and I add a comment for my own reference. The + sign appends the next host to the macro.

Firewall for incoming connections

To configure Foomuuri to filter incoming connections to my servers, I create a section public-localhost which contains the firewall rules for traffic coming from the public zone to localhost. I put this in the file /etc/foomuuri/public-localhost.conf:

public-localhost {
  dhcp-server
  ssh
  ping  saddr mysubnet
  nmb   saddr mysubnet
  smb   saddr mysubnet
  nfs   saddr nfs_clients
  nrpe  saddr 192.168.0.5
  drop log
}

My server is acting as a DHCP server, so I use the dhcp-server macro to allow all this traffic, just as I allow all incoming ssh traffic. I allow ping, nmb and smb traffic from mysubnet; notice that these rules use my custom macros nmb and mysubnet. Then I allow nfs from all addresses listed in my macro nfs_clients, and I allow nrpe from a specific IP address. Finally I end with a rule which drops and logs all traffic that has not matched any of the previous rules.

Firewall for outgoing connections

I think that filtering outgoing connections is a very effective security hardening measure. In case people with bad intentions get access to your server through a non-root user account, this will severely limit their ability to move laterally through your network, attack other systems, run a crypto-miner, or download malware from the Internet.

localhost-public {
  dhcp-client
  nmb uid root
  ntp uid systemd-timesync
  ping uid root daddr mysubnet # dhcpd sometimes pings
  smtp daddr 192.168.0.1 uid postfix
  domain daddr 192.168.0.255 192.168.0.254
  uid root tcp daddr 192.168.0.5 dport 8140 # puppet agent
  uid _apt tcp dport 3142 daddr 192.168.0.6
  uid root ssh daddr 192.168.0.250 # backups
  drop daddr 169.254.169.254 tcp dport 80 # don't fill logs with Puppetlabs facter trying to collect facts from Amazon EC2/Azure
  reject log
}

I allow outgoing connections for different services, and for most services I set the user which can create that connection and the host to which I allow the connection. I explicitly drop, without logging, connections to 169.254.169.254 port 80, because facter tries to connect to this address every time it runs in order to get metadata from your cloud service provider. If your system is running on Amazon or Microsoft Azure cloud services, you will probably want to allow this connection instead: just remove the word drop.
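The resulting rule would then look like this:

  daddr 169.254.169.254 tcp dport 80 # allow facter to query the cloud metadata service

With no explicit action, Foomuuri accepts the traffic, following the same convention as the other accept rules in these sections.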

In order to log the UID of the process which tried to establish a rejected connection, starting from Foomuuri 0.22 you can replace the last rule by

reject log log_level "level warn flags skuid"

In the current version 0.21, you can achieve this by setting it globally for all connections. I created /etc/foomuuri/loglevel.conf:

foomuuri {
  log_level "level info flags skuid"
}

Integrating Fail2ban with Foomuuri

I found inspiration for integrating Fail2ban with Foomuuri in issue 9 on the Foomuuri issue tracker.

Create /etc/fail2ban/action.d/foomuuri with these contents:

[Definition]
actionstart =
actionstop  =
actioncheck =
actionban   = /usr/sbin/foomuuri iplist add fail2ban 999d <ip>
actionunban = /usr/sbin/foomuuri iplist del fail2ban <ip>
actionflush = /usr/sbin/foomuuri iplist flush fail2ban

Then set foomuuri as the default banaction by creating /etc/fail2ban/jail.d/foomuuri.conf:

[DEFAULT]
banaction = foomuuri

Foomuuri itself should create the fail2ban iplist. We can configure it to do so by creating /etc/foomuuri/fail2ban.conf:

iplist {
	@fail2ban
}

Then I add this rule as first rule to the public-localhost section:

  saddr @fail2ban drop log fail2ban

This will drop all connections coming from an address in the fail2ban iplist, and will also log them with the prefix fail2ban. If you don't want this to be logged, just remove log fail2ban.

To ensure that Foomuuri is started before Fail2ban, so that the fail2ban iplist exists before Fail2ban starts using it, create /etc/systemd/system/fail2ban.service.d/override.conf:

[Unit]
After=foomuuri.service

After making these changes, first restart Foomuuri and then Fail2ban.
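For example:

# systemctl daemon-reload
# systemctl restart foomuuri
# systemctl restart fail2ban

You can then test the integration manually with the same commands Fail2ban itself uses; 192.0.2.1 is a documentation address used here only as an example:

# foomuuri iplist add fail2ban 999d 192.0.2.1
# foomuuri iplist del fail2ban 192.0.2.1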

Conclusion

I found Foomuuri easy to use for a system with one network interface. Configuration through the configuration files is easy, also when implementing filtering for outgoing packets. Even though Foomuuri is still a young project, it already has many features and its author is very responsive to discussions and issues on Github. I also found the documentation on the wiki very helpful.

I will try to implement Foomuuri on more complex setups in the future, such as on a host for virtual machines of which the network interface is bridged to the main network interface of the host, VPN servers, routers, etc…

Finally I want to thank the Foomuuri developer Kim B. Heino and the maintainer of the Debian package Romain Francoise for their work and for making this available to the community.

The security risks of Flathub

This week a big refresh of the Flathub website came online and there was quite some buzz around this in the Linux world. However, the same week I noticed something worrying about Flathub: it distributes several applications with known security problems. I am really worried about this, because many people will unknowingly install these flatpaks, thinking that they are safe because they installed them from a reliable source.

The most striking example of this is Adobe Reader. This application was last updated by Adobe in 2013, so it is 10 years old, and Adobe has not supported it since 26 June 2013. While the Github Readme of the project mentions that this application is no longer supported, has known security vulnerabilities and is unstable, none of this is mentioned on the Flathub page itself. This means that many people who stumble upon this page will install this flatpak without being aware of these risks. At the moment of writing, Adobe Reader is listed third on the Flathub homepage, because it's a new package, and after a couple of days it already had 1666 installations. I wonder how many of these people are aware that they are installing an unsupported application with known security bugs.

Unfortunately, Adobe Reader is not the only example. Let's take a look at Visual Studio Code. I see three different variants on Flathub: two open-source builds, Code – OSS and VSCodium, and the proprietary Microsoft build Visual Studio Code. Of these three, only one is up to date at the time of writing: VSCodium. Version 1.77.2 fixed a security problem, but neither the Code – OSS nor the Visual Studio Code flatpak has this version. The latter is the most popular one, with 1.3 million installations.

Fortunately security sensitive flatpaks like Firefox, Chromium, Brave and Thunderbird are up to date, so it looks like this is not a bigger, more general problem. Still I think it’s unacceptable that several packages of vulnerable software are offered in the default Flathub repository.

But flatpak packages run in a sandbox, so the security risk is only theoretical, isn't it? Sorry, that's not a serious way of dealing with security. You just need a security vulnerability in flatpak or in the Linux kernel and your software can escape the sandbox. At least two sandbox escape bugs have been found in flatpak in the past (CVE-2021-21261 and CVE-2019-10063). More of these bugs will surely be discovered in the future, especially if flatpak becomes more popular. Combine this with a vulnerability in the packaged software, such as Adobe Reader or Visual Studio Code, and opening a file downloaded from the Internet can be enough to get your system compromised.

In practice, we see such sandbox escape bugs being exploited in Chromium/Google Chrome: it has a built-in sandbox to protect the system from security vulnerabilities, yet it often has updates for zero-day vulnerabilities. In 2023 alone, two security fixes have already been published for vulnerabilities that were being exploited in the wild, despite the sandbox. Sandbox escape is explicitly mentioned in the security advisory from a few days ago. Not relying on a single layer of defense against security breaches is called defense in depth, and it is simply an essential practice if you care about security.

A PDF viewer is definitely at risk because you often open files downloaded from the Internet with it. And even though a programming editor/lightweight IDE like Visual Studio Code does not appear to be the most security-sensitive application, make no mistake: such tools can also be targeted by people with bad intentions. I'm thinking of the case uncovered two years ago, where security researchers (!) were successfully targeted by North Korean hackers who abused a feature in Microsoft's fully fledged Visual Studio IDE. A security vulnerability in your IDE will only make such abuse easier. Think also of teachers who need to open (untrusted) code from students, and who are at risk when their IDE has known security vulnerabilities.

One of the new features of the new website is that flatpaks packaged by the original developers of the software are now marked as verified. But I don't think that's very useful, because it says nothing about how well the package is maintained and whether it has known security problems. Software which was not packaged by the original author but which is well maintained is far preferable to software which was packaged by its original developer who has since abandoned maintenance. Compare this to Linux distributions: the software is usually not packaged by the original developer, but by the distribution's maintainers. That does not make these packages unreliable.

Windows actually has many more security features enabled by default than Linux: files which originate from the Internet are marked as such (mark-of-the-web) and then undergo extra security protections by the OS and by applications (Protected View in Office, for example); there is an integrated malware scanner (Defender); Windows has a firewall enabled by default; and it does automatic updates. Many of these things are not the case on Linux. Yet we hear of ransomware attacks on Windows users on a daily basis. It should make us realize that Linux will not be immune to these problems. The first thing we should do is at least not run software with known security problems.

One thing that has to be done is adding a warning in bold to the description on Flathub, explaining that the software has known security vulnerabilities and clearly discouraging users from installing it. But I think that is not enough. People will just search for PDF, recognize Adobe, and won't even read the description because they know the Adobe PDF reader. And then they will be surprised to discover during usage that the software is unstable and insecure. The same is true for Visual Studio Code: most people installing the flatpak simply won't be aware that the packaged version has known vulnerabilities.

I think there is only one reasonable solution: these software packages should be moved to a separate repository which is not enabled by default. This repository should be called "unsupported". If people make the effort of enabling this repository, they should clearly get an extra warning that the software can be unstable and insecure and that they cannot expect any support. When searching, people should not see such software at the top between well-maintained software; it should be shown in a separate unsupported category at the bottom. If we don't do these things, I'm afraid security incidents will happen one day, possibly destroying all trust in Flathub and in Linux in general. And that is something we should really avoid.

Increasing PHP security with Snuffleupagus

In a previous article, I discussed how to set up ModSecurity with the Core Rule Set on Debian. This can be considered a first line of defense against malicious HTTP traffic. In a defense in depth strategy, we of course want to add additional layers of protection to our web servers. One such layer is Snuffleupagus, a PHP module which protects your web applications against various attacks. Some of the hardening features it offers are encryption of cookies, disabling XML External Entity (XXE) processing, a white- or blacklist for the functions which can be used in eval(), and the possibility to selectively disable PHP functions with specific arguments (virtual patching).

Installing Snuffleupagus on Debian

Unfortunately there is no package for Snuffleupagus included in Debian, but it is not too difficult to build one yourself:

# apt install php-dev
$ mkdir snuffleupagus
$ cd snuffleupagus
$ git clone https://github.com/jvoisin/snuffleupagus
$ cd snuffleupagus
$ make debian

This will build the latest development code from the master branch. If you want to build the latest stable release, before running make debian, use these commands to view all tags and to check out the latest stable tag, which in this case was v0.8.2:

$ git tag
$ git checkout v0.8.2

If all went well, you should now have a file snuffleupagus_0.8.2_amd64.deb in the parent directory, which you can install:

$ cd ..
# apt install ./snuffleupagus_0.8.2_amd64.deb

Configuring Snuffleupagus

First we take the example configuration file and put it in PHP’s configuration directory. For example for PHP 7.4:

# zcat /usr/share/doc/snuffleupagus/examples/default.rules.gz > /etc/php/7.4/snuffleupagus.rules

Also take a look at the config subdirectory in the source tree for more example rules.

Edit the file /etc/php/7.4/mods-available/snuffleupagus.ini (we will link it into the PHP-FPM configuration later) so that it looks like this:

extension=snuffleupagus.so
sp.configuration_file=/etc/php/7.4/snuffleupagus.rules

Now we will edit the file /etc/php/7.4/snuffleupagus.rules.

We need to set a secret key, which will be used for various cryptographic features:

sp.global.secret_key("YOU _DO_ NEED TO CHANGE THIS WITH SOME RANDOM CHARACTERS.");

You can generate a random key with this shell command:

$ echo $(head -c 512 /dev/urandom | tr -dc 'a-zA-Z0-9')

Simulation mode

Snuffleupagus can run rules in simulation mode. In this mode, the rule will not block further execution of the PHP file, but will just output a warning message in your log. Unfortunately there is no global simulation mode, but it has to be set per rule. You can run a rule in simulation mode by appending .simulation() to it. For example to run INI protection in simulation mode:

sp.ini_protection.simulation();

INI protection

To prevent PHP applications from modifying php.ini settings, you can set this in snuffleupagus.rules:

sp.ini_protection.enable();
sp.ini_protection.policy_readonly();

Cookie protection

The following configuration options set the SameSite attribute to Lax on session cookies, which offers protection against CSRF on this cookie. We enforce setting the secure option on cookies, which instructs the web browser to only send them over an encrypted HTTPS connection, and we also enable encryption of the session content on the server. The encryption key is derived from the value of the global secret key you have set, the client's user agent and the environment variable SSL_SESSION_ID.

sp.cookie.name("PHPSESSID").samesite("lax");

sp.auto_cookie_secure.enable();
sp.global.cookie_env_var("SSL_SESSION_ID");
sp.session.encrypt();

Note that the definition of cookie_env_var needs to happen before sp.session.encrypt(); which enables the encryption.

You have to make sure the variable SSL_SESSION_ID is passed to PHP. In Apache you can do so by having this in your virtualhost:

<FilesMatch "\.(cgi|shtml|phtml|php)$">
    SSLOptions +StdEnvVars
</FilesMatch>

eval white- or blacklist

eval() is used to evaluate PHP content, for example in a variable. This is very dangerous if the evaluated PHP code can contain user-provided data. Therefore it is strongly recommended to create a whitelist of functions which can be called by code evaluated by eval().

Start by putting this in snuffleupagus.rules and restart PHP:

sp.eval_whitelist.list().simulation();

Then test your websites, check which errors you get in the logs, and add the reported functions, separated by commas, to eval_whitelist.list(). After that, remove .simulation() and restart PHP in order to activate this protection. For example:

sp.eval_whitelist.list("array_pop,array_push");

You can also use a blacklist, which only blocks certain functions. For example:

sp.eval_blacklist.list("system,exec,shell_exec,proc_open");

Limit execution to read-only PHP files

The read_only_exec() feature of Snuffleupagus will prevent PHP from executing PHP files on which the PHP process has write permissions. This blocks attacks where an attacker manages to upload a malicious PHP file via a bug in your website and then attempts to execute it.

It is a good practice to let your PHP scripts be owned by a different user than the PHP user, and give PHP only read-only permissions on your PHP files.
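For example, if your PHP code lives in /var/www/example and PHP-FPM runs as www-data (both example names), a sketch of how to set this up:

# chown -R root:www-data /var/www/example
# find /var/www/example -type d -exec chmod 750 {} \;
# find /var/www/example -type f -exec chmod 640 {} \;

PHP can then read the files via the group, but cannot write to them, so readonly_exec will not block their execution.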

To test this feature, add this to snuffleupagus.rules:

sp.readonly_exec.simulation();

If you are sure all goes well, enable it:

sp.readonly_exec.enable();

Virtual patching

One of the main features of Snuffleupagus is virtual patching. This feature will disable functions, depending on the parameters and values they are given. The example rules file contains a good set of generic rules which block all kinds of dangerous behaviour. You might need to fine-tune the rules if your PHP application hits certain rules.

Some examples of virtual-patching rules:

sp.disable_function.function("chmod").param("mode").value("438").drop();
sp.disable_function.function("chmod").param("mode").value("511").drop();

These rules will drop calls to the chmod function with decimal values 438 and 511, which correspond to the dangerous octal permissions 0666 and 0777.

sp.disable_function.function("include_once").value_r(".(inc|phtml|php)$").allow();
sp.disable_function.function("include_once").drop();

These two rules will only allow the include_once function to include files whose file names end in inc, phtml or php. All other include_once calls will be dropped.

Using generate_rules.php to automatically generate site-specific hardening rules

In the scripts subdirectory of the Snuffleupagus source tree, there is a file named generate_rules.php. You can run this script from the command line, giving it a path to a directory with PHP files, and it will automatically generate rules which specifically allow all needed dangerous function calls, and then disable them globally. For example, to generate rules for the /usr/share/tt-rss/www and /var/www directories:

# php generate_rules.php /usr/share/tt-rss/www/ /var/www/

This will generate rules:

sp.disable_function.function("function_exists").filename("/usr/share/tt-rss/www/api/index.php").hash("fa02a93e2724d7e818c5c13f4ba8b110c47bbe7fb65b74c0aad9cff2ed39cf7d").allow();
sp.disable_function.function("function_exists").filename("/usr/share/tt-rss/www/classes/pref/prefs.php").hash("43926a95303bc4e7adefe9d2f290dd8b66c9292be836908081e3f2bd8a198642").allow();
sp.disable_function.function("function_exists").drop();

The first two rules allow these two files to call function_exists, and the last rule drops all calls to function_exists from any other file. Note that the first two rules not only limit the rule to the specified file name, but also pin the SHA256 hash of the file. This way, if the file is changed, the function call will be dropped. This is the safest option, but it can be annoying if the files are updated often or automatically, because every update will break the site. In this case, you can call generate_rules.php with the --without-hash option:

# php generate_rules.php --without-hash /usr/share/tt-rss/www/ /var/www/

After you have generated the rules, you will have to add them to your snuffleupagus.rules file and restart PHP-FPM.
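For example for PHP 7.4 with PHP-FPM:

# systemctl restart php7.4-fpm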

File Upload protection

The default Snuffleupagus rule file contains two rules which will block any attempt at uploading an HTML or PHP file. However, I noticed that they were not working with PHP 7.4, where these rules would cause this error message:

PHP Warning: [snuffleupagus][0.0.0.0][config][log] It seems that you are filtering on a parameter 'destination' of the function 'move_uploaded_file', but the parameter does not exists. in /var/www/html/foobar.php on line 15PHP message: PHP Warning: [snuffleupagus][0.0.0.0][config][log] - 0 parameter's name: 'path' in /var/www/html/foobar.php on line 15PHP message: PHP Warning: [snuffleupagus][0.0.0.0][config][log] - 1 parameter's name: 'new_path' in /var/www/html/foobar.php on line 15'

The default Snuffleupagus rules filter on the parameter destination of move_uploaded_file, while in PHP 7.4 this parameter is actually named new_path. You will have to change the rules like this:

sp.disable_function.function("move_uploaded_file").param("new_path").value_r("\.ph").drop();<br />sp.disable_function.function("move_uploaded_file").param("new_path").value_r("\.ht").drop();

Note that on PHP 8, the parameter is named to instead of new_path.

Enabling Snuffleupagus

To enable Snuffleupagus in PHP 7.4, link the configuration file to /etc/php/7.4/fpm/conf.d:

# cd /etc/php/7.4/fpm/conf.d
# ln -s ../../mods-available/snuffleupagus.ini 20-snuffleupagus.ini
# systemctl restart php7.4-fpm

After restarting PHP-FPM, always check the logs to verify that Snuffleupagus does not emit any warnings or errors, for example because of a syntax error in your configuration:

# journalctl -u php7.4-fpm -n 50

Snuffleupagus logs

By default Snuffleupagus logs via PHP. If you are using Apache with PHP-FPM, you will then find the Snuffleupagus logs, just like any PHP warnings and errors, in the Apache error log, for example /var/log/apache2/error.log. If you encounter any problems with your website, check this log to see what is wrong.

Snuffleupagus can also be configured to log via syslog, and this is actually recommended, because PHP's logging system can be manipulated at runtime by malicious scripts. To log via syslog, add this to snuffleupagus.rules:

sp.log_media("syslog");

I give a few examples of errors you can encounter in the logs and how to fix them:

[snuffleupagus][0.0.0.0][xxe][log] A call to libxml_disable_entity_loader was tried and nopped in /usr/share/tt-rss/www/include/functions.php on line 22

tt-rss calls the function libxml_disable_entity_loader, but this is blocked by the XXE protection. Commenting out this line in snuffleupagus.rules should fix it:

sp.xxe_protection.enable();

Another example:

[snuffleupagus][0.0.0.0][disabled_function][drop] Aborted execution on call of the function 'ini_set', because its argument '$varname' content (display_errors) matched a rule in /usr/share/tt-rss/www/include/functions.php on line 37'

Modifying the PHP INI option display_errors is not allowed by this rule:

sp.disable_function.function("ini_set").param("varname").value_r("display_errors").drop();

You can completely remove (or comment out) this rule in order to disable it. But a better way is to add a rule before it which allows this specifically for that PHP file. So add this rule before it:

sp.disable_function.function("ini_set").filename("/usr/share/tt-rss/www/include/functions.php").param("varname").value_r("display_errors").allow();

If you get something like this:

[snuffleupagus][0.0.0.0][disabled_function][drop] Aborted execution on call of the function 'function_exists', because its argument '$function_name' content (exec) matched a rule in /var/www/wordpress/wp-content/plugins/webp-express/vendor/rosell-dk/exec-with-fallback/src/ExecWithFallback.php on line 35', referer: wp-admin/media-new.php

It’s caused by this rule:

sp.disable_function.function("function_exists").param("function_name").value("exec").drop();

You can add this rule before it to allow this:

sp.disable_function.function("function_exists").filename("/var/www/wordpress/wp-admin/media-new.php").param("function_name").value("exec").allow();

More information

Snuffleupagus documentation

Snuffleupagus on Github

Julien Voisin blog archives

Using the Solo V2 FIDO2 security key

Last year I supported the Solo V2 Kickstarter campaign. Solo is a completely open source FIDO2 security key. You can use it for Two-Factor Authentication (2FA) on web sites, for protecting your private SSH keys and other things. The Solo2 is similar to keys such as the Yubikey from Yubico, the Google Titan Security Key, the Kensington Verimark or Nitrokey. Because all these keys implement the standards of the FIDO2 project, many of the examples here work with these keys too.

The Kickstarter campaign has ended, but you can now buy Solo V2 security keys via their Indiegogo campaign. If you decide to buy a security key, then I strongly recommend buying at least two of them, so that you can use the second key as a back-up in case the first key breaks or gets lost.

It appears that the firmware of the Solo V2 has had some problems, preventing it from working correctly on some sites, and there have been complaints about the lack of progress and communication on this matter. Meanwhile a new firmware version 2:20220822.0 is available which fixes some important problems. Make sure to update the firmware before starting to use the key, because updating to this version will erase all existing credentials, forcing you to re-register your key everywhere.

There is also little documentation, which can make it a bit difficult to get started if you are new to FIDO2 security keys. That’s why I decided to create this guide, to serve as a tutorial explaining how to use the Solo2.

Installing software

For basic usage of the key, you actually do not need to install any software. However, there are some utilities available which allow you to update the firmware, set a PIN, view all credentials stored on the key, etc. First of all, we will install the solo2 CLI. It's not yet packaged in Debian, so we need to download it from Github. I check the sha256sum to ensure I get the right files. This utility, written in Rust, does not support all functionality yet, and for that reason I also install the Python-based Solo1 CLI, which is packaged in Debian. Finally, the fido2-tools package contains some utilities which work on all FIDO2 keys.

# apt install solo-python fido2-tools
$ curl -L -O https://github.com/solokeys/solo2-cli/releases/download/v0.2.0/70-solo2.rules
$ curl -L -O https://github.com/solokeys/solo2-cli/releases/download/v0.2.0/solo2-v0.2.0-x86_64-unknown-linux-gnu
$ curl -L -O https://github.com/solokeys/solo2-cli/releases/download/v0.2.0/solo2.completions.bash
$ sha256sum 70-solo2.rules solo2-v0.2.0-x86_64-unknown-linux-gnu solo2.completions.bash
4133644b12a4e938f04e19e3059f9aec08f1c36b1b33b2f729b5815c88099fe3  70-solo2.rules
d03b20e2ba3be5f9d67f7a7fc1361104960243ebbe44289224f92b513479ed9b  solo2-v0.2.0-x86_64-unknown-linux-gnu
a892afc3c71eb09c1d8e57745dabbbe415f6cfd3f8b49ee6084518a07b73d9a8  solo2.completions.bash
# mv 70-solo2.rules /etc/udev/rules.d/
# mv solo2-v0.2.0-x86_64-unknown-linux-gnu /usr/local/bin/solo2
# chmod 755 /usr/local/bin/solo2
# mv solo2.completions.bash /etc/bash_completion.d/

Touching the Solo2

When authenticating to websites or doing other operations, you will be asked to tap or touch the security key. Unlike the Solo1 or a Yubikey, the Solo2 does not have a physical button which needs to be pressed; instead it has 3 touch areas: the gold coloured areas at both sides and at the back of the key. You do not need to press them, gently touching one of them is enough. In practice, I had most success touching the two touch zones at both sides of the key simultaneously with 2 fingers.

Updating the Solo V2 firmware version

You can check which version of the firmware is currently installed on your key with this command:

$ solo2 app admin version

At the moment of writing, the most recent version is 1:20200101.9 2:20220822.0.

To update the firmware version, run this command:

$ solo2 update

Setting a PIN on the Solo V2 key

I strongly recommend setting up a PIN on your FIDO2 key. It will be required for any administrative task on your key, such as adding or removing credentials like SSH keys.

You cannot set a PIN with the solo2 CLI, but you can simply use the solo1 CLI:

$ solo key set-pin

If a PIN has already been set and you want to modify it, run:

$ solo key change-pin

You can also use any Chromium based browser (such as Google Chrome) and go to the URL chrome://settings/securityKeys. There click on Create a PIN.

Yet another alternative is to use the fido2-token utility, part of fido2-tools. First you need to get the device path of the key:

$ fido2-token -L
/dev/hidraw4: vendor=0x1209, product=0xbeee (SoloKeys Solo 2 Security Key)

So in my case it’s /dev/hidraw4. Then change the PIN like this:

$ fido2-token -C /dev/hidraw4

Do not forget your PIN, otherwise you cannot use your key any more to authenticate to registered sites!

In case you forgot your FIDO2 PIN, you will need to completely reset your key. This will erase all keys and generate new ones, so you will need to have an alternative way to authenticate to websites where you registered this key.

$ solo key reset

FIDO2 Two-Factor Authentication

Usually you go to the security settings on the website, where you can enable 2FA. For some sites, you will be required to set up TOTP first before you can register a security key, so make sure you have a TOTP application such as FreeOTP+ for Android or Raivo OTP on iOS. TOTP then serves as a back-up method for 2FA in case you lose access to your key. If you have multiple FIDO2 keys, don't forget to register them all.

A side note: don’t use SMS as a second factor for authentication. SMS 2FA is insecure because these messages are transferred in clear text and there are various ways they can be intercepted.

2FA with the Solo2 on Android

You can connect the Solo2 to your Android device by USB, or you can use NFC. When a web application tries to authenticate your key, you will get a pop-up message where you can choose whether you want to connect it via USB or use NFC. In the case of USB, connect your key to the USB port and tap it, just like you would do on your PC. If you chose NFC, just bring your Solo2 key to the back of your phone and it should authenticate.

This all works fine in Chromium based browsers; however, I was not able to successfully authenticate with the Solo2 in Firefox. I managed to get it working with Firefox Nightly though: you will need to go to about:config and set security.webauthn.ctap2 and security.webauth.webauthn_enable_usb_token both to true.

Enabling 2FA on well-know websites

Google

Go to https://myaccount.google.com/ and in the left menu click on Security. Under Signing in to Google click on 2-Step Verification. There click on Enable two-factor authentication. In the wizard that appears, you will have to click on Security Key and follow the instructions to add your key.

Github

In the right top corner, click on your avatar and choose Settings. Then in the left menu click on Password & Authentication where you can enable Two-Factor Authentication. You will have to set up TOTP first, and after that, you can register your security key.

Gitlab

In the right top corner, click on your avatar and choose Preferences. Then in the left menu click on Account where you can enable Two-Factor Authentication. You will have to register a Two-Factor Authenticator (TOTP) first, and after that, you can register your WebAuthn devices.

Mastodon

Click on the Preferences icon, then choose Account → Two-Factor Auth. You will need to set up TOTP first, and after that you can add a security key.

Nextcloud

The app Two-Factor WebAuthn needs to be installed on your Nextcloud instance.

Click on your avatar in the top right corner and choose Settings. Then choose Security in the left menu and there you can add WebAuthn devices.

Microsoft personal account

Make sure you are using the newest firmware because this is not working with older firmware versions.

You need to go with a Chromium based browser to https://account.microsoft.com/ (Firefox does not work at the moment). There click on Security → Security Dashboard → Advanced security options → Add a new way to sign in or verify → Use a security key.

Microsoft Azure Active Directory account

If you have a Microsoft Azure Active Directory account, for example if you are using Office 365 in your organization, then it might be possible to use your Solo2; however, this depends on the settings made by your administrator. If Enforce key restrictions is enabled, certain keys can be blocked, or only specific keys are allowed. Also the option Enforce attestation needs to be disabled, because otherwise only keys which have been tested by Microsoft are allowed. Unfortunately Solo keys have not been validated by Microsoft (Google's Titan security keys are in the same situation). Note that this attestation does not include an evaluation of the security of the key.

Currently there is no news about applying this attestation for the Solo keys.

Twitter

On the website in the left menu, click on More → Settings and privacy, and then on Security and account access → Security → Two-factor authentication. There choose Security key.

https://help.twitter.com/en/managing-your-account/two-factor-authentication

Facebook

Really? You should not be using Facebook.

If you really must use Facebook: https://www.facebook.com/help/148233965247823/

LinkedIn

It appears that at the moment of writing LinkedIn does not support 2FA with FIDO2. You can set up TOTP though, which I recommend doing. Click on Me in the top menu and choose Settings & Privacy. Then in the left menu choose Sign in & security and click on Two-Step verification.

WordPress

To enable FIDO2 two-factor authentication in WordPress, install the plugins two-factor and two-factor-provider-webauthn. Enable both plugins and then in the WordPress administration menu go to Settings → TwoFactor WebAuthn. Use the option Disable old U2F provider: the two-factor plugin includes U2F by default, but this is not supported any more by Chromium based browsers, so you want to use the more modern WebAuthn instead. Then you can set up 2FA in the menu Users → Profile: enable WebAuthn. Then under Security Keys (WebAuthn) click on Register New Key, tap your key and give it a unique name. Do this for both your security keys.

If you have set up ModSecurity with the Core Rule Set, you will end up with an HTTP 403 Forbidden error when trying to register your key or authenticate with it. Create /etc/modsecurity/99-wordpress-webauthn.conf with this content:

SecRule REQUEST_FILENAME "@endsWith /wp-admin/profile.php" \
    "id:1100,\
    phase:2,\
    pass,\
    t:none,\
    nolog,\
    chain"
    SecRule ARGS:action "@streq update" \
        "t:none,\
        chain"
        SecRule &ARGS:action "@eq 1" \
            "t:none,\
            ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS:u2f_response,\
            ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS:webauthn_response"

SecRule REQUEST_FILENAME "@endsWith /wp-login.php" \
    "id:1101,\
    phase:2,\
    pass,\
    t:none,\
    nolog,\
    ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS:u2f_response,\
    ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS:webauthn_response"

SecRule REQUEST_FILENAME "@endsWith /wp-admin/admin-ajax.php" \
    "id:1102,\
    phase:2,\
    pass,\
    t:none,\
    nolog,\
    chain"
    SecRule ARGS:action "@streq webauthn_register" \
        "t:none,\
        chain"
        SecRule &ARGS:action "@eq 1" \
            "t:none,\
            ctl:ruleRemoveTargetByTag=OWASP_CRS;ARGS:credential"

and reload your Apache configuration. It should now work.

What if it does not work?

If registering your key or authenticating with your key fails on a website, try a Chromium based browser. Older versions of Firefox do not support CTAP2, which can cause trouble on sites which require verification of a PIN. Recent Firefox versions do have CTAP2 support, but it is disabled by default. Make sure you use the latest version of Firefox (109 at the time of writing) and activate CTAP2 support by going to about:config and setting security.webauthn.ctap2 to true.

OpenSSH

To use your Solo2 key for OpenSSH authentication, you need at least version 8.2p1 on both server and client. OpenSSH 8.3p1 adds support for discoverable credentials or resident keys: with discoverable credentials, the FIDO2 security key itself is enough to do SSH public key authentication. This carries a slight security risk though if people get access to your Solo2 key, because then the only protection left is the PIN you have set on the key. Non-discoverable keys don't have this risk, because you also need the private key stored on your computer to authenticate.

SSHD configuration for FIDO2 keys

As written before, you need at least OpenSSH 8.2p1 (or 8.3p1 for discoverable credentials). The default settings as provided by Debian should be OK, but I strongly recommend adding this option to sshd_config if you only use FIDO2 keys for interactive login:

PubkeyAuthOptions verify-required

I prefer doing this by creating a file /etc/ssh/sshd_config.d/fido2.conf with this line.

This option ensures that only keys which require a PIN can be used, adding at least some protection against theft of a FIDO2 key containing discoverable credentials.

You can also add the option touch-required to PubkeyAuthOptions in order to require touching the key when authenticating. This will make it impossible to authenticate with keys which were created with the no-touch-required option.

Setting up FIDO2 credentials for SSH

To generate credentials for SSH with your FIDO2 key, you basically use this command:

$ ssh-keygen -t ed25519-sk

There are different options available which you can add; a combined example follows the list:

  • -O resident: You want to create discoverable credentials.
  • -O no-touch-required: You want to disable the requirement of touching the key for authenticating.
  • -O verify-required: You require that the PIN is entered when authenticating. I strongly recommend this option.
  • -O application=ssh:SomeUniqueName: In case you want to store different SSH keys on your Solo2, you will have to give each of them a different application name starting with ssh:
  • -f ~/.ssh/id_ed25519_sk_solo2_blue: If you use multiple FIDO2 keys, you may want to store the key in a unique file for every FIDO2 key. Replace the file name of this example by the name of your choice.
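For example, to create a discoverable, PIN-protected credential with its own application name, stored in a dedicated key file:

$ ssh-keygen -t ed25519-sk -O resident -O verify-required -O application=ssh:SomeUniqueName -f ~/.ssh/id_ed25519_sk_solo2_blue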

You can verify that the credentials are correctly stored on your Solo2 using this command:

$ solo key credential ls

In case you would want to remove the credentials stored on your key, you can do so by using this command:

$ solo key credential rm CREDENTIALID

Replace CREDENTIALID by the value you found with the previous command.

After creating the key, you need to copy the public key to the authorized_keys file on your server. You can use ssh-copy-id for that:

$ ssh-copy-id -i ~/.ssh/id_ed25519_sk_solo2_blue.pub username@server.example.org

Of course use the correct file name for the public key.

If you used the option no-touch-required when generating the key, you will have to edit the ~/.ssh/authorized_keys file on your server so that this option precedes the key. For example if authorized_keys contains this:

sk-ssh-ed25519@openssh.com AAAA....= username@host

Change it to this:

no-touch-required sk-ssh-ed25519@openssh.com AAAA....= username@host

Now it should be possible to log in to the server using this command:

$ ssh -i ~/.ssh/id_ed25519_sk_solo2_blue -o IdentitiesOnly=yes username@server.example.org

You will be asked to enter the PIN of your key and to touch it, depending on the options you used when creating the key. I add -o IdentitiesOnly=yes because otherwise ssh will first try to authenticate using the keys loaded in your SSH agent. With this option we force it to use only the private key we specified with the -i parameter.

You can make this the default by editing ~/.ssh/config, so that you don't need to repeat the -i and -o parameters every time you connect:

Host server.example.org
    User myusername
    IdentityFile ~/.ssh/id_ed25519_sk_solo2_blue
    IdentitiesOnly yes

Importing discoverable credentials on another system

When you use discoverable credentials, all information needed for authentication is stored on the key itself, in contrast to non-discoverable credentials, where part of that information is also stored in the private key file on the computer. For this reason, discoverable credentials are easy to import on any other computer:

$ cd ~/.ssh/ 
$ ssh-keygen -K

The public and private key will be written to the .ssh directory, and then you can authenticate using the same ssh -o IdentitiesOnly=yes -i command as on the system where you generated the key.

Troubleshooting FIDO2 SSH authentication

On the server check the sshd logs, which can be found in /var/log/auth.log or in the ssh journal:

# journalctl -u ssh

Successful authentication with your FIDO2 key should be logged like this:

Accepted publickey for username from xxx.xxx.xxx.xxx port zzzzzz ssh2: ED25519-SK SHA256:....

Notice the ED25519-SK part which indicates that the credentials on your FIDO2 key were used.

If you see this:

error: public key ED25519-SK SHA256:... signature for username from xxx.xxx.xxx.xxx port zzzzz rejected: user presence (authenticator touch) requirement not met

This means that you created the key with the no-touch-required option, but the server does not accept this. Try adding no-touch-required to the key's line in authorized_keys on the server, as noted above, at least if your server does not have PubkeyAuthOptions touch-required set.

On the client-side, you can add the -v parameter to debug what happens:

$ ssh -v -o IdentitiesOnly=yes -i ~/.ssh/id_ed25519_sk_solo2_blue username@server.example.org

If you are using GNOME with gnome-keyring as ssh-agent, you will encounter this problem:

debug1: Offering public key: /home/username/.ssh/id_ed25519_sk_solo2_blue ED25519-SK SHA256:... explicit authenticator agent
debug1: Server accepts key: /home/username/.ssh/id_ed25519_sk_solo2_blue ED25519-SK SHA256:... explicit authenticator agent
sign_and_send_pubkey: signing failed for ED25519-SK "/home/username/.ssh/id_ed25519_sk_solo2_blue" from agent: agent refused operation

This is because ssh-agent/gnome-keyring lacks support for verify-required credentials.

A work-around is to rename the public key, so that gnome-keyring will ignore it:

$ mv ~/.ssh/id_ed25519_sk_solo2_blue.pub ~/.ssh/id_ed25519_sk_solo2_blue.public

You will need to log out and log in again after making this change.

Sources and more information

Securing SSH with FIDO2

Solo2 discussions

Setting up Wireguard VPN with IPv6

I wanted to set up Wireguard on a VPS, not only to tunnel IPv4 traffic, but also to tunnel IPv6 traffic. As this is IPv6, I preferred not to use NAT but to assign a public IP address to the client. I read some documentation and blog posts, but I struggled getting it to work. Most tutorials I found on the Internet create a separate IPv6 subnet for the VPN, but I could not get this to work: IPv6 traffic successfully went through the VPN tunnel and exited the VPN gateway, but any response never reached the VPN gateway and hence never reached the client.

I decided to try another way: using an NDP proxy. NDP, the Neighbour Discovery Protocol, is IPv6's equivalent of ARP in IPv4. Using this protocol, network devices can discover where on the network a certain IP address is located. By letting the VPN gateway answer NDP requests on behalf of the VPN client, the upstream router will correctly send all responses to the VPN gateway, which then forwards them to the VPN client.

Configuring the network on the VPN gateway

I use systemd-networkd to set up the network. It's the most modern way of network configuration and works the same on all distributions using systemd, but you can of course make the same settings in /etc/network/interfaces or whatever your distribution uses. When making changes on a remote server, make sure you can access a console without needing a working network connection on the server, in case things go wrong and the network connection breaks.

On my VPN server, the public network interface is named ens192 (use the command $ ip addr to find it on your system). My public IPv4 address is www.xxx.yyy.zzz with subnet 255.255.255.0 and gateway www.xxx.yyy.1. I have the 64-bit IPv6 prefix aaaa:bbbb:cccc:dddd::/64 and the IPv6 gateway is fe80::1.

I set this in /etc/systemd/network/internet.network:

[Match]
Name=ens192

[Network]
Address=aaaa:bbbb:cccc:dddd:0000:0000:0000:0001/64
Gateway=fe80::1
DNS=1.1.1.2
DNS=1.0.0.2
Address=www.xxx.yyy.zzz/24
Gateway=www.xxx.yyy.1
DNS=2606:4700:4700::1112
DNS=2606:4700:4700::1002

In this example I’m using the Cloudflare malware blocking DNS filters, but you can of course just use your ISP’s DNS servers here.

Setting up Wireguard

Run these commands on the Wireguard VPN gateway, and on all clients:

# apt install wireguard-tools
# cd /etc/wireguard
# umask 077
# wg genkey | tee privatekey | wg pubkey > publickey

Then create /etc/wireguard/wg0.conf on the VPN gateway with these contents:

[Interface]
Address = 192.168.7.1,fd42:42:42::1/64
PrivateKey = contents_of_file_privatekey
ListenPort = 51820

#client1
[Peer]
PublicKey = contents_of_publickey_of_client
AllowedIPs = 192.168.7.2/32,aaaa:bbbb:cccc:dddd:ffff::2/128

Add a [Peer] section for every client, and change both the IPv4 and IPv6 addresses in AllowedIPs so that they are unique (replace 2 by 3 and so on).
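For example, a second client would get a section like this:

#client2
[Peer]
PublicKey = contents_of_publickey_of_client2
AllowedIPs = 192.168.7.3/32,aaaa:bbbb:cccc:dddd:ffff::3/128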

On the clients, create /etc/wireguard/wg0.conf with these contents:

[Interface]
Address = 192.168.7.2/32,aaaa:bbbb:cccc:dddd:ffff::2/128
PrivateKey = contents_of_privatekey_of_client
DNS = 2606:4700:4700::1112, 2606:4700:4700::1002, 1.1.1.2, 1.0.0.2

[Peer]
PublicKey = contents_of_publickey_of_vpn_gateway
Endpoint = vpngateway.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0

In the [Interface] section make sure to use the same IP addresses as the ones you have set in the corresponding [Peer] section on the VPN gateway. Set the DNS name (or IP address) of the VPN gateway as Endpoint in the [Peer] section. The hostname’s DNS entry can have both an A and AAAA record. You can replace your DNS servers by your preferred ones. You can also consider running your own DNS server on the VPN gateway.

Make sure that all wg*.conf files on client and server are only readable by root, because they contain private keys.
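For example:

# chmod 600 /etc/wireguard/wg0.conf

The umask 077 set earlier already protects the generated key files; this makes sure the configuration file itself is not world-readable either.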

Configuring the firewall (Shorewall)

I use Shorewall, the Shoreline Firewall.

Make sure you have shorewall and shorewall6 installed:

# apt install shorewall shorewall6

Shorewall6

First we create a separate zone for our VPN in /etc/shorewall6/zones:

fw firewall
net ipv6
vpn ipv6

Then we configure the network interfaces and assign them to the right zone in /etc/shorewall6/interfaces:

net     NET_IF          tcpflags,routeback,proxyndp,physical=ens192
vpn     wg0		tcpflags,routeback,optional

Then we allow connections from the VPN to the firewall and to the Internet in /etc/shorewall6/policy:

$FW	net		ACCEPT
vpn     net		ACCEPT
vpn     $FW		ACCEPT
net	all		DROP		$LOG_LEVEL
# The FOLLOWING POLICY MUST BE LAST
all	all		REJECT		$LOG_LEVEL

Keep in mind that your VPN client will have a public IPv6 address which is accessible from the Internet. The net all DROP policy protects your VPN clients against access from the Internet.

Then we create some rules which allow access to the SSH server and the Wireguard VPN server from the Internet in /etc/shorewall6/rules:

Invalid(DROP)      net    	$FW		tcp
Ping(DROP)	   net		$FW
ACCEPT		   $FW		net		ipv6-icmp
AllowICMPs(ACCEPT) all		all
ACCEPT		   all		all		ipv6-icmp	echo-request


SSH(ACCEPT)	   net		$FW
ACCEPT		   net		$FW		udp	51820 # Wireguard

We allow some required ICMPv6 message types defined in /usr/share/shorewall/action.AllowICMPs, as well as the echo-request type, which should not be dropped on IPv6.

For security reasons you could even choose not to open the SSH port for the net zone; SSH will then only be accessible via the VPN.

Finally we need to enable IP forwarding in /etc/shorewall6/shorewall6.conf:

IP_FORWARDING=Yes

Then we check whether everything compiles fine and enable and start the service:

# shorewall6 compile
# systemctl restart shorewall6
# systemctl enable shorewall6

Shorewall

For IPv4 we configure Shorewall to use NAT to provide Internet access to the VPN clients.

/etc/shorewall/zones:

fw	firewall
net	ipv4
vpn	ipv4

/etc/shorewall/interfaces:

net     NET_IF          dhcp,tcpflags,logmartians,nosmurfs,sourceroute=0,routefilter,routeback,physical=ens192
vpn	wg0		tcpflags,logmartians,nosmurfs,sourceroute=0,optional,routefilter,routeback

/etc/shorewall/policy:

$FW	net	ACCEPT
vpn	net	ACCEPT
vpn	$FW	ACCEPT
net	all	DROP		$LOG_LEVEL
# The FOLLOWING POLICY MUST BE LAST
all	all	REJECT		$LOG_LEVEL

/etc/shorewall/rules:

# Drop packets in the INVALID state

Invalid(DROP)  net    	        $FW		tcp

# Drop Ping from the "bad" net zone.. and prevent your log from being flooded..

Ping(DROP)	net		$FW

SSH(ACCEPT)	net		$FW
ACCEPT		net		$FW		udp	51820

/etc/shorewall/snat:

MASQUERADE	192.168.7.0/24	NET_IF

/etc/shorewall/shorewall.conf:

IP_FORWARDING=Yes

Compile and load the rules and enable Shorewall permanently:

# shorewall compile
# systemctl restart shorewall
# systemctl enable shorewall

Setting up NDP proxying

Then, in order to make sure that the router knows that the VPN client aaaa:bbbb:cccc:dddd:ffff::2 is reachable via the VPN gateway, we need to set up NDP proxying. The Neighbour Discovery Protocol is IPv6's equivalent of ARP.

In a previous version of this guide, I configured NDP proxying in Shorewall6. However, we can set this up directly with systemd-networkd, so it will also work if you don't use Shorewall6 but another firewall like Firewalld. Furthermore, I experienced problems with NDP proxy settings being lost after some time, requiring a restart of Shorewall6 to make the IPv6 connection over Wireguard work again; I hope setting this up in systemd-networkd fixes that.

Edit the file /etc/systemd/network/internet.network again and add this to the [Network] section:

IPv6ProxyNDP=1
IPv6ProxyNDPAddress=aaaa:bbbb:cccc:dddd:ffff::2

If you have more than 1 Wireguard client, you can add multiple IPv6ProxyNDPAddress lines to the file, one for each IPv6 address you want to proxy.
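For example, with two VPN clients the [Network] section would contain:

IPv6ProxyNDP=1
IPv6ProxyNDPAddress=aaaa:bbbb:cccc:dddd:ffff::2
IPv6ProxyNDPAddress=aaaa:bbbb:cccc:dddd:ffff::3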

Then restart the systemd-networkd service:

# systemctl restart systemd-networkd

With this command you can check whether they have been set up correctly:

# ip -6 neighbour show proxy
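For the client configured above, the output should contain a line similar to this (the exact format can differ between iproute2 versions):

aaaa:bbbb:cccc:dddd:ffff::2 dev ens192 proxy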

Enabling and testing the VPN

On the server run this to enable the Wireguard server:

# systemctl enable --now wg-quick@wg0

To connect to the VPN, run this on the client:

# systemctl start wg-quick@wg0

Check if you can browse the world wide web. Use these websites to check your IP address and whether you have a working IPv6 connection:

https://test-ipv6.com/

https://ipv6-test.com/

You can also use traceroute and traceroute6 to test whether traffic is correctly going through the VPN tunnel:

# traceroute www.google.com
# traceroute6 www.google.com

Debugging Wireguard

If things don’t work as expected, you can enable debug logging in the Wireguard module with this command:

# echo module wireguard +p > /sys/kernel/debug/dynamic_debug/control

Replace +p by -p in order to disable debug logging. You can find the logs in your kernel messages, for example by running

# journalctl -f -k

Also firewall log messages will appear here.

You can use tcpdump to check the traffic on the wire (or in the VPN tunnel). For example to see all ipv6 traffic in the tunnel on the gateway:

# tcpdump -nettti wg0 "ip6"

Sources

Setup WireGuard with global IPv6

Setting up WireGuard IPv6

Reddit: Wireguard doesn’t seem to work with IPv6

Wireguard: enable debug logging to fix network issues

Shorewall6: Proxy NDP