Apache optimization and mitigating DoS and DDoS attacks

Denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks are among the most common cyberattacks today. They are fairly easy to execute, and the consequences range from annoying to very problematic, for example when a crucial web service of a company or public service becomes inaccessible. In the current geopolitical situation, DDoS attacks are a very popular method among hacktivists.

In this guide I propose a configuration which can help against some of these attacks. I recommend implementing such measures on all web servers. Even if they might not completely prevent an attack, they at least form a basis which you can easily adapt to future attacks. These measures also help against spammers and scrapers of your website, and they improve the general performance of your server.

Firewall protection

Blocking connections from known bad IP addresses

We don’t want to waste CPU time on packets coming from known bad IP addresses, so we block them as soon as possible. Follow my earlier article to block known bad IP addresses.

Geoblocking connections

If you need a quick way to mitigate an ongoing DoS or DDoS attack, you can consider geoblocking connections in your firewall. For example, you could temporarily allow connections to your web server only from IP addresses of your own country, if that’s where most of your visitors come from. You can download lists of all IPv4 and IPv6 addresses per country. See the previously mentioned article about Foomuuri iplists for how to implement geoblocking in your firewall configuration.

Rate limiting connections

Then we can rate limit new connections per source IP address in our firewall.

In Foomuuri, you can set something like this in the public-localhost section:

http  saddr_rate "3/minute burst 20" saddr_rate_name http_limit
https saddr_rate "3/minute burst 20" saddr_rate_name http_limit
http  drop counter "http_limit" log "http_limit"
https drop counter "http_limit" log "http_limit"

This will allow a remote host to open 20 new connections per source IP to port 80 and port 443 together, after which new connections will be limited to 3 per minute. Further connection attempts will be dropped, counted and logged with the prefix http_limit. You will need to adapt these numbers to what your own server requires. When under attack, I recommend removing the last 2 lines: you don’t want to waste time logging everything. When setting up rules which define limits per IP address, take into account users who share a public IP via NAT, such as on corporate networks.

The above rules still allow one source IP address to have more than 20 simultaneous connections open with your web server, because they only limit the rate at which new connections can be made. If you want to limit the total number of open connections, you can use something like this:

http  saddr_rate "ct count 20" saddr_rate_name http_limit
https saddr_rate "ct count 20" saddr_rate_name http_limit
http  drop counter "http_limit" log "http_limit"
https drop counter "http_limit" log "http_limit"

It is also possible to implement a global connection limit on the http and https ports, irrespective of the source IP address. While this can be very effective against DDoS attacks, you will also block legitimate visitors of your site. If you implement global rate limiting, add rules before it which allow your own IP address, so that you will not be blocked yourself. In case of emergency, and if all else fails, you can combine global rate limiting with geoblocking: first create rules which allow connections from certain countries and regions, and after that place the global rate limit rules, which will then apply to connections from all other countries and regions.

For example:

http  saddr_rate "ct count 20" saddr_rate_name http_limit saddr @belgium4
https saddr_rate "ct count 20" saddr_rate_name http_limit saddr @belgium6
http saddr @belgium4 drop
https saddr @belgium6 drop
http global_rate "10/second burst 20"
https global_rate "10/second burst 20"

If the iplists @belgium4 and @belgium6 contain all Belgian IPv4 and IPv6 addresses, then these rules will allow up to 20 connections per source IP from Belgium. More connections per source IP from Belgium will be dropped. From other parts of the world, we allow a burst of 20 new connections, after which there will be a global limit of 10 new connections per second, irrespective of the source address.

Enable SYN cookies

SYN cookies are a method which helps to mitigate SYN flood attacks, in which a server is flooded with SYN requests. When the backlog queue, where SYN requests are stored, is full, a SYN cookie will be sent in the server’s SYN + ACK response instead of creating an entry in the backlog queue. The SYN cookie contains a cryptographic encoding of a SYN entry in the backlog and can be reconstructed by the server from the client’s ACK response.

SYN cookies are enabled by default in all Debian Linux kernels because they set CONFIG_SYN_COOKIES=y. If you want to check whether SYN cookies are enabled, run

# sysctl net.ipv4.tcp_syncookies

If the value is 0, you can enable them by running

# sysctl -w net.ipv4.tcp_syncookies=1

To make this setting persist after a reboot, create the file /etc/sysctl.d/syncookies.conf with this content:

net.ipv4.tcp_syncookies=1

Configuring Apache to block Denial-of-Service attacks

Tuning Apache and the event MPM

In /etc/apache2/apache2.conf check these settings:

Timeout 60
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5

These are the defaults in recent upstream Apache versions, but they might differ if you are using an older packaged Apache version. Debian also still sets a much higher Timeout value by default. If Timeout and KeepAliveTimeout are set too high, clients can keep a connection open for far too long, filling up the available workers. If you are under attack, you can consider setting the Timeout value even lower and disabling KeepAlive completely. See further on for how to do the latter automatically with mod_qos.
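As an illustration, such an emergency setting could look like this (the exact values depend on your situation):

Timeout 10
KeepAlive Off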

Make sure you are using the Event MPM in Apache, because it’s the most performant. You can check with this command:

# apache2ctl -M | grep mpm
 mpm_event_module (shared)

Then we need to configure the Event MPM in /etc/apache2/mods-enabled/mpm_event.conf:

StartServers             5
ServerLimit              16
MinSpareThreads          100
MaxSpareThreads          400
ThreadLimit              120
ThreadsPerChild          100
MaxRequestWorkers        1000
MaxConnectionsPerChild   50000

We start 5 (StartServers) child server processes, with each child having up to 100 (ThreadsPerChild) threads to deal with connections. We want 100 to 400 (MinSpareThreads/MaxSpareThreads) spare threads available to handle new requests. MaxRequestWorkers sets an upper limit of 1000 requests we can handle simultaneously. ServerLimit defines the maximum number of child server processes. You only need to increase the default value of 16 if MaxRequestWorkers / ThreadsPerChild is higher than 16. ThreadLimit is the maximum value you can set ThreadsPerChild to by just restarting Apache. If you need to increase ThreadsPerChild beyond the current ThreadLimit, you will need to modify ThreadLimit too and stop and start the parent process manually. Don’t set ThreadLimit much higher than ThreadsPerChild, because it increases memory consumption. After 50000 connections (MaxConnectionsPerChild), a child process will exit and a new one will be spawned. This is useful in case of memory leaks.

Enabling compression and client caching of content

Enable the deflate and brotli modules in Apache to serve compressed files to clients which support this:

# a2enmod deflate
# a2enmod brotli

Then in /etc/apache2/mods-enabled/deflate.conf put this:

<IfModule mod_filter.c>
	AddOutputFilterByType DEFLATE text/html text/plain text/xml
	AddOutputFilterByType DEFLATE text/css
	AddOutputFilterByType DEFLATE application/x-javascript application/javascript application/ecmascript
	AddOutputFilterByType DEFLATE application/rss+xml
	AddOutputFilterByType DEFLATE application/xml
	AddOutputFilterByType DEFLATE text/json application/json
	AddOutputFilterByType DEFLATE application/wasm
</IfModule>

and in /etc/apache2/mods-enabled/brotli.conf this:

<IfModule mod_filter.c>
	AddOutputFilterByType BROTLI_COMPRESS text/html text/plain text/xml
	AddOutputFilterByType BROTLI_COMPRESS text/css
	AddOutputFilterByType BROTLI_COMPRESS application/x-javascript application/javascript application/ecmascript
	AddOutputFilterByType BROTLI_COMPRESS application/rss+xml
	AddOutputFilterByType BROTLI_COMPRESS application/xml
	AddOutputFilterByType BROTLI_COMPRESS text/json application/json
	AddOutputFilterByType BROTLI_COMPRESS application/wasm
</IfModule>
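To verify that compression works, you can request a page with an explicit Accept-Encoding header and check the Content-Encoding of the response (example.com is a placeholder for your own site):

$ curl -s -o /dev/null -D - -H "Accept-Encoding: br" https://example.com/ | grep -i content-encoding
$ curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" https://example.com/ | grep -i content-encoding

The first command should report br and the second gzip, assuming the requested content type is in the filter lists above.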

In order to let browsers cache static files, so that they don’t have to redownload all images when a user comes back, we set some Expires headers. First make sure the expires module is enabled:

# a2enmod expires

Then in /etc/apache2/mods-enabled/expires.conf set this:

<IfModule mod_expires.c>
        ExpiresActive On
        ExpiresDefault A120
        ExpiresByType image/x-icon A604800
        ExpiresByType application/x-javascript A604800
        ExpiresByType application/javascript A604800
        ExpiresByType text/css A604800
        ExpiresByType image/gif A604800
        ExpiresByType image/png A604800
        ExpiresByType image/jpeg A604800
        ExpiresByType image/webp A604800
        ExpiresByType image/avif A604800
        ExpiresByType image/x-ms-bmp A604800
        ExpiresByType image/svg+xml A604800
        ExpiresByType image/vnd.microsoft.icon A604800
        ExpiresByType text/plain A604800
        ExpiresByType application/x-shockwave-flash A604800
        ExpiresByType video/x-flv A604800
        ExpiresByType video/mp4 A604800
        ExpiresByType application/pdf A604800
        ExpiresByType application/font-woff A604800
        ExpiresByType font/woff A604800
        ExpiresByType font/woff2 A604800
        ExpiresByType application/vnd.ms-fontobject A604800
        ExpiresByType text/html A120
</IfModule>

This will allow the mentioned MIME types to be cached for a week, while HTML and all other files will be cached for up to 2 minutes.
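You can check the resulting headers in the same way as before; for one of the listed MIME types you should see an Expires date about one week ahead and a matching Cache-Control max-age (the URL is just a placeholder):

$ curl -s -o /dev/null -D - https://example.com/style.css | grep -iE "expires|cache-control"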

Modsecurity to block web attacks

Configure Modsecurity with the OWASP CoreRuleSet in order to avoid wasting resources on web attacks. Also block old HTTP versions: this already gets rid of some unsophisticated security scanners and bots.
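As a sketch of the latter, a Modsecurity rule blocking requests that don’t use HTTP/1.1 or newer could look like this (the rule id 10001 is arbitrary, so check that it is free in your setup; the OWASP CoreRuleSet also ships a similar protocol check):

SecRule REQUEST_PROTOCOL "!@rx ^HTTP/(1\.1|2(\.0)?|3(\.0)?)$" \
    "id:10001,phase:1,deny,status:403,log,msg:'Old HTTP protocol version'"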

mod_reqtimeout to mitigate Slowloris attacks

The Slowloris attack is an easy attack in which the attacker opens many connections to your web server, but intentionally slows down completing its requests. Your server’s worker threads get busy waiting for the completion of the requests, which never happens. This way, one client can occupy all worker threads of the web server without using a lot of bandwidth.

Debian has enabled protection in Apache by default by means of Apache mod_reqtimeout. The configuration can be found in /etc/apache2/mods-enabled/reqtimeout.conf:

RequestReadTimeout header=20-40,minrate=500
RequestReadTimeout body=10,minrate=500

The first line will limit the wait time until the first byte of the request line is received to 20 seconds, after that it will require at least 500 bytes per second, with a total limit of 40 seconds until all request headers are received.

The second line limits the wait time until the first byte of the request body is received to 10 seconds, after which 500 bytes per second are required.

If a connection is slower than the above requirements, it will be closed by the server.

mod_qos to limit requests on your server

First an important caveat: the libapache2-mod-qos package included in Debian 12 Bookworm is broken. If you install it, your Apache web server will crash at startup. You can find a fixed libapache2-mod-qos package for Debian 12 in my bookworm-frehi repository.

Enable the module with

# a2enmod qos

Then we set some configuration options in /etc/apache2/mods-enabled/qos.conf:

# allows keep-alive support till the server reaches 80% of all MaxRequestWorkers
QS_SrvMaxConnClose 80%

# don't allow a single client to open more than 10 TCP connections if
# the server has more than 600 connections:
QS_SrvMaxConnPerIP 10 600

# minimum data rate (bytes/sec) when the server
# has 100 or more open TCP connections:
QS_SrvMinDataRate 1000 32000 100

# disables connection restrictions for certain clients:
QS_SrvMaxConnExcludeIP 192.0.2.
QS_SrvMaxConnExcludeIP 2001:db8::2

QS_ClientEntries 50000

We disable keepalive as soon as we reach 80% of MaxRequestWorkers, so that threads are no longer occupied by keepalive connections. You can also use an absolute number instead of a percentage if you prefer. As soon as we reach 600 simultaneous connections, we only allow 10 connections per IP address. As soon as there are more than 100 connections, we require a minimum data rate of 1000 bytes per second. The required transfer rate increases linearly up to 32000 bytes per second as the number of connections approaches MaxRequestWorkers. The QS_ClientEntries setting defines how many different IP addresses mod_qos keeps track of. By default it’s 50000. On very busy servers you will need to increase this, but keep in mind that this increases memory usage. Use QS_SrvMaxConnExcludeIP to exclude certain IP addresses from these limitations.

Then we want to limit the number of requests a client can make. We focus on the resource-intensive requests. Very often attackers will abuse the search form, because this not only stresses your web server, but also your database and your web application itself. If one of them, or the combination of the three, can’t cope with the requests, your application will be knocked offline. Other forms, such as contact forms or authentication forms, are also often targeted.

First I add this to the VirtualHost I want to protect:

RewriteEngine on
RewriteCond "%{REQUEST_URI}" "^/$"
RewriteCond "%{QUERY_STRING}" "s="
RewriteRule ^ - [E=LimitWPSearch] 
                
RewriteCond "%{REQUEST_URI}" "^/wp-login.php$"
RewriteCond "%{REQUEST_METHOD}" "POST"
RewriteRule ^ - [E=LimitWPLogin]

RewriteCond "%{REQUEST_URI}" "^/wp-json/"
RewriteRule ^ - [E=LimitWPJson]

RewriteCond "%{REQUEST_URI}" "^/wp-comments-post.php"
RewriteRule ^ - [E=LimitWPComment]

RewriteCond "%{REQUEST_URI}" "^/wp-json/contact-form-7/"
RewriteCond "%{REQUEST_METHOD}" "POST"
RewriteRule ^ - [E=LimitWPFormPost]

RewriteCond "%{REQUEST_URI}" "^/xmlrpc.php"
RewriteRule ^ - [E=LimitWPXmlrpc]

RewriteCond "%{REQUEST_METHOD}" "POST"
RewriteRule ^ - [E=LimitWPPost]

RewriteCond "%{REQUEST_URI}" ".*"
RewriteRule ^ - [E=LimitWPAll]

The website I want to protect runs on WordPress. I use mod_rewrite to set an environment variable when certain requests are made, for example LimitWPSearch when a search is done, LimitWPLogin when data is submitted to the login form, LimitWPJson when something in /wp-json/ is accessed, LimitWPComment when a comment is submitted, LimitWPXmlrpc when /xmlrpc.php is accessed, LimitWPFormPost when a form created with the Contact Form 7 plugin is posted, LimitWPPost when anything is sent via POST, and LimitWPAll on any request. A request can trigger multiple rules.

Then we go back to our global server configuration to define the exact values for these limits. You can again do this in /etc/apache2/mods-enabled/qos.conf or /etc/apache2/conf-enabled/qos.conf or something like that:

QS_ClientEventLimitCount 20 40 LimitWPPost
QS_ClientEventLimitCount 400 120 LimitWPAll
QS_ClientEventLimitCount 5 60 LimitWPSearch
QS_ClientEventLimitCount 50 600 LimitWPJson
QS_ClientEventLimitCount 10 3600 LimitWPLogin
QS_ClientEventLimitCount 3 60 LimitWPComment
QS_ClientEventLimitCount 6 60 LimitWPXmlrpc
QS_ClientEventLimitCount 3 60 LimitWPFormPost

For example, when a client hits the LimitWPSearch rule 5 times in 60 seconds, the server will return an HTTP error code. In practice, this means that a client can do up to 4 successful searches per minute. You will have to adapt these settings to your own web applications. In case you get hit by an attack, you can easily adapt the values as necessary or add new limits.

Using Fail2ban to block attackers

Now I want to go further and block offenders of my QOS rules for a longer time in my firewall with Fail2ban.

Create /etc/fail2ban/filter.d/apache-mod_qos.conf:

[INCLUDES]

# Read common prefixes. If any customizations available -- read them from
# apache-common.local
before = apache-common.conf


[Definition]


failregex = mod_qos\(\d+\): access denied,.+\sc=<HOST>

ignoreregex = 

Then create /etc/fail2ban/jail.d/apache-mod_qos.conf:

[apache-mod_qos]

port         = http,https
backend      = pyinotify
journalmatch = 
logpath      = /var/log/apache2/*error.log
               /var/log/apache2/*/error.log
maxretry     = 3
enabled      = true

Make sure you have the package python3-pyinotify installed.
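Once the jail is active, you can check its status and list the currently banned IP addresses with fail2ban-client:

# fail2ban-client status apache-mod_qos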

Note that a stateful firewall will by default only block new connections, so you might still see some violations over existing connections even after an IP has been banned by Fail2ban.

Optimizing your web applications

Databases (MariaDB, MySQL, PostgreSQL)

Ideally you should run your DBMS on a separate server. If you run MariaDB (or MySQL), you can use mysqltuner to get automatic performance tuning recommendations.
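mysqltuner is packaged in Debian; install and run it on the database host and it will print recommendations based on the running instance:

# apt install mysqltuner
# mysqltuner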

Generic configuration recommendations for PostgreSQL can be created on PgTune.

WordPress

If you are using WordPress, I recommend setting up the Autoptimize, WebP Express, Avif Express and WP Super Cache plugins. Autoptimize can aggregate and minify your HTML, CSS and JavaScript files, reducing bandwidth usage and the number of requests needed to load your site. WebP Express and Avif Express will automatically convert your JPEG, GIF and PNG images to the more efficient WebP and AVIF formats, which again reduces bandwidth.

WP Super Cache can cache your pages, so that they don’t have to be dynamically generated for every request. I strongly recommend that in the settings of WP Super Cache, on the Advanced page, you choose the Expert cache delivery method. You will need to set up some rewrite rules in your .htaccess file. In this mode, Apache will directly serve the cached pages without any involvement of PHP. You can easily check this by stopping the php8.2-fpm service and visiting your website: the cached pages will still load.
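For example, stop the service, browse a few pages (they should still be served), then start it again:

# systemctl stop php8.2-fpm
# systemctl start php8.2-fpm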

Drupal

I’m not really a Drupal specialist, but at least make sure that in Administration – Configuration – Performance, caching and the bandwidth optimization options are enabled.

MediaWiki

There are some performance tuning tips for MediaWiki in the documentation.

Testing

I like to use h2load to do benchmarking of web servers. It’s part of the nghttp2-client package in Debian:

# apt install nghttp2-client

Ideally, you run benchmarks before making any of the above changes, and you retest after each modification to see its effect. When testing, run the benchmarks from a host where it does not matter if it gets locked out. It can be a good idea to stop fail2ban, because it can get annoying if you are blocked by the firewall. You can also run these benchmarks on the host itself, bypassing the firewall rules.

First we test our mod_qos rule which limits the amount of searches we can do:

# h2load -n8 -c1 https://example.com/?s=test
starting benchmark...
spawning thread #0: 1 total client(s). 8 total requests
TLS Protocol: TLSv1.3
Cipher: TLS_AES_128_GCM_SHA256
Server Temp Key: X25519 253 bits
Application protocol: h2
progress: 12% done
progress: 25% done
progress: 37% done
progress: 50% done
progress: 62% done
progress: 75% done
progress: 87% done
progress: 100% done

finished in 277.41ms, 28.84 req/s, 544.07KB/s
requests: 8 total, 8 started, 8 done, 4 succeeded, 4 failed, 0 errored, 0 timeout
status codes: 4 2xx, 0 3xx, 4 4xx, 0 5xx
traffic: 150.93KB (154553) total, 636B (636) headers (space savings 83.13%), 150.03KB (153628) data
                     min         max         mean         sd        +/- sd
time for request:     2.97ms    210.18ms     32.38ms     71.98ms    87.50%
time for connect:    16.45ms     16.45ms     16.45ms         0us   100.00%
time to 1st byte:   226.50ms    226.50ms    226.50ms         0us   100.00%
req/s           :      28.98       28.98       28.98        0.00   100.00%

You can see that 4 of these requests succeeded (status code 2xx) and the other 4 failed (status code 4xx). This is exactly what we configured: starting from the 5th request in a period of 60 seconds, searches are no longer allowed. In Apache’s error log we see:

[qos:error] [pid 1473291:tid 1473481] [remote 2001:db8::2] mod_qos(067): access denied, QS_ClientEventLimitCount rule: event=LimitWPSearch, max=5, current=5, age=0, c=2001:db8::2, id=Z4PiOQcCdoj4ljQCvAkxPwAAsD8

We can check the connection rate limit we have configured in our firewall, by benchmarking the creation of more connections than we allowed in the burst value:

# h2load -n10000 -c50 https://example.com/
starting benchmark...
spawning thread #0: 50 total client(s). 10000 total requests
TLS Protocol: TLSv1.3
Cipher: TLS_AES_128_GCM_SHA256
Server Temp Key: X25519 253 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 148.78s, 67.22 req/s, 695.57KB/s
requests: 10000 total, 10000 started, 10000 done, 798 succeeded, 9202 failed, 0 errored, 0 timeout
status codes: 798 2xx, 0 3xx, 9202 4xx, 0 5xx
traffic: 101.06MB (105968397) total, 181.31KB (185665) headers (space savings 95.80%), 100.66MB (105550608) data
                     min         max         mean         sd        +/- sd
time for request:    19.47ms    649.75ms     50.94ms     23.46ms    86.23%
time for connect:   127.26ms      69.74s      25.28s      26.08s    80.00%
time to 1st byte:   211.04ms      69.79s      25.34s      26.06s    80.00%
req/s           :       1.34       17.91        6.75        5.79    80.00%

As you can see, the “time for connect” went up to 69.74 seconds, because connections were dropped due to the limit set in Foomuuri.

If logging of these connections is enabled in the firewall, we can see this:

http_limit IN=ens3 OUT= MAC=aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa SRC=2001:db8::2 DST=2001:db8::3 LEN=80 TC=0 HOPLIMIT=45 FLOWLBL=327007 PROTO=TCP SPT=32850 DPT=443 WINDOW=64800 RES=0x00 SYN URGP=0

Also mod_qos kicks in, and this results in only 798 succeeded requests, while the others were blocked.

Conclusion

DoS and DDoS attacks by hacktivist groups are very common and happen on a daily basis, often kicking websites of companies and public services offline. Unfortunately, there is not a lot of practical advice on the World Wide Web on how to mitigate these attacks. I hope this article provides some basic insight.

The proposed measures should protect well against a simple DoS attack. It is more difficult to judge how well they protect against a DDoS attack. They will certainly make such attacks harder, but the larger the botnet executing the attack, the more difficult it becomes to mitigate it at the server level. If the attackers manage to saturate your bandwidth, there is nothing you can do on the server itself and you will need measures at the network level to prevent the traffic from hitting your server at all.

In any case, the proposed configuration should provide a basis for mitigating some attacks. You can easily adapt the rules to the attacks you are experiencing. I recommend implementing and testing this configuration before you experience a real attack.

Sources and more information

Protecting your server from known bad IPs with Foomuuri iplists

On the Internet we can find (usually crowdsourced) lists of malicious IP addresses responsible for attacks. We can easily integrate them in Foomuuri in order to block connections from these bad hosts. Not only does this improve security, it is also a performance win, because our daemons don’t have to waste any more time dealing with these malicious connections.

The blocklists

  • blocklist.de: a crowdsourced list of IP addresses involved in all kinds of attacks
  • techmdw blacklist: this list is compiled by the Swedish company TechMDW AB and is based on Crowdsec with their own additions
  • GreenSnow: a crowdsourced list of IP addresses that have been detected attacking servers
  • Spamhaus DROP: The Don’t Route or Peer List by Spamhaus contains netblocks which you should never interact with because they are leased or stolen by criminal organisations
  • Emerging Threats: compilation of Spamhaus DROP list, the top attackers list by DShield and lists by abuse.ch
  • Interserver: list consisting of IP addresses attacking servers of the web hosting company Interserver
  • Stopforumspam: list of toxic IP addresses which are believed to be used only for spamming websites
  • Zonefiles Compromised: a list of suspicious, malware, phishing and ransom IPs

Blocking incoming connections from malicious IPs

Create the file /etc/foomuuri/iplist.conf with these contents:

iplist {
	@blocklist_de https://lists.blocklist.de/lists/all.txt refresh=15m
	@techmdw https://blacklist.techmdw.com/ refresh=30m
	@greensnow https://blocklist.greensnow.co/greensnow.txt refresh=30m
	@et https://rules.emergingthreats.net/fwrules/emerging-Block-IPs.txt refresh=24h
	@interserver https://rbldata.interserver.net/ip.txt refresh=15m
	@stopforumspam https://www.stopforumspam.com/downloads/toxic_ip_cidr.txt refresh=2h
	@spamhausdrop https://www.spamhaus.org/drop/drop.txt refresh=24h
	@spamhausdropv6 https://www.spamhaus.org/drop/dropv6.txt refresh=24h
	@compromised https://zonefiles.io/f/compromised/ip/live/ refresh=12h
}

Then in the public-localhost section (which I usually put in /etc/foomuuri/public-localhost.conf), add this at the top:

        saddr @blocklist_de drop counter "blocklist.de"
        saddr @techmdw drop counter "techmdw" 
        saddr @greensnow drop counter "greensnow"
        saddr @spamhausdrop drop counter "spamhausdrop"
        saddr @spamhausdropv6 drop counter "spamhausdropv6"
        saddr @et drop counter "et"
        saddr @interserver drop counter "interserver"
        saddr @stopforumspam drop counter "stopforumspam"
        saddr @compromised drop counter "compromised"

In order to prevent getting locked out of your system yourself because of a false positive or an error in one of the lists, I recommend adding a rule which always allows you to access your system before these rules. For example, to make sure you can always connect via SSH from IP xxx.xxx.xxx.xxx, put this as the first rule in the public-localhost section:

        ssh saddr xxx.xxx.xxx.xxx accept

In the rules above I’m not logging all the details of the dropped connections, but I’m keeping a counter so that I can see how many times such a rule has been hit. You can use the command

# foomuuri list counter

to see how many packets and bytes have been dropped by these rules.

If you want to log all individual dropped connections, you can add this at the end of every line:

log "name_of_the_blocklist"

They will then be logged with the prefix name_of_the_blocklist.

Rejecting outgoing connections to malicious IPs

We can also block outgoing connections. You should especially do this for the Spamhaus DROP lists. Add this to the localhost-public section:

        daddr @blocklist_de reject counter "out_blocklist.de" log "out_blocklist.de"
        daddr @techmdw reject counter "out_techmdw" log "out_techmdw"
        daddr @greensnow reject counter "out_greensnow" log "out_greensnow"
        daddr @spamhausdrop reject counter "out_spamhausdrop" log "out_spamhausdrop"
        daddr @spamhausdropv6 reject counter "out_spamhausdropv6" log "out_spamhausdropv6"
        daddr @et reject counter "out_et" log "out_et"
        daddr @interserver reject counter "out_interserver" log "out_interserver"
        daddr @stopforumspam reject counter "out_stopforumspam" log "out_stopforumspam"
        daddr @compromised reject counter "out_compromised" log "out_compromised"

Notice that I’m rejecting instead of dropping these connections, so that applications don’t keep waiting until the connection attempt times out, and that I’m logging them. Normally these rules should only rarely be triggered, but if they are, you want detailed logs so you can easily investigate what’s going on.

Dropping or allowing incoming connections by country of origin

Another very effective method to prevent abuse is to limit connections to services like SSH and your mail server to certain countries of origin. You can find lists of IP addresses (both IPv4 and IPv6) per country on https://github.com/ipverse/rir-ip/tree/master/country. You can add them to an iplist in Foomuuri and then use these in the public-localhost section. Note that these lists are not perfect, and connections can sometimes come from a country other than the one the IP address is registered to in this database. Public VPN services especially suffer from this problem, so be careful if you are using those.

You can also use this aggregated list of all European IP addresses, but unfortunately that list only exists for IPv4 addresses.

To use this aggregated list, add this to the iplist section:

        @europe https://ipv4.fetus.jp/krfilter.4.txt refresh=24h 

Then, with these rules in the public-localhost section, I only allow IPv4 connections to SSH, port 143 (IMAP), port 993 (IMAPS), port 587 (Submission) and port 465 (Submissions) from European IPs. Note that I allow IPv6 from the whole world, because this aggregated list only contains IPv4 addresses.

        ssh ipv4 saddr @europe
        ssh ipv6
        ssh drop counter "non-europe" log "non-europe"

        imap ipv4 saddr @europe
        imap ipv6
        imap drop counter "non-europe" log "non-europe"
        imaps ipv4 saddr @europe
        imaps ipv6
        imaps drop counter "non-europe" log "non-europe"

        submission ipv4 saddr @europe
        submission ipv6
        submission drop counter "non-europe" log "non-europe"
        submissions ipv4 saddr @europe
        submissions ipv6
        submissions drop counter "non-europe" log "non-europe"

More information

https://github.com/FoobarOy/foomuuri/wiki/Configuration#iplist

Some various performance improvements for Debian 12 Bookworm

Here are various improvements I implemented on some of my Debian 12 Bookworm servers in order to improve performance.

zswap: use zsmalloc allocator with newer kernel

If your system has little memory, you might be using zswap already. When memory is getting full, the system will try to swap out less used data from memory to a compressed swap pool in memory, instead of writing it immediately to a swap partition or swap file on slower storage. In Linux kernel version 6.7, the zsmalloc allocator, which is superior to the other allocators (zbud and z3fold), became the default.

So first upgrade to a more recent kernel. You can get a recent kernel from bookworm-backports or Debian testing or unstable.

To enable zswap at boot you can create a file named /etc/default/grub.d/zswap.cfg which contains:

GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=30"

If you want higher compression at the cost of more CPU time, you can replace lz4 with zstd.

You will need to add the compression module of your preference to your initramfs. So in the case of lz4, just add

lz4

to /etc/initramfs-tools/modules and then run

# update-initramfs -u

Finally execute

# update-grub

to update your Grub configuration so that these settings will become automatically active at the next boot.

I upgraded to Linux 6.11 on a VPS with 2GB of RAM with these settings, and the system feels much snappier now.

To check effectiveness of zswap, you can use the zswap-stats script.
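If you just want a quick look without installing a script, the active zswap parameters can also be read directly from sysfs:

# grep -r . /sys/module/zswap/parameters/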

Update to systemd 254 or higher to improve behaviour under memory pressure

systemd 254 includes a change which makes journald and resolved flush their caches when the system is under memory pressure. This frees memory and reduces swapping. When this happens, you will see it in the logs:

systemd-journald[587721]: Under memory pressure, flushing caches.

You can find systemd 254 for Debian 12 in bookworm-backports.

Exclude cron from audit logging

On one of my systems, my audit logs were filling up rapidly. You can check whether this is happening for you by looking at the dates of the files in /var/log/audit/:

# ls -l /var/log/audit/

By default auditd will write files up to 8 MB, after which it rotates the file. If these files have modification dates very close to each other, you might consider reducing the logging.

A possible cause for audit logs filling up is AppArmor logging: improve your AppArmor profiles to reduce warnings and errors. On one of my systems, the cause was the logging caused by the execution of cron jobs, especially because mailman3-web contains a cron job which is executed every single minute.

To prevent logging everything related to cron, create a file /etc/audit/rules.d/excludecron.rules:

-a exclude,always -F exe=/usr/sbin/cron

Then run

# augenrules --load

to load the new rules.
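You can verify that the exclusion is active by listing the loaded rules; the exclude rule for cron should show up in the output:

# auditctl -l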

Securing PHP-FPM with AppArmor

PHP-FPM is an ideal candidate to secure with AppArmor. Not only can the security of a web server be endangered by security bugs in PHP itself, it can also be affected by security holes in PHP applications. By confining PHP-FPM with AppArmor, we can limit the damage when a security hole is exploited, for example by preventing PHP-FPM from reading arbitrary files on your system or executing random binaries, which may contain a Linux backdoor or crypto-miner malware.

Preparation

First it is important that you run different PHP web applications as different users by running them in different pools. The different pools can then be confined with their own separate AppArmor subprofile or hat, so that we can protect the different PHP applications from each other.

Then on Debian 12 Bookworm, I recommend that you upgrade to the AppArmor packages from my own repository, because they contain some important bug fixes, notably aa-logprof and aa-genprof supporting exec events in hats. Set up the bookworm-frehi repository in apt and upgrade AppArmor to the version available in this repository:

# apt install -t bookworm-frehi apparmor

Now we are ready to create our AppArmor profile for PHP-FPM. Debian already ships a basic profile /etc/apparmor.d/php-fpm as part of the package apparmor-profiles. However, aa-logprof and aa-genprof want to write everything in one single file, while /etc/apparmor.d/php-fpm relies heavily on including other files, and the comments in that file would be lost. For that reason I chose to create my own profile from scratch with aa-genprof. So first I disable the php-fpm profile and remove it:

# aa-disable php-fpm
# rm /etc/apparmor.d/disable/php-fpm
# rmdir /etc/apparmor.d/php-fpm.d/

Generating a profile for /usr/sbin/php-fpm8.2

I start aa-genprof:

# aa-genprof /usr/sbin/php-fpm8.2

In another shell, I stop PHP-FPM:

# systemctl stop php8.2-fpm

and now I modify all pools in /etc/php/8.2/fpm/pool.d/*.conf, adding an apparmor_hat value set to the name of the pool. For example:

apparmor_hat = myblog

This instructs PHP-FPM to switch to this subprofile or hat for this pool. If every pool uses a different hat, we can isolate the different pools running different web applications from each other.

Now verify with aa-status whether /usr/sbin/php-fpm8.2 is in complain mode, and if not, put it in complain mode manually with the aa-complain command:

# aa-complain /etc/apparmor.d/usr.sbin.php-fpm8.2

Now we start PHP-FPM:

# systemctl start php8.2-fpm

It’s a good idea to immediately press S to scan for the first events. Process them like I showed in my previous AppArmor tutorial. As usual, in case of doubt choose the more fine-grained option instead of broad abstractions.

I will discuss some specific things you will encounter.

You will see that PHP-FPM requests read access to /etc/passwd and /etc/group. It needs these in order to look up the UID and GID of the user and group values you have set in your FPM pools. As we are running our different pools in their own subprofiles, which don’t have access to these files, this is not a security problem: your PHP applications won’t have access to these files if you have properly set apparmor_hat in every pool.

At some point our PHP-FPM process switches to the profile /usr/sbin/php-fpm8.2//null-/usr/sbin/php-fpm8.2//myblog. This is because we have set apparmor_hat = myblog in the myblog pool. The null part in the profile name indicates that this profile does not exist yet, so AppArmor uses this as a temporary name. We will have to fix this manually later. For now, accept this rule.

Profile:        /usr/sbin/php-fpm8.2
Exec Condition: ALL
Target Profile: /usr/sbin/php-fpm8.2//null-/usr/sbin/php-fpm8.2//myblog

 [1 - change_profile -> /usr/sbin/php-fpm8.2//null-/usr/sbin/php-fpm8.2//myblog,]
(A)llow / [(D)eny] / (I)gnore / Audi(t) / Abo(r)t / (F)inish

Continue processing all events, and when you are done save the profile and quit aa-genprof.

Check with aa-status which mode the /usr/sbin/php-fpm8.2 profile is in, and switch it back to complain mode if necessary. If it is in enforce mode, your PHP applications will probably not work correctly any more.

We will now edit the file /etc/apparmor.d/usr.sbin.php-fpm8.2 in a text editor. Find the change_profile rules in that file, and replace them by this line:

change_profile -> /usr/sbin/php-fpm8.2//*,

This allows PHP-FPM to switch to all PHP-FPM hats.

In the file, you will also find some signal rules which refer to the temporary null profile. We need to fix these too like this:

  signal send set=kill peer=/usr/sbin/php-fpm8.2//*,
  signal send set=quit peer=/usr/sbin/php-fpm8.2//*,
  signal send set=term peer=/usr/sbin/php-fpm8.2//*,

Then, inside the profile block /usr/sbin/php-fpm8.2 flags=(complain) { … }, create an empty myblog hat in complain mode:

  ^myblog flags=(complain) {
  }

Now reload the AppArmor profile:

# apparmor_parser -r /etc/apparmor.d/usr.sbin.php-fpm8.2

Restart aa-genprof:

# aa-genprof /usr/sbin/php-fpm8.2

and restart PHP-FPM in another shell:

# systemctl restart php8.2-fpm

Now exercise your web applications and process all generated events. In particular, test posting new items on your website and uploading pictures and other files. You will notice that the events triggered by the myblog pool are now in the hat myblog, which is shown by aa-genprof as /usr/sbin/php-fpm8.2^myblog.

In WordPress I am using the WebP Express plugin, which automatically creates a webp file for every uploaded image by calling the command cwebp.

aa-genprof will show this event:

Profile:  /usr/sbin/php-fpm8.2^myblog
Execute:  /usr/bin/dash
Severity: unknown

(I)nherit / (N)amed / (U)nconfined / (X) ix On / (D)eny / Abo(r)t / (F)inish

So the myblog pool tries to launch the dash shell as part of this request. By pressing I you can let it inherit the current hat, so it will have exactly the same permissions and cannot read or modify any files from other web applications running in a different PHP-FPM pool, or other files on the system. I also had to do this for the cwebp executable and a few others which are used by WebP Express. It is important that you never choose unconfined, because then you allow these executables to do anything on your system without any AppArmor restrictions.

When WordPress sends e-mail, it will call sendmail:

[(S)can system log for AppArmor events] / (F)inish
Reading log entries from /var/log/audit/audit.log.
Target profile exists: /etc/apparmor.d/usr.sbin.sendmail

Profile:  /usr/sbin/php-fpm8.2^myblog
Execute:  /usr/sbin/sendmail
Severity: unknown

(I)nherit / (P)rofile / (N)amed / (U)nconfined / (X) ix On / (D)eny / Abo(r)t / (F)inish

Sendmail should run in its own profile, so you can choose P here.

Creating a child profile for dash

In our example our PHP-FPM worker calls dash and cwebp to process images. We have chosen to let dash and cwebp inherit all permissions of the worker. While this already is a great security improvement, because it prevents dash from executing random binaries and reading files unrelated to this web application, it is still too broad. There is no reason why dash and cwebp should be able to read our PHP files, for example, or anything other than the image files they work on. So we can improve our security even more by running these commands in a separate child profile.

To do so, I edit /etc/apparmor.d/usr.sbin.php-fpm8.2 in a text editor, and I remove all these lines from the myblog hat:

    /usr/bin/cwebp mrix,
    /usr/bin/dash mrix,
    /usr/bin/nice mrix,
    /usr/bin/which.debianutils mrix,

and I replace them with this rule, which forces dash to run in a child profile named myblogdash:

/usr/bin/dash Px -> /usr/sbin/php-fpm8.2//myblogdash,

Then in the /usr/sbin/php-fpm8.2 profile, but outside the myblog hat, I create this empty child profile:

  profile myblogdash flags=(complain) {
  }

Reload the profile, restart PHP-FPM and start aa-genprof again:

# apparmor_parser -r /etc/apparmor.d/usr.sbin.php-fpm8.2
# systemctl restart php8.2-fpm
# aa-genprof /usr/sbin/php-fpm8.2

Now I upload an image again in order to trigger the execution of dash, and I process the generated events with aa-genprof. Finally I end up with this child profile:

  profile myblogdash {
    include <abstractions/base>
    include <abstractions/bash>

    deny /var/www/example.com/wordpress/wp-admin/admin-ajax.php r,
    deny /var/www/example.com/wordpress/wp-admin/async-upload.php r,
    deny /var/www/example.com/wordpress/wp-admin/options-general.php r,
    deny /var/www/example.com/wordpress/xmlrpc.php r,

    /usr/bin/cwebp mrix,
    /usr/bin/dash mr,
    /usr/bin/nice mrix,
    /usr/bin/which.debianutils mrix,
    owner /var/www/example.com/wordpress/**.{jpg,jpeg,png} r,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossless.webp w,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossy.webp w,
    owner /var/www/example.com/wordpress/wp-content/webp-express/webp-images/doc-root/**.webp w,

  }

As you can see, our dash process and all other processes which inherit this profile, including cwebp, now have much more limited permissions and can only read JPEG and PNG files from our website and write WebP images. I had some events of the sh process reading files like wp-admin/options-general.php, but I have no idea why this process would do that. I chose to deny this, and even with this child profile in enforce mode, everything appeared to be working fine. Anyway, these files don’t contain anything sensitive, so it would not be a real problem if you accepted this.

Write permissions on PHP files

A topic which is worth thinking about: which permissions do you give your PHP worker process on the PHP files? Obviously PHP-FPM needs read access to these files in order to execute them, but do you also want to give it write access? There is a strong argument not to: it prevents hackers who manage to exploit your PHP application from writing a new PHP file with their own code (such as a PHP backdoor) and then executing it, or from modifying your existing PHP files and inserting malicious code into them. So confining your PHP process so that it cannot write PHP files is a huge advantage for security.

However, applications like WordPress will try to update themselves, and in order to do so, they need write access to their own files. Installing new plug-ins via the web interface also requires the permission to write PHP files. Automatic updates are themselves a big advantage for security. As an alternative to WordPress updating itself automatically, you could write a shell script which uses wp-cli to update WordPress and all plugins and themes, and call it regularly from a cron job or systemd timer, as in the sketch below.
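A minimal sketch of such a script, assuming wp-cli is installed as wp and your site lives in /var/www/example.com/wordpress (adapt the path, and run it as the unprivileged user owning the files):

#!/bin/sh
# Update WordPress core, plugins, themes and translations via wp-cli.
WP_PATH=/var/www/example.com/wordpress
wp --path="$WP_PATH" core update
wp --path="$WP_PATH" plugin update --all
wp --path="$WP_PATH" theme update --all
wp --path="$WP_PATH" language core update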

On my system, I noticed that the wp-supercache plugin was writing some PHP files. If I understand correctly, it does this in order to deal with pages which contain dynamic content. So in order not to break this, I need to allow writing PHP files in wp-content/cache/supercache anyway.

You will have to decide for each web application whether you want to allow it to write PHP files. At the very least, run your different PHP applications in different PHP-FPM workers with different AppArmor hats, so that it’s impossible for one PHP application to affect the others, but also strongly consider not giving an application write access to its own PHP files, because that is a huge win security-wise.

If you do decide to give a web application write access to its own PHP files, I recommend all the more setting up Modsecurity with the CoreRuleSet, blocking outgoing network connections by default for that user account in your firewall (such as Foomuuri), and installing Snuffleupagus to further limit what PHP applications can do at the PHP level.

The usr.sbin.php-fpm8.2 profile

Finally I ended up with the following profile.

abi <abi/3.0>,

include <tunables/global>

/usr/sbin/php-fpm8.2 {
  include <abstractions/base>

  capability chown,
  capability dac_override,
  capability kill,
  capability net_admin,
  capability setgid,
  capability setuid,

  signal send set=kill peer=/usr/sbin/php-fpm8.2//*,
  signal send set=quit peer=/usr/sbin/php-fpm8.2//*,
  signal send set=term peer=/usr/sbin/php-fpm8.2//*,

  /run/php/*.sock w,
  /usr/sbin/php-fpm8.2 mr,
  @{PROC}/@{pid}/attr/{apparmor/,}current rw,
  owner /etc/group r,
  owner /etc/host.conf r,
  owner /etc/hosts r,
  owner /etc/nsswitch.conf r,
  owner /etc/passwd r,
  owner /etc/php/8.2/fpm/conf.d/ r,
  owner /etc/php/8.2/fpm/php-fpm.conf r,
  owner /etc/php/8.2/fpm/php.ini r,
  owner /etc/php/8.2/fpm/pool.d/ r,
  owner /etc/php/8.2/fpm/pool.d/*.conf r,
  owner /etc/php/8.2/mods-available/*.ini r,
  owner /etc/resolv.conf r,
  owner /etc/ssl/openssl.cnf r,
  owner /proc/sys/kernel/random/boot_id r,
  owner /run/php/php8.2-fpm.pid w,
  owner /run/systemd/userdb/ r,
  owner /sys/devices/system/node/ r,
  owner /sys/devices/system/node/node0/meminfo r,
  owner /tmp/.ZendSem.* rwk,
  owner /var/log/php8.2-fpm.log w,

  change_profile -> /usr/sbin/php-fpm8.2//*,


  ^myblog {
    include <abstractions/base>
    include <abstractions/ssl_certs>

    network inet dgram,
    network inet stream,
    network inet6 dgram,
    network inet6 stream,
    network netlink raw,
    network unix stream,

    signal receive set=kill peer=/usr/sbin/php-fpm8.2,
    signal receive set=quit peer=/usr/sbin/php-fpm8.2,
    signal receive set=term peer=/usr/sbin/php-fpm8.2,

    /etc/gai.conf r,
    /etc/hosts r,
    /tmp/.ZendSem.* k,
    /usr/bin/dash Px -> /usr/sbin/php-fpm8.2//myblogdash,
    /usr/sbin/postdrop Px,
    /usr/sbin/sendmail Px,
    /var/www/example.com/wordpress/**.php r,
    /var/www/example.com/wordpress/**.{css,js,svg,po} r,
    /var/www/example.com/wordpress/**/ r,
    /var/www/example.com/wordpress/.htaccess r,
    /var/www/example.com/wordpress/wp-content/uploads/wpcf7_uploads/.htaccess r,
    /var/www/example.com/wordpress/wp-includes/*.json r,
    /var/www/example.com/wordpress/wp-includes/certificates/ca-bundle.crt r,
    owner /var/www/example.com/tmp/* rw,
    owner /var/www/example.com/wordpress/.maintenance rw,
    owner /var/www/example.com/wordpress/wp-content/autoptimize_404_handler.php rw,
    owner /var/www/example.com/wordpress/wp-content/cache/**/ rw,
    owner /var/www/example.com/wordpress/wp-content/cache/**/index.html w,
    owner /var/www/example.com/wordpress/wp-content/cache/*.tmp rw,
    owner /var/www/example.com/wordpress/wp-content/cache/autoptimize/.htaccess w,
    owner /var/www/example.com/wordpress/wp-content/cache/autoptimize/css/*.css rw,
    owner /var/www/example.com/wordpress/wp-content/cache/autoptimize/js/*.js rw,
    owner /var/www/example.com/wordpress/wp-content/cache/example.com_wp_cache_gc.txt w,
    owner /var/www/example.com/wordpress/wp-content/cache/lyteCache/*.jpg rw,
    owner /var/www/example.com/wordpress/wp-content/cache/preload_permalink.txt rw,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/**.html w,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/**.html.gz rw,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/**.php rw,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/**.tmp rw,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/**.tmp.gz rw,
    owner /var/www/example.com/wordpress/wp-content/cache/supercache/*/feed/*.php rw,
    owner /var/www/example.com/wordpress/wp-content/cache/taxonomy_category.txt rw,
    owner /var/www/example.com/wordpress/wp-content/cache/taxonomy_post_tag.txt rw,
    owner /var/www/example.com/wordpress/wp-content/languages/**.{po,mo} rw,
    owner /var/www/example.com/wordpress/wp-content/languages/plugins/*.json w,
    owner /var/www/example.com/wordpress/wp-content/plugins/**.json r,
    owner /var/www/example.com/wordpress/wp-content/plugins/*/ rw,
    owner /var/www/example.com/wordpress/wp-content/plugins/webp-express/lib/options/options/**.inc r,
    owner /var/www/example.com/wordpress/wp-content/plugins/webp-express/lib/options/options/conversion-options/*.inc r,
    owner /var/www/example.com/wordpress/wp-content/plugins/webp-express/test/*.jpg r,
    owner /var/www/example.com/wordpress/wp-content/plugins/webp-express/test/*.png r,
    owner /var/www/example.com/wordpress/wp-content/themes/.htaccess rwk,
    owner /var/www/example.com/wordpress/wp-content/themes/webp-express-test-images/*.JPEG rw,
    owner /var/www/example.com/wordpress/wp-content/themes/webp-express-test-images/*.PNG rw,
    owner /var/www/example.com/wordpress/wp-content/upgrade-temp-backup/**/ rw,
    owner /var/www/example.com/wordpress/wp-content/upgrade-temp-backup/plugins/** rw,
    owner /var/www/example.com/wordpress/wp-content/upgrade/** rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/**.{jpg,jpeg,png,webp} rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/**/ rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/.htaccess rwk,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp w,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossless.webp w,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossy.webp rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-images/*.JPEG rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-images/*.PNG rw,
    owner /var/www/example.com/wordpress/wp-content/uploads/wp-statistics/GeoLite2-Country.mmdb r,
    owner /var/www/example.com/wordpress/wp-content/webp-express/config/*.json rw,
    owner /var/www/example.com/wordpress/wp-content/webp-express/webp-images/.htaccess rwk,
    owner /var/www/example.com/wordpress/wp-content/webp-express/webp-images/doc-root/**.webp rw,

  }

  profile myblogdash {
    include <abstractions/base>
    include <abstractions/bash>

    deny /var/www/example.com/wordpress/wp-admin/admin-ajax.php r,
    deny /var/www/example.com/wordpress/wp-admin/async-upload.php r,
    deny /var/www/example.com/wordpress/wp-admin/options-general.php r,
    deny /var/www/example.com/wordpress/xmlrpc.php r,

    /usr/bin/cwebp mrix,
    /usr/bin/dash mr,
    /usr/bin/nice mrix,
    /usr/bin/which.debianutils mrix,
    owner /var/www/example.com/wordpress/**.{jpg,jpeg,png} r,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossless.webp w,
    owner /var/www/example.com/wordpress/wp-content/uploads/webp-express-test-conversion.webp.lossy.webp w,
    owner /var/www/example.com/wordpress/wp-content/webp-express/webp-images/doc-root/**.webp w,

  }
}

Some advice

Creating an AppArmor profile for more complicated applications like this can be a challenge. You will need to retest different things a lot, and be prepared to do some hand editing of the profile. Some useful advice that I learnt along the way:

  • Often check /var/log/audit/audit.log yourself to see what is happening. At a certain moment I ended up in a loop where my audit log was constantly being filled, because the process tried to switch to a profile that did not exist yet; you want to intervene immediately if this happens, to prevent the logs from filling up your disk.
  • After making modifications to an AppArmor profile, it’s not a bad idea to restart your process. I am not sure, but I think some problems I encountered were fixed by restarting the process.
  • Stop aa-genprof/aa-logprof when you manually edit a profile, and start them again after editing and reloading it with apparmor_parser, otherwise these processes won’t be aware of the modifications you have made to the profile file.
  • When adding hats or child profiles, revise the permissions of your main profile: maybe some things can be moved to the new hat or child profile and can thus be removed completely from the main profile.
  • Keep your profile long enough in complain mode and only switch to enforce mode after enough time without events (I would say: a couple of days). If you encounter problems with your PHP applications later on, don’t forget to check your audit.log.

Protecting systemd services with AppArmor

In a previous blog post I explained how to create an AppArmor profile for a specific binary. However, it is also possible to create a profile which is not linked to a specific executable, and load it in any systemd service. This is for example useful for services which share the same binary and only differ in the arguments given on the command line. It can also be useful for applications you start with uvicorn, or interpreted languages like Lisp or Rscript.

I assume you already have a systemd service file called myservice.service. In order to protect this service, create /etc/apparmor.d/myservice:

abi <abi/3.0>,
include <tunables/global>

profile myservice flags=(complain) {
  include <abstractions/base>
}

You can see that I don’t use the path of a binary, but chose an arbitrary name.

Load the profile:

# apparmor_parser /etc/apparmor.d/myservice

Modify /etc/systemd/system/myservice.service or run systemctl edit myservice and in the [Service] section add:

AppArmorProfile=myservice

Reload the service files:

# systemctl daemon-reload

Start your service:

# systemctl start myservice

Exercise your service and process all events with aa-logprof:

# aa-logprof

Don’t forget to put the profile in enforce mode with aa-enforce once you are sure it is complete.
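For example:

# aa-enforce /etc/apparmor.d/myservice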

Frehi Debian package repository for Bookworm

While creating AppArmor profiles, I recently encountered a few problems with the packages in Debian 12 Bookworm. If you use a more recent Linux kernel than the one in Bookworm (Linux 6.1 from Bookworm works fine), apparmor_parser can hang on certain profiles and cause a null pointer dereference in the kernel. This bug is also being tracked as upstream bug 346, and a partial fix has been committed to the AppArmor git repository. Another problem I encountered is that aa-logprof and aa-genprof completely ignore any exec events from within a subprofile, because these tools don’t support nested profiles. An AppArmor developer created a merge request which at least shows these events in aa-genprof and aa-logprof, and gives you the option to inherit the profile or run the new process unconfined. If you want to create a child profile, you will still have to do this manually, but at least other valid options are now available.

I also recently stumbled on the package libapache2-mod-qos, which is completely broken in Debian Bookworm: it is built against an older libpcre version which conflicts with the one Apache is using, causing it to crash immediately at startup. The bug is fixed in Debian trixie/sid, but that does not help users of the stable Debian release.

So I decided to build AppArmor 3.0.12 from sid with the additional patches mentioned above for Debian Bookworm, as well as the new libapache2-mod-qos which fixes the crash at Apache startup. I have created a public repository you can use if you are interested in these fixes. The packages work for me, but I cannot guarantee that they won’t cause any problems for you, so use them at your own risk. I only build for AMD64, so other architectures are not available.

Setting up the bookworm-frehi repository on Debian

In order to use these packages, create a file /etc/apt/sources.list.d/bookworm-frehi.list with this content:

deb http://debian.frehi.be/debian bookworm-frehi main contrib non-free
deb-src http://debian.frehi.be/debian bookworm-frehi main contrib non-free

You can also use https in case you prefer that, but I try to use http because then I can cache packages with apt-cacher-ng.

Then create a file /etc/apt/preferences.d/bookworm-frehi:

Package: *
Pin: release n=bookworm-frehi
Pin-Priority: 99

This makes sure that by default you will still be using packages from the Debian repository, and it will only use packages from this repository when you explicitly request to do so.

Then you will have to fetch the public GPG key from pgp.surf.nl and add it to your trusted apt keys:

$ export GNUPGHOME="$(mktemp -d)"
$ gpg --keyserver pgp.surf.nl --recv-keys 1FBBAB8D2CA17863
$ gpg --export "1FBBAB8D2CA17863" > /tmp/bookworm-frehi.gpg
# mv /tmp/bookworm-frehi.gpg /etc/apt/trusted.gpg.d/
$ rm -rf "$GNUPGHOME"

Now run:

# apt update

and you can use the repository, for example:

# apt-cache policy apparmor
# apt-cache policy libapache2-mod-qos
# apt install -t bookworm-frehi apparmor libapache2-mod-qos

Protecting your Linux server against security exploits with AppArmor

AppArmor is a security feature of the Linux kernel which adds Mandatory Access Control (MAC) to your system. Mandatory Access Control enforces a security policy defined by the administrator of the system which cannot be overridden by the user. You can use AppArmor to harden your system, protecting it against known and even unknown zero-day exploits, by restricting the applications on your system. You define an AppArmor profile for security-sensitive applications which confines them to only the specific files, network access and other capabilities they need. This way it becomes much harder or even impossible for an attacker to abuse a security hole in a service for which you have defined an AppArmor profile. SuSE sometimes calls this immunization.

AppArmor is similar to SELinux (Security-Enhanced Linux), the other well-known Linux security module, which is used by default on Fedora and Red Hat Enterprise Linux. Debian and Ubuntu have opted for AppArmor by default. AppArmor profiles are a bit easier to manage than SELinux policies.

Setting up AppArmor on Debian

First we install all the AppArmor utilities and all default profiles:

# apt install apparmor apparmor-profiles apparmor-utils apparmor-profiles-extra libpam-apparmor auditd

With the aa-status command you can check the AppArmor status:

# aa-status
43 profiles are loaded.
20 profiles are in enforce mode.
   /usr/bin/freshclam
   /usr/bin/man
   /usr/bin/pidgin
   /usr/bin/pidgin//sanitized_helper
   /usr/bin/totem
   /usr/bin/totem-audio-preview
   /usr/bin/totem-video-thumbnailer
   /usr/bin/totem//sanitized_helper
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/sbin/clamd
   /{,usr/}sbin/dhclient
   apt-cacher-ng
   lsb_release
   man_filter
   man_groff
   nvidia_modprobe
   nvidia_modprobe//kmod
   tcpdump
23 profiles are in complain mode.
   /usr/bin/irssi
   /usr/sbin/sssd
   avahi-daemon
   dnsmasq
   dnsmasq//libvirt_leaseshelper
   identd
   klogd
   mdnsd
   nmbd
   nscd
   php-fpm
   ping
   samba-bgqd
   samba-dcerpcd
   samba-rpcd
   samba-rpcd-classic
   samba-rpcd-spoolss
   smbd
   smbldap-useradd
   smbldap-useradd///etc/init.d/nscd
   syslog-ng
   syslogd
   traceroute
0 profiles are in kill mode.
0 profiles are in unconfined mode.
17 processes have profiles defined.
0 processes are in enforce mode.
14 processes are in complain mode.
   /usr/sbin/php-fpm8.2 (834763) php-fpm
   /usr/sbin/php-fpm8.2 (834764) php-fpm
   /usr/sbin/php-fpm8.2 (834765) php-fpm
   /usr/sbin/php-fpm8.2 (834766) php-fpm
   /usr/sbin/php-fpm8.2 (834767) php-fpm
   /usr/sbin/php-fpm8.2 (834768) php-fpm
   /usr/sbin/php-fpm8.2 (834769) php-fpm
   /usr/sbin/php-fpm8.2 (834770) php-fpm
   /usr/sbin/php-fpm8.2 (834771) php-fpm
   /usr/sbin/php-fpm8.2 (834772) php-fpm
   /usr/sbin/php-fpm8.2 (834773) php-fpm
   /usr/sbin/php-fpm8.2 (834774) php-fpm
   /usr/sbin/php-fpm8.2 (834775) php-fpm
   /usr/sbin/php-fpm8.2 (834826) php-fpm
3 processes are unconfined but have a profile defined.
   /usr/bin/freshclam (797288)
   /usr/sbin/clamd (662585)
   /usr/sbin/sssd (647439)
0 processes are in mixed mode.
0 processes are in kill mode.

We see that AppArmor is enabled and that 43 profiles are loaded, of which 20 are being enforced and another 23 are in complain mode, which means that a warning will be logged when they violate their policy, but they will not be blocked. Currently 14 running processes are confined in complain mode, and 3 processes are unconfined even though a profile is defined for them.

All the profiles are installed in /etc/apparmor.d. They are text files, usually named after the full path of the binary with the slashes replaced by dots.

To completely disable an AppArmor profile you use the aa-disable command. For example:

# aa-disable /usr/sbin/haveged

To enable a profile in complain mode, use the aa-complain command:

# aa-complain /usr/sbin/haveged

Use the aa-enforce command to put an AppArmor profile in enforce mode:

# aa-enforce /usr/sbin/haveged

If you make a modification to one of the profiles in /etc/apparmor.d, you will need to make the kernel reload the profile with the apparmor_parser command:

# apparmor_parser -r /etc/apparmor.d/usr.sbin.haveged

If you want to remove a profile currently loaded in the kernel, you can use this command:

# apparmor_parser -R /etc/apparmor.d/usr.sbin.haveged

When you have done this, you can remove the file if you don’t need it any more.

Creating AppArmor profiles with aa-genprof

I recommend creating an AppArmor profile for every service which is accessible via the network and/or deals with data from the outside world.

You can run the aa-unconfined command to find applications which listen on a TCP or UDP port and do not have an AppArmor profile loaded. These are excellent candidates to create a profile for.
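
For example, on my system aa-unconfined reported the two services I create profiles for below (illustrative output; your PIDs and list will differ):

# aa-unconfined
512 /usr/sbin/kresd not confined
766 /usr/lib/postfix/sbin/master not confined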

Example 1: Creating an AppArmor profile for Knot Resolver

One of the unconfined processes indicated by aa-unconfined is /usr/sbin/kresd, which is Knot Resolver, a caching DNS resolver. I will generate a profile with aa-genprof. I need two shells for that: one where I run aa-genprof, and another one where I exercise the functionality of Knot Resolver. aa-genprof creates an empty profile and puts it in complain mode, so that all events generated by the process will be logged. aa-genprof will then propose rules for each of them.

So in the first shell I run:

# aa-genprof /usr/sbin/kresd
Updating AppArmor profiles in /etc/apparmor.d.

Before you begin, you may wish to check if a
profile already exists for the application you
wish to confine. See the following wiki page for
more information:
https://gitlab.com/apparmor/apparmor/wikis/Profiles

Profiling: /usr/sbin/kresd

Please start the application to be profiled in
another window and exercise its functionality now.

Once completed, select the "Scan" option below in
order to scan the system logs for AppArmor events.

For each AppArmor event, you will be given the
opportunity to choose whether the access should be
allowed or denied.

[(S)can system log for AppArmor events] / (F)inish

Now in the second shell we need to exercise the functionality of kresd. Let’s start by stopping, starting and restarting kresd:

# systemctl stop system-kresd.slice && systemctl start kresd.target && systemctl restart system-kresd.slice

Normally I would also recommend reloading the service, but kresd does not support reload, so that is of no use here. Now we exercise kresd by doing some DNS lookups, for example with the host and the kdig commands:

$ host example.com
$ kdig -t mx debian.org

Wait for some time, then go back to the shell where aa-genprof is running and press S to scan the audit log for any generated events.

Profile:    /usr/sbin/kresd
Capability: net_bind_service
Severity:   8

 [1 - include <abstractions/nis>]
  2 - capability net_bind_service,
(A)llow / [(D)eny] / (I)gnore / Audi(t) / Abo(r)t / (F)inish

Here kresd requests the net_bind_service capability. This is needed to listen on a privileged port (port number < 1024). aa-genprof proposes 2 possible rules. The first option is to include abstractions/nis, which contains capability net_bind_service. Files in /etc/apparmor.d/abstractions contain common rules which may be included in multiple different profiles. Often these abstractions are too broad, and in case of doubt I recommend choosing the more specific option, which is option 2 here. I press 2 to select this option, then press A to allow this.

Then Knot Resolver checks whether we have transparent hugepages enabled by reading /sys/kernel/mm/transparent_hugepage/enabled. This is because Knot Resolver uses jemalloc, which reads this file.

Profile:  /usr/sbin/kresd
Path:     /sys/kernel/mm/transparent_hugepage/enabled
New Mode: r
Severity: 4

 [1 - /sys/kernel/mm/transparent_hugepage/enabled r,]
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / Abo(r)t / (F)inish

I press A to allow this.

Knot Resolver uses lua for plug-ins, and so wants to read lua code:

Profile:  /usr/sbin/kresd
Path:     /usr/share/lua/5.1/cqueues/socket.lua
New Mode: r
Severity: unknown

 [1 - include <abstractions/totem>]
  2 - /usr/share/lua/5.1/cqueues/socket.lua r,
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / Abo(r)t / (F)inish

Now I want to allow reading of all lua files in /usr/share/lua, independent of the lua version, so that when Knot Resolver starts using a newer lua version in the future, the rule is still valid. Press 2 to select that rule and then press E for glob with extension multiple times, until you get option 5 - /usr/share/lua/**.lua r,. This matches all files whose names end with .lua anywhere within /usr/share/lua. Make sure this option is selected and then press A to allow this.

kresd creates a control file:

Profile:  /usr/sbin/kresd
Path:     /run/knot-resolver/control/1
New Mode: owner w
Severity: unknown

 [1 - owner /run/knot-resolver/control/1 w,]
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / (O)wner permissions off / Abo(r)t / (F)inish

The mode owner w means the process is only allowed to write to the file if the user it runs as is also the owner of the file.

If you are running multiple kresd processes, each process will create its own control file, so we want to use a wildcard here. Press G for glob so that you get the option 2 – owner /run/knot-resolver/control/* w, and press A to accept this.

kresd tries to take an exclusive lock on /var/cache/knot-resolver/lock.mdb:

Profile:  /usr/sbin/kresd
Path:     /var/cache/knot-resolver/lock.mdb
New Mode: owner rwk
Severity: unknown

 [1 - owner /var/cache/knot-resolver/lock.mdb rwk,]
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / (O)wner permissions off / Abo(r)t / (F)inish

Notice the mode rwk: k is the lock permission, and in combination with w this allows an exclusive lock. You can find the explanation of all file permissions in the AppArmor Core Policy Reference. Press A to allow this.

I’m using rpz-downloader to download RPZ files containing malicious domains I want to block. Knot Resolver wants to read these RPZ files:

Profile:  /usr/sbin/kresd
Path:     /var/lib/rpz-downloader/urlhaus.abuse.ch.rpz
New Mode: r
Severity: unknown

 [1 - /var/lib/rpz-downloader/urlhaus.abuse.ch.rpz r,]
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / Abo(r)t / (F)inish

I want to allow access to all RPZ files in /var/lib/rpz-downloader, so I press N and I change the path to /var/lib/rpz-downloader/*.rpz and press A to accept this.

Knot Resolver tries to open an IPv4 TCP socket:

Profile:        /usr/sbin/kresd
Network Family: inet
Socket Type:    stream

 [1 - include <abstractions/apache2-common>]
  2 - include <abstractions/nameservice>
  3 - network inet stream,
(A)llow / [(D)eny] / (I)gnore / Audi(t) / Abo(r)t / (F)inish

The proposed abstractions are too broad, so I select 3. You will also need to allow network inet dgram (IPv4 UDP), network inet6 stream (IPv6 TCP) and network inet6 dgram (IPv6 UDP).

When you have processed all events, you will get this:

= Changed Local Profiles =

The following local profiles were changed. Would you like to save them?

 [1 - /usr/sbin/kresd]
(S)ave Changes / Save Selec(t)ed Profile / [(V)iew Changes] / View Changes b/w (C)lean profiles / Abo(r)t

Only one profile is modified and you can view the changes by pressing V. You can save the changes by pressing S. After doing so, aa-genprof will continue running and you can press S to scan the logs again for any newly generated events. This way you can gradually improve your profile. If you are finished, you can quit aa-genprof by pressing F. You can run aa-genprof for your process again at any time; it will then modify the existing profile if it finds any new events.

When you have finished, check with aa-status whether your profile is still in complain mode, and if not, change it back to complain mode with aa-complain.

This is how my final profile /etc/apparmor.d/usr.sbin.kresd looks:

abi <abi/3.0>,

include <tunables/global>

/usr/sbin/kresd flags=(complain) {
  include <abstractions/base>

  capability net_bind_service,

  network inet dgram,
  network inet stream,
  network inet6 dgram,
  network inet6 stream,

  /etc/group r,
  /etc/knot-resolver/kresd.conf r,
  /etc/nsswitch.conf r,
  /etc/passwd r,
  /etc/ssl/certs/ca-certificates.crt r,
  /etc/ssl/openssl.cnf r,
  /sys/kernel/mm/transparent_hugepage/enabled r,
  /usr/sbin/kresd mr,
  /usr/share/dns/root.hints r,
  /usr/share/dns/root.key r,
  /usr/share/lua/**.lua r,
  /var/lib/rpz-downloader/*.rpz r,
  owner /run/knot-resolver/control/ w,
  owner /run/knot-resolver/control/* w,
  owner /var/cache/knot-resolver/data.mdb rw,
  owner /var/cache/knot-resolver/lock.mdb rwk,

}

The flags=(complain) indicate that this profile will be loaded in complain mode. Keep it like that for some time, until you are sure no new events are generated any more. You can check this by running aa-logprof. aa-logprof is similar to aa-genprof, except that it processes past events logged in /var/log/audit/audit.log for all processes for which you have defined an AppArmor profile, not just a single one like aa-genprof. If you are sure the profile is complete, enforce it by running aa-enforce /usr/sbin/kresd.

Example 2: Creating an AppArmor profile for Postfix

Another process marked by aa-unconfined on my system is /usr/lib/postfix/sbin/master, Postfix’ master process. I create a profile with aa-genprof:

# aa-genprof /usr/lib/postfix/sbin/master

In another shell, I stop, start, restart and reload Postfix:

# systemctl stop postfix@- && systemctl start postfix@- && systemctl restart postfix@- && systemctl reload postfix@-

To further exercise Postfix, send some mails, both outgoing mails to other mail servers and incoming mails originating from an external mail server. Check the mail queue with:

# mailq

I’m not going through all events generated by Postfix, but I want to point out one particular type of event which we did not see with Knot Resolver:

Profile:  /usr/lib/postfix/sbin/master
Execute:  /usr/lib/postfix/sbin/showq
Severity: unknown

(I)nherit / (C)hild / (P)rofile / (N)amed / (U)nconfined / (X) ix On / (D)eny / Abo(r)t / (F)inish

The master process tries to start another executable, in this case showq. You have several options:

  • (I)nherit: the new process inherits the profile from the parent process. This means that it will run with the same permissions as the parent process.
  • (C)hild: the new process will use a subprofile defined within this profile. This is useful if you want to run the process with a different profile, depending on how it was started.
  • (P)rofile: the new process will run in its own generic profile, in this case /etc/apparmor.d/usr.lib.postfix.sbin.showq. Use this if you want to run it in the same profile as when you would have started /usr/lib/postfix/sbin/showq directly.
  • (N)amed: the new process will run with the profile with a name of your choice.
  • (U)nconfined: the new process will run unconfined, without any AppArmor restrictions.

Here I choose P because I want showq to always run with the same profile, no matter how it is started. Not only will aa-genprof create a rule which allows master to launch showq, it will also create a profile for showq and start logging future events.

Then you will get the question whether you want to sanitize the environment:

Should AppArmor sanitise the environment when
switching profiles?

Sanitising environment is more secure,
but some applications depend on the presence
of LD_PRELOAD or LD_LIBRARY_PATH.

[(Y)es] / (N)o

My Postfix installation does not need any environment variables like LD_LIBRARY_PATH to be set, so it can safely sanitize the environment. I press Y.

When you have finished, make sure that all profiles are running in complain mode, otherwise you risk problems with mail delivery:

# cd /etc/apparmor.d
# for i in usr.lib.postfix.sbin.*; do aa-complain $i; done

My /etc/apparmor.d/usr.lib.postfix.sbin.master profile looks like this at the moment after some manual tweaking:

abi <abi/3.0>,

include <tunables/global>

/usr/lib/postfix/sbin/master flags=(complain) {
  include <abstractions/base>
  include <abstractions/postfix-common>

  capability dac_read_search,
  capability kill,
  capability net_bind_service,

  network inet stream,
  network inet6 dgram,
  network inet6 stream,
  network netlink raw,

  signal send peer=/usr/lib/postfix/sbin/*,

  /usr/lib/postfix/sbin/anvil Px,
  /usr/lib/postfix/sbin/cleanup Px,
  /usr/lib/postfix/sbin/lmtp Px,
  /usr/lib/postfix/sbin/local Px,
  /usr/lib/postfix/sbin/master mr,
  /usr/lib/postfix/sbin/pickup Px,
  /usr/lib/postfix/sbin/postscreen Px,
  /usr/lib/postfix/sbin/proxymap Px,
  /usr/lib/postfix/sbin/qmgr Px,
  /usr/lib/postfix/sbin/scache Px,
  /usr/lib/postfix/sbin/showq Px,
  /usr/lib/postfix/sbin/smtp Px,
  /usr/lib/postfix/sbin/smtpd Px,
  /usr/lib/postfix/sbin/tlsmgr Px,
  /usr/lib/postfix/sbin/trivial-rewrite Px,
  owner /etc/gai.conf r,
  owner /etc/group r,
  owner /etc/nsswitch.conf r,
  owner /etc/passwd r,
  owner /var/lib/postfix/master.lock rwk,
  owner /var/spool/postfix/pid/master.pid rwk,
  owner /var/spool/postfix/private/anvil w,
  owner /var/spool/postfix/private/bounce w,
  owner /var/spool/postfix/private/bsmtp w,
  owner /var/spool/postfix/private/defer w,
  owner /var/spool/postfix/private/discard w,
  owner /var/spool/postfix/private/dnsblog w,
  owner /var/spool/postfix/private/error w,
  owner /var/spool/postfix/private/ifmail w,
  owner /var/spool/postfix/private/lmtp w,
  owner /var/spool/postfix/private/local w,
  owner /var/spool/postfix/private/maildrop w,
  owner /var/spool/postfix/private/mailman w,
  owner /var/spool/postfix/private/proxymap w,
  owner /var/spool/postfix/private/proxywrite w,
  owner /var/spool/postfix/private/relay w,
  owner /var/spool/postfix/private/retry w,
  owner /var/spool/postfix/private/rewrite w,
  owner /var/spool/postfix/private/scache w,
  owner /var/spool/postfix/private/scalemail-backend w,
  owner /var/spool/postfix/private/smtp w,
  owner /var/spool/postfix/private/smtp-amavis w,
  owner /var/spool/postfix/private/smtpd w,
  owner /var/spool/postfix/private/tlsmgr w,
  owner /var/spool/postfix/private/tlsproxy w,
  owner /var/spool/postfix/private/trace w,
  owner /var/spool/postfix/private/uucp w,
  owner /var/spool/postfix/private/verify w,
  owner /var/spool/postfix/private/virtual w,
  owner /var/spool/postfix/public/cleanup w,
  owner /var/spool/postfix/public/flush w,
  owner /var/spool/postfix/public/pickup rw,
  owner /var/spool/postfix/public/qmgr rw,
  owner /var/spool/postfix/public/showq w,

}

I’m not going to add all profiles for other Postfix processes here, but you get the idea. More rules might still be needed, so that’s why I keep them running in complain mode and I regularly run aa-logprof.

Debugging problems caused by AppArmor

AppArmor logs events in /var/log/audit/audit.log if you have auditd running. You can grep for apparmor to see all events.
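
For example, a file access denied by the kresd profile would be logged along these lines (an illustrative event, not literal output from my system):

# grep apparmor /var/log/audit/audit.log | tail -n 1
type=AVC msg=audit(1711460000.123:4242): apparmor="DENIED" operation="open" profile="/usr/sbin/kresd" name="/etc/knot-resolver/extra.conf" pid=1234 comm="kresd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0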

To easily process all new events logged in /var/log/audit/audit.log you can use aa-logprof:

# aa-logprof

Inter and IBM Plex fonts for your Linux desktop

Recently I came across a post on the Fediverse mentioning the Inter fonts. There is even a GNOME issue open discussing making Inter the default font in a future version of the GNOME desktop. This prompted me to try this font, and I have to say I am liking it so far.

The Inter font does not have a monospace version available, but the Inter developer recommended some nice monospace fonts which match Inter. I decided to go for the IBM Plex Mono font.

Both Inter and IBM Plex are packaged in Debian, so you can easily install them with apt:

# apt install fonts-inter fonts-ibm-plex

To change the fonts in the GNOME desktop, you need to launch gnome-tweaks (install the package with apt if it’s not present on your system) and go to Fonts. I set the Interface Text and Document Text fonts to Inter Light 10, the Monospace Text font to IBM Plex Mono Regular 10 and the Legacy Window Titles font to Inter Bold.
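
If you prefer the command line over gnome-tweaks, the same fonts can be set with gsettings; these are the standard GNOME keys for the interface, document, monospace and legacy titlebar fonts:

$ gsettings set org.gnome.desktop.interface font-name 'Inter Light 10'
$ gsettings set org.gnome.desktop.interface document-font-name 'Inter Light 10'
$ gsettings set org.gnome.desktop.interface monospace-font-name 'IBM Plex Mono 10'
$ gsettings set org.gnome.desktop.wm.preferences titlebar-font 'Inter Bold 10'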

Then in Firefox, click on Settings in the menu; if you scroll down you will find the Fonts section with an Advanced… button next to it. Click on that button and set the Proportional font to Sans Serif, the Serif font to IBM Plex Serif, the Sans Serif font to Inter and the Monospace font to IBM Plex Mono.

Of course you can use these fonts in other desktops and browsers.

Enjoy your fresh desktop fonts!

Wireguard VPN with systemd-networkd and Foomuuri

After my first successful implementation of Foomuuri on a server with an IPv4 connection, I wanted to try Foomuuri in a different environment. This time I chose to implement it on my IPv4/IPv6 dual stack Wireguard VPN server. I originally set up this system with Shorewall, so let’s see how we should configure this with Foomuuri.

While I was at it, I also moved the configuration of Wireguard to systemd-networkd, where the main network interface was already configured. This was also useful because some things which were configured in Shorewall before, and which Foomuuri does not do by itself, can now be configured in systemd-networkd.

systemd-networkd configuration

I create /etc/systemd/network/wg0.netdev with these contents:

[NetDev]
Name = wg0
Kind = wireguard
Description = wg0 - Wireguard VPN server

[WireGuard]
PrivateKeyFile = /etc/systemd/network/wg0.privkey
ListenPort = 51820

# client 1
[WireGuardPeer]
PublicKey = publickey_of_client
AllowedIPs = 192.168.7.2/32
AllowedIPs = aaaa:bbbb:cccc:dddd:ffff::2/128

I moved the /etc/wireguard/privatekey file to /etc/systemd/network/wg0.privkey, and then gave it appropriate permissions so that the user systemd-network can read it:

# chown root:systemd-network /etc/systemd/network/wg0.privkey
# chmod 640 /etc/systemd/network/wg0.privkey
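
If you still have to generate a key pair for the server, you can do that with the wg tool; a minimal sketch which writes the private key to the path used above and prints the public key for your clients:

# umask 077
# wg genkey > /etc/systemd/network/wg0.privkey
# wg pubkey < /etc/systemd/network/wg0.privkey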

Then I create /etc/systemd/network/wg0.network:

[Match]
Name = wg0

[Network]
Address = 192.168.7.1/24
Address = fd42:42:42::1/64

[Route]
Destination = aaaa:bbbb:cccc:dddd:ffff::2/128

For IPv4, we set the address to 192.168.7.1/24 and systemd-networkd will automatically take care of adding this subnet to the routing table. As we are using public IPv6 addresses for the VPN clients, I add a [Route] section which takes care of adding these IP addresses to the routing table.

The configuration of the public network interface is stored in /etc/systemd/network/public.network:

[Match]
Name=ens192

[Network]
Address=aaaa:bbbb:cccc:dddd:0000:0000:0000:0001/64
Gateway=fe80::1
DNS=2a0f:fc80::
DNS=2a0f:fc81::
DNS=193.110.81.0
DNS=185.253.5.0
Address=www.xxx.yyy.zzz/24
Gateway=www.xxx.yyy.1
IPForward=yes
IPv6ProxyNDP=1
IPv6ProxyNDPAddress=aaaa:bbbb:cccc:dddd:ffff::2

Important here is that we enable IP forwarding and the IPv6 NDP proxy. Both are things we could configure in Shorewall before, but which Foomuuri does not support setting. This is not a problem, because they can be set up directly in systemd-networkd.
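
Once systemd-networkd has applied the configuration, you can verify both settings from the shell (illustrative output):

$ sysctl net.ipv4.conf.ens192.forwarding net.ipv6.conf.ens192.forwarding
net.ipv4.conf.ens192.forwarding = 1
net.ipv6.conf.ens192.forwarding = 1
$ ip -6 neigh show proxy
aaaa:bbbb:cccc:dddd:ffff::2 dev ens192 proxy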

To reload the configuration for all network interfaces, I run:

networkctl reload

To bring up the Wireguard connection:

networkctl up wg0

Because of systemd issue #25547, networkctl reload is not enough if you make changes to the peer configuration in wg0.netdev. You will first have to delete the network device with the command

networkctl delete wg0

after which you can run networkctl reload and bring up the network connection. If you are in doubt whether all network interfaces are configured correctly, you can also completely restart the systemd-networkd service:

# systemctl restart systemd-networkd

While working on the network configuration, make sure you have access to a real console of the system, so that you can still fix things if the system becomes inaccessible over the network.

Foomuuri configuration

Now we define the zones in /etc/foomuuri/zones.conf:

zone {
  localhost
  public ens192
  vpn wg0
}

Foomuuri by default does not define a macro for the Wireguard UDP port, so I create one in /etc/foomuuri/services.conf:

macro {
	wireguard udp dport 51820
}

I adjust some logging settings in /etc/foomuuri/log.conf. In case I want to filter outgoing connections from the machine in the future, I want to log the UID of the process, and I also increase the log rate, as I had the impression I was sometimes missing valuable log messages while debugging. Adjust the values if you want to reduce log spam.

foomuuri {
  log_rate "2/second burst 20"
  log_level "level info flags skuid"
}

I set up masquerading (SNAT) in /etc/foomuuri/snat.conf:

snat {
  saddr 192.168.7.0/24 oifname ens192 masquerade
}

Then I set up these rules for traffic going through our firewall:

public-localhost {
  ssh
  wireguard
  icmpv6 1 2 3 4 128
  drop log
}

localhost-public {
  accept
}

vpn-public {
  accept
}

public-vpn {
  icmpv6 1 2 3 4 128
  drop log
}

vpn-localhost {
  accept
}

localhost-vpn {
  icmpv6 1 2 3 4 128
  reject log
}

Notice that I allow the essential ICMPv6 types (destination unreachable, packet too big, time exceeded, parameter problem and echo request), as this traffic should not be dropped on an IPv6 network.

As usual, check your configuration before reloading it:

# foomuuri check
# foomuuri reload

Testing and debugging

If things don’t work as expected, enable debugging in the wireguard kernel module and check the kernel logs. I refer to the previous article about this for more details.
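
In short, if your kernel has dynamic debug support, you can enable the wireguard module’s debug messages and follow them in the kernel log:

# echo "module wireguard +p" > /sys/kernel/debug/dynamic_debug/control
# journalctl -kf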

Conclusion

Setting up Foomuuri was pretty easy again. The most difficult thing was getting the systemd-networkd configuration completely right. Especially with IPv6 it can take quite some time debugging before everything works as expected.

Setting up Foomuuri, an nftables based firewall

Up to now I have always been using the Shorewall firewall on all my Linux systems. I find it very easy to configure, while at the same time it’s very powerful and flexible, so that you can also use it for more complicated set-ups, such as routers with multiple network interfaces, VPNs and bridges. Unfortunately Shorewall is still based on the old xtables (iptables, ip6tables, ebtables, etc…) infrastructure. While it still works, and in reality the iptables commands are now front-ends to the more modern nftables back-end, Shorewall development has stalled and it looks very unlikely it will ever be ported to nftables.

I started using Firewalld, the firewall which is used by default on Red Hat and Fedora based systems. However, I did not like it. Configuration of Firewalld happens through the command line with firewall-cmd, which I find much more complicated than just editing a configuration file, which usually contains examples and gives you an easy overview of the configuration. Firewalld saves its configuration in XML files. You could edit these files instead of using firewall-cmd, but that is obviously much more complicated than editing configuration files which were designed for human editing. Furthermore, I found Firewalld to be very inflexible: unlike Shorewall, it does not have support for filtering traffic on a bridge (layer 2 filtering).

Recently I discovered the nftables based firewall Foomuuri. It’s still a very young project, but it’s actively developed, already has extensive features, is packaged in Debian and is configured through human-readable configuration files. I decided to try it on a server where I wanted to filter incoming and outgoing network traffic.

Installing Foomuuri on Debian

Foomuuri is available in Debian testing and unstable, but it has also been backported to Debian 12 Bookworm. To use that package, you have to enable the bookworm-backports repository first. Then install the foomuuri package:

# apt install foomuuri

If you are using NetworkManager, also install foomuuri-firewalld, because it allows NetworkManager to set the zone a network interface belongs to.

Configuring Foomuuri

Foomuuri can be configured through files in the /etc/foomuuri directory. Foomuuri will read all files whose names end in .conf, so you can split up the configuration in as many files as you want, or just put everything in a single file, as you prefer. I like the split configuration files of Shorewall, so I will do something similar here.

Before activating the configuration, always run

# foomuuri check

to validate your configuration. You can start and stop the firewall by starting and stopping the systemd service, you can reload the configuration by running

 # foomuuri reload

You can find the documentation of Foomuuri on the Foomuuri wiki.

Defining zones

The first thing we have to do is define the zones and set which interfaces belong to which zone. I create /etc/foomuuri/zones.conf:

zone {
  localhost
  public enp1s0
}

I create the zone localhost and the zone public and add the network interface enp1s0 to it. You can add multiple interfaces to a zone by separating them by spaces. If you are using NetworkManager, you don’t have to add the interfaces here and can leave the zone empty. You can configure the firewall zone in NetworkManager and it will set it through foomuuri-firewalld.
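
For example, with NetworkManager you would assign the zone on the connection itself (the connection name here is just an example):

# nmcli connection modify "Wired connection 1" connection.zone public
# nmcli connection up "Wired connection 1"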

Using macros to alias configuration options

Macros can be used to define certain configuration options you want to use multiple times without having to write them completely every time. In practice a lot of macros are already configured which define the configuration for common services. You can see all defined macros by running

# foomuuri list macro

For example the macro imap defines the configuration tcp 143, so that you can just write imap instead of tcp 143 in the configuration. I added a few which were not defined by default in /etc/foomuuri/services.conf:

macro {
	nrpe	tcp 5666
	nmb	udp 137 138 139; tcp 139
}

Macros can also be used to configure common subnets. For example I have a file named /etc/foomuuri/subnets.conf:

macro {
	mysubnet		192.168.0.1/24
	othersubnet		192.168.1.1/24
}

I also use macros to create lists of individual hosts, such as all NFS clients which need to access this NFS server, in /etc/foomuuri/nfs_clients.conf:

macro {
	nfs_clients   192.168.0.1 # web server
	nfs_clients + 192.168.0.2 # gitlab
	nfs_clients + 192.168.0.3 # nextcloud
}

For easy readability, I put every host on its own line, and I add a comment for my own reference. With the + sign I append each subsequent host to the macro.

Firewall for incoming connections

To configure Foomuuri to filter incoming connections to my servers, I create a section public-localhost which contains the firewall rules for traffic coming from the public zone to localhost. I put this in the file /etc/foomuuri/public-localhost.conf:

public-localhost {
  dhcp-server
  ssh
  ping  saddr mysubnet
  nmb   saddr mysubnet
  smb   saddr mysubnet
  nfs   saddr nfs_clients
  nrpe  saddr 192.168.0.5
  drop log
}

My server is acting as a DHCP server, so I use the dhcp-server macro to allow all this traffic, just as I allow all incoming ssh traffic. I allow ping, nmb and smb traffic from mysubnet. Notice that in these rules I use my custom macros nmb and mysubnet. Then I allow nfs from all addresses listed in my macro nfs_clients, and I allow nrpe from a specific IP address. Finally I end with a rule which drops and logs all traffic which has not matched any of the rules before.

Firewall for outgoing connections

I think that filtering outgoing connections is a very effective security hardening measure. In case people with bad intentions get access to your server through a non-root user account, this will severely limit their abilities to move laterally through your network and attack other systems, to run a crypto-miner, or download malware from the Internet.

localhost-public {
  dhcp-client
  nmb uid root
  ntp uid systemd-timesync
  ping uid root daddr mysubnet # dhcpd sometimes pings
  smtp daddr 192.168.0.1 uid postfix
  domain daddr 192.168.0.255 192.168.0.254
  uid root tcp daddr 192.168.0.5 dport 8140 # puppet agent
  uid _apt tcp dport 3142 daddr 192.168.0.6
  uid root ssh daddr 192.168.0.250 # backups
  drop daddr 169.254.169.254 tcp dport 80 # don't fill logs with Puppetlabs facter trying to collect facts from Amazon EC2/Azure
  reject log
}

I allow outgoing connections for different services, and for most services I set the user which can create that connection, and to which host I allow the connection. I explicitly drop without logging connections to 169.254.169.254 port 80, because facter tries to connect to this address every time it runs in order to get some metadata from your cloud service provider. If your system is running on Amazon or Microsoft Azure cloud services, you will probably want to allow this connection instead, so you can then just remove the drop word.
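
You can quickly verify that the outgoing filter works by attempting a connection which does not match any allow rule; thanks to the reject rule the attempt fails immediately instead of timing out (illustrative address and output):

$ curl -m 5 http://203.0.113.10/
curl: (7) Failed to connect to 203.0.113.10 port 80: Connection refused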

In order to log the UID of the process which tried to establish a rejected connection, you can replace the last rule in future Foomuuri versions (starting from version 0.22) by:

reject log log_level "level warn flags skuid"

In the current version 0.21, you can set this globally for all connections. I created /etc/foomuuri/loglevel.conf:

foomuuri {
  log_level "level info flags skuid"
}

Integrating Fail2ban with Foomuuri

I found inspiration for integrating Fail2ban with Foomuuri in issue 9 on the Foomuuri issue tracker.

Create /etc/fail2ban/action.d/foomuuri with these contents:

[Definition]
actionstart =
actionstop  =
actioncheck =
actionban   = /usr/sbin/foomuuri iplist add fail2ban 999d <ip>
actionunban = /usr/sbin/foomuuri iplist del fail2ban <ip>
actionflush = /usr/sbin/foomuuri iplist flush fail2ban

Then set foomuuri as the default banaction by creating /etc/fail2ban/jail.d/foomuuri.conf:

[DEFAULT]
banaction = foomuuri

Then Foomuuri should create the fail2ban iplist. We can configure it to do so by creating /etc/foomuuri/fail2ban.conf:

iplist {
	@fail2ban
}

Then I add this rule as the first rule in the public-localhost section:

  saddr @fail2ban drop log fail2ban drop

This will drop all connections coming from an address in the iplist fail2ban, and will also log them with prefix fail2ban. If you don’t want this to be logged, just remove log fail2ban.
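
You can test the integration by hand with the same iplist commands the Fail2ban action uses, here with an address from a documentation range and assuming the same duration syntax as above:

# foomuuri iplist add fail2ban 1h 192.0.2.10
# foomuuri iplist del fail2ban 192.0.2.10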

To ensure that Foomuuri is started before Fail2ban, so that the fail2ban iplist exists before Fail2ban starts to use it, create /etc/systemd/system/fail2ban.service.d/override.conf:

[Unit]
After=foomuuri.service

After making these changes, first restart Foomuuri and then Fail2ban.

Conclusion

I found Foomuuri easy to use for a system with one network interface. Configuration through the configuration files is easy, also when implementing filtering for outgoing packets. Even though Foomuuri is still a young project, it already has many features and its author is very responsive to discussions and issues on GitHub. I also found the documentation on the wiki very helpful.

I will try to implement Foomuuri in more complex setups in the future, such as on a host for virtual machines whose network interface is bridged to the main network interface of the host, VPN servers, routers, etc…

Finally I want to thank the Foomuuri developer Kim B. Heino and the maintainer of the Debian package Romain Francoise for their work and for making this available to the community.