Apache optimization and mitigating DoS and DDoS attacks

Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks are some of the most common cyberattacks these days. They are fairly easy to execute and the consequences can vary from annoying to very problematic, for example if a crucial web service of a company or public service becomes inaccessible. In the current geopolitical situation, DDoS attacks are a very popular method used by hacktivists.

In this guide I propose a configuration which can help against some of these attacks. I recommend implementing such measures on all web servers. Even if they might not completely prevent an attack, at least they form a basis which you can easily adapt to future attacks. These measures can also help against spammers and scrapers of your website and improve the general performance of your server.

Firewall protection

Blocking connections from known bad IP addresses

We don’t want to waste CPU time on packets coming from known bad IP addresses, so we block them as soon as possible. Follow my earlier article to block known bad IP addresses.

Geoblocking connections

If you need a quick way to mitigate a DoS or DDoS attack you are experiencing, you can consider geoblocking connections in your firewall. For example you could temporarily only allow connections to your web server from IP addresses of your own country if that’s where most of your visitors come from. You can download lists of all IPv4 and IPv6 addresses per country. See the previously mentioned article about using iplists to implement geoblocking with Foomuuri for configuring the firewall.
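
As a rough sketch, an emergency geoblock in Foomuuri’s public-localhost section could look something like this, assuming you have defined iplists @belgium4 and @belgium6 containing all Belgian IPv4 and IPv6 addresses (the same lists are used in the rate limiting example further on). Connections from those lists are accepted, everything else to the web ports is dropped:

http  saddr @belgium4
http  saddr @belgium6
https saddr @belgium4
https saddr @belgium6
http  drop
https drop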

Rate limiting connections

Then we can rate limit new connections per source IP address in our firewall.

In Foomuuri, you can set something like this in the public-localhost section:

http  saddr_rate "3/minute burst 20" saddr_rate_name http_limit
https saddr_rate "3/minute burst 20" saddr_rate_name http_limit
http  drop counter "http_limit" log "http_limit"
https drop counter "http_limit" log "http_limit"

This will allow a remote host to open 20 new connections per source IP to both port 80 and port 443 together, after which new connections will be limited to 3 per minute. Other connection attempts will be dropped, counted and logged with prefix http_limit. You will need to adapt these numbers to what your own server requires. When under attack, I recommend removing the last 2 lines: you don’t want to waste time logging everything. When setting up rules which define limits per IP address, take into account users which are sharing a public IP via NAT, such as on corporate networks.

The above rules still allow one source IP address to have more than 20 simultaneous connections open with your web server, because they only limit the rate at which new connections can be made. If you want to limit the total number of open connections, you can use something like this:

http  saddr_rate "ct count 20" saddr_rate_name http_limit
https saddr_rate "ct count 20" saddr_rate_name http_limit
http  drop counter "http_limit" log "http_limit"
https drop counter "http_limit" log "http_limit"

It is also possible to implement a global connection limit on the http and https ports, irrespective of the source IP address. While this will be very effective against DDoS attacks, you will also block legitimate visitors to your site. If you implement global rate limiting, add rules before it to allow your own IP address, so that you will not be blocked yourself. In case of emergency and if all else fails, you could combine global rate limiting with geoblocking: first create rules which allow connections from certain countries and regions, and after that place the global rate limit rules, which will then only be applied to connections from other countries and regions.

For example:

http  saddr_rate "ct count 20" saddr_rate_name http_limit saddr @belgium4
https saddr_rate "ct count 20" saddr_rate_name http_limit saddr @belgium6
http saddr @belgium4 drop
https saddr @belgium6 drop
http global_rate "10/second burst 20"
https global_rate "10/second burst 20"

If the iplists @belgium4 and @belgium6 contain all Belgian IPv4 and IPv6 addresses, then these rules will allow up to 20 connections per source IP from Belgium. More connections per source IP from Belgium will be dropped. From other parts of the world, we allow a burst of 20 new connections, after which there will be a global limit of 10 new connections per second, irrespective of the source address.

Enable SYN cookies

SYN cookies are a method which helps to mitigate SYN flood attacks, in which a server is flooded with SYN requests. When the backlog queue where SYN requests are stored is full, a SYN cookie will be sent in the server’s SYN + ACK response, instead of creating an entry in the backlog queue. The SYN cookie contains a cryptographic encoding of a SYN entry in the backlog, which the server can reconstruct from the client’s ACK response.

SYN cookies are enabled by default in all Debian Linux kernels because they set CONFIG_SYN_COOKIES=y. If you want to check whether SYN cookies are enabled, run

# sysctl net.ipv4.tcp_syncookies

If the value is 0, you can enable them by running

sysctl -w net.ipv4.tcp_syncookies=1

To make this setting persist after a reboot, create the file /etc/sysctl.d/syncookies.conf with this content:

net.ipv4.tcp_syncookies=1

Configuring Apache to block Denial-of-Service attacks

Tuning Apache and the event MPM

In /etc/apache2/apache2.conf check these settings:

Timeout 60
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5

These are the defaults in recent upstream Apache versions, but they might be different if you are using an older packaged Apache version. Debian also still sets a much higher Timeout value by default. If Timeout and KeepAliveTimeout are set too high, clients can keep a connection open for far too long, filling up the available workers. If you are under attack, you can consider setting the Timeout value even lower and disabling KeepAlive completely. See further down to read how to do the latter automatically with mod_qos.
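
For example, while an attack is ongoing you could temporarily switch to more aggressive values like these (adapt them to your own situation):

Timeout 10
KeepAlive Off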

Make sure you are using the Event MPM in Apache, because it’s the most performant. You can check with this command:

# apache2ctl -M | grep mpm
 mpm_event_module (shared)

Then we need to configure the Event MPM in /etc/apache2/mods-enabled/mpm_event.conf:

StartServers             5
ServerLimit              16
MinSpareThreads          100
MaxSpareThreads          400
ThreadLimit              120
ThreadsPerChild          100
MaxRequestWorkers        1000
MaxConnectionsPerChild   50000

We start 5 (StartServers) child server processes, with each child having up to 100 (ThreadsPerChild) threads to deal with connections. We want 100 to 400 spare threads (MinSpareThreads and MaxSpareThreads) to be available to handle new requests. MaxRequestWorkers sets an upper limit of 1000 requests we can handle simultaneously. ServerLimit defines the maximum number of child server processes. You only need to increase the default value of 16 if MaxRequestWorkers / ThreadsPerChild is higher than 16 (here 1000 / 100 = 10, so the default is fine). ThreadLimit is the maximum value you can set ThreadsPerChild to by just restarting Apache. If you need to increase ThreadsPerChild to a value higher than the current ThreadLimit, you will need to modify ThreadLimit too and stop and start the parent process manually. Don’t set ThreadLimit much higher than ThreadsPerChild, because it increases memory consumption. After 50000 connections, a child process will exit and a new one will be spawned, which is useful in case of memory leaks.

Enabling compression and client caching of content

Enable the deflate and brotli modules in Apache to serve compressed files to clients which support this:

# a2enmod deflate
# a2enmod brotli

Then in /etc/apache2/mods-enabled/deflate.conf put this:

<IfModule mod_filter.c>
	AddOutputFilterByType DEFLATE text/html text/plain text/xml
	AddOutputFilterByType DEFLATE text/css
	AddOutputFilterByType DEFLATE application/x-javascript application/javascript application/ecmascript
	AddOutputFilterByType DEFLATE application/rss+xml
	AddOutputFilterByType DEFLATE application/xml
	AddOutputFilterByType DEFLATE text/json application/json
	AddOutputFilterByType DEFLATE application/wasm
</IfModule>

and in /etc/apache2/mods-enabled/brotli.conf this:

<IfModule mod_filter.c>
	AddOutputFilterByType BROTLI_COMPRESS text/html text/plain text/xml
	AddOutputFilterByType BROTLI_COMPRESS text/css
	AddOutputFilterByType BROTLI_COMPRESS application/x-javascript application/javascript application/ecmascript
	AddOutputFilterByType BROTLI_COMPRESS application/rss+xml
	AddOutputFilterByType BROTLI_COMPRESS application/xml
	AddOutputFilterByType BROTLI_COMPRESS text/json application/json
	AddOutputFilterByType BROTLI_COMPRESS application/wasm
</IfModule>

In order to let browsers cache static files, so that they don’t have to redownload all images if a user comes back, we set some expires headers. First make sure the expires module is enabled:

# a2enmod expires

Then in /etc/apache2/mods-enabled/expires.conf set this:

<IfModule mod_expires.c>
        ExpiresActive On
        ExpiresDefault A120
        ExpiresByType image/x-icon A604800
        ExpiresByType application/x-javascript A604800
        ExpiresByType application/javascript A604800
        ExpiresByType text/css A604800
        ExpiresByType image/gif A604800
        ExpiresByType image/png A604800
        ExpiresByType image/jpeg A604800
        ExpiresByType image/webp A604800
        ExpiresByType image/avif A604800
        ExpiresByType image/x-ms-bmp A604800
        ExpiresByType image/svg+xml A604800
        ExpiresByType image/vnd.microsoft.icon A604800
        ExpiresByType text/plain A604800
        ExpiresByType application/x-shockwave-flash A604800
        ExpiresByType video/x-flv A604800
        ExpiresByType video/mp4 A604800
        ExpiresByType application/pdf A604800
        ExpiresByType application/font-woff A604800
        ExpiresByType font/woff A604800
        ExpiresByType font/woff2 A604800
        ExpiresByType application/vnd.ms-fontobject A604800
        ExpiresByType text/html A120
</IfModule>

This will allow the mentioned MIME types to be cached for a week, while HTML and other files will be cached up to 2 minutes.
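
You can verify that these headers are actually being sent by requesting a static file and inspecting the response headers; the URL below is just a placeholder for a static file on your own site:

$ curl -sI https://example.com/wp-content/uploads/logo.png | grep -iE 'cache-control|expires'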

Modsecurity to block web attacks

Configure ModSecurity with the OWASP Core Rule Set in order to avoid wasting resources on web attacks. Block old HTTP versions: this will already block some stupid security scanners and bots.
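
Blocking old HTTP versions can be done with the Core Rule Set: rule 900230 in crs-setup.conf defines which HTTP versions are allowed, as described in more detail later in this article. For example, to only allow HTTP/1.1 and HTTP/2:

SecAction \
 "id:900230,\
  phase:1,\
  nolog,\
  pass,\
  t:none,\
  setvar:'tx.allowed_http_versions=HTTP/1.1 HTTP/2 HTTP/2.0'"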

mod_reqtimeout to mitigate Slowloris attacks

The Slowloris attack is an easy attack in which the attacker opens many connections to your web server, but intentionally slows down completing its requests. Your server’s worker threads get busy waiting for completion of the requests, which never happens. This way, one client can occupy all worker threads of the web server without using a lot of bandwidth.

Debian has enabled protection in Apache by default by means of Apache mod_reqtimeout. The configuration can be found in /etc/apache2/mods-enabled/reqtimeout.conf:

RequestReadTimeout header=20-40,minrate=500
RequestReadTimeout body=10,minrate=500

The first line will limit the wait time until the first byte of the request line is received to 20 seconds, after that it will require at least 500 bytes per second, with a total limit of 40 seconds until all request headers are received.

The second line limits the wait time until the first byte of the request body is received to 10 seconds, after which 500 bytes per second are required.

If a connection is slower than the above requirements, it will be closed by the server.

mod_qos to limit requests on your server

First an important caveat: the libapache2-mod-qos package included in Debian 12 Bookworm is broken. If you install it, your Apache web server will crash at startup. You can find a fixed libapache2-mod-qos package for Debian 12 in my bookworm-frehi repository.

Enable the module with

# a2enmod qos

Then we set some configuration options in /etc/apache2/mods-enabled/qos.conf:

# allows keep-alive support till the server reaches 80% of all MaxRequestWorkers
QS_SrvMaxConnClose 80%

# don't allow a single client to open more than 10 TCP connections if
# the server has more than 600 connections:
QS_SrvMaxConnPerIP 10 600

# minimum data rate (bytes/sec) when the server
# has 100 or more open TCP connections:
QS_SrvMinDataRate 1000 32000 100

# disables connection restrictions for certain clients:
QS_SrvMaxConnExcludeIP 192.0.2.
QS_SrvMaxConnExcludeIP 2001:db8::2

QS_ClientEntries 50000

We disable keepalive as soon as we reach 80% of MaxRequestWorkers, so that threads are not occupied any more by keepalive requests. You can also use an absolute number instead of a percentage if you prefer that. As soon as we reach 600 simultaneous connections, we only allow 10 connections per IP address. As soon as there are more than 100 connections, we require a minimum data rate of 1000 bytes per second. The required transfer rate is increased linearly up to 32000 bytes per second when the number of connections reaches the value of MaxRequestWorkers. The QS_ClientEntries setting defines how many different IP addresses mod_qos keeps track of. By default it’s 50000. On very busy servers you will need to increase this, but keep in mind that this increases memory usage. Use QS_SrvMaxConnExcludeIP to exclude certain IP addresses from these limitations.

Then we want to limit the number of requests a client can make. We focus on the resource intensive requests. Very often attackers will abuse the search form, because this not only stresses your web server, but also your database and your web application itself. If one of them, or the combination of these three, can’t cope with the requests, your application will be knocked offline. Other forms, such as contact forms or authentication forms, are also often targeted.

First I add this to the VirtualHost I want to protect:

RewriteEngine on
RewriteCond "%{REQUEST_URI}" "^/$"
RewriteCond "%{QUERY_STRING}" "s="
RewriteRule ^ - [E=LimitWPSearch] 
                
RewriteCond "%{REQUEST_URI}" "^/wp-login.php$"
RewriteCond "%{REQUEST_METHOD}" "POST"
RewriteRule ^ - [E=LimitWPLogin]

RewriteCond "%{REQUEST_URI}" "^/wp-json/"
RewriteRule ^ - [E=LimitWPJson]

RewriteCond "%{REQUEST_URI}" "^/wp-comments-post.php"
RewriteRule ^ - [E=LimitWPComment]

RewriteCond "%{REQUEST_URI}" "^/wp-json/contact-form-7/"
RewriteCond "%{REQUEST_METHOD}" "POST"
RewriteRule ^ - [E=LimitWPFormPost]

RewriteCond "%{REQUEST_URI}" "^/xmlrpc.php"
RewriteRule ^ - [E=LimitWPXmlrpc]

RewriteCond "%{REQUEST_METHOD}" "POST"
RewriteRule ^ - [E=LimitWPPost]

RewriteCond "%{REQUEST_URI}" ".*"
RewriteRule ^ - [E=LimitWPAll]

The website I want to protect runs on WordPress. I use mod_rewrite to set a variable when certain requests are made, for example LimitWPSearch when a search is done, LimitWPLogin when data is submitted to the login form, LimitWPJson when something in /wp-json/ is accessed, LimitWPComment when a comment is submitted, LimitWPXmlrpc when /xmlrpc.php is accessed, LimitWPFormPost when a form created with the Contact Form 7 plugin is posted, LimitWPPost when anything is sent via POST, and LimitWPAll on any request. A request can trigger multiple rules.

Then we go back to our global server configuration to define the exact values for these limits. You can again do this in /etc/apache2/mods-enabled/qos.conf or /etc/apache2/conf-enabled/qos.conf or something like that:

QS_ClientEventLimitCount 20 40 LimitWPPost
QS_ClientEventLimitCount 400 120 LimitWPAll
QS_ClientEventLimitCount 5 60 LimitWPSearch
QS_ClientEventLimitCount 50 600 LimitWPJson
QS_ClientEventLimitCount 10 3600 LimitWPLogin
QS_ClientEventLimitCount 3 60 LimitWPComment
QS_ClientEventLimitCount 6 60 LimitWPXmlrpc
QS_ClientEventLimitCount 3 60 LimitWPFormPost

For example, when a client hits the LimitWPSearch rule 5 times in 60 seconds, the server will return an HTTP error code. In practice, this means that a client can do up to 4 successful searches per minute. You will have to adapt these settings to your own web applications. In case you get hit by an attack, you can easily adapt the values as necessary or add new limits.

Using Fail2ban to block attackers

Now I want to go further and block offenders of my QOS rules for a longer time in my firewall with Fail2ban.

Create /etc/fail2ban/filter.d/apache-mod_qos.conf:

[INCLUDES]

# Read common prefixes. If any customizations available -- read them from
# apache-common.local
before = apache-common.conf


[Definition]


failregex = mod_qos\(\d+\): access denied,.+\sc=<HOST>

ignoreregex = 

Then create /etc/fail2ban/jail.d/apache-mod_qos.conf:

[apache-mod_qos]

port         = http,https
backend      = pyinotify
journalmatch = 
logpath      = /var/log/apache2/*error.log
               /var/log/apache2/*/error.log
maxretry     = 3
enabled      = true

Make sure you have the package python3-pyinotify installed.
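
If it is not installed yet:

# apt install python3-pyinotify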

Note that a stateful firewall will by default only block new connections, and you might still see some violations over the existing connection even after an IP is banned by Fail2ban.

Optimizing your web applications

Databases (MariaDB, MySQL, PostgreSQL)

Ideally you should run your DBMS on a separate server. If you run MariaDB (or MySQL), you can use mysqltuner to get automatic performance tuning recommendations.
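
For example, mysqltuner is packaged in Debian and can simply be installed and run on the database server (depending on your setup it may ask for database credentials):

# apt install mysqltuner
# mysqltuner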

Generic configuration recommendations for PostgreSQL can be created on PgTune.

WordPress

If you are using WordPress, I recommend setting up the Autoptimize, WebP Express, Avif Express and WP Super Cache modules. Autoptimize can aggregate and minify your HTML, CSS and Javascript files, reducing bandwidth usage and reducing the number of requests needed to load your site. WebP Express and Avif Express will automatically convert your JPEG, GIF and PNG images to the more efficient WebP and AVIF formats, which again reduces bandwidth.

WP Super Cache can cache your pages, so that they don’t have to be dynamically generated for every request. I strongly recommend that in the settings of WP Super Cache, in the Advanced page, you choose the Expert cache delivery method. You will need to set up some rewrite rules in your .htaccess file. In this mode, Apache will directly serve the cached pages without any involvement of PHP. You can easily check this by stopping the php8.2-fpm service and visiting your website. The cached pages will just load.
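
To give an idea of how Expert mode works, this is a simplified sketch of the kind of mod_rewrite rules involved. The actual rules that WP Super Cache writes into your .htaccess are more elaborate (they also handle HTTPS, mobile and gzipped variants), so use what the plugin generates rather than this sketch:

<IfModule mod_rewrite.c>
	RewriteEngine On
	RewriteBase /
	# Only serve cached pages for plain GET requests without a query string
	RewriteCond %{REQUEST_METHOD} GET
	RewriteCond %{QUERY_STRING} ^$
	# Never serve cached pages to logged-in users or commenters
	RewriteCond %{HTTP_COOKIE} !(comment_author_|wordpress_logged_in|wp-postpass_) [NC]
	# If a pre-generated static copy exists, serve it directly without invoking PHP
	RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/supercache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
	RewriteRule ^(.*)$ /wp-content/cache/supercache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
</IfModule>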

Drupal

I’m not really a Drupal specialist, but at least make sure that caching and the bandwidth optimization options are enabled in Administration – Configuration – Performance.

Mediawiki

There are some performance tuning tips for Mediawiki in the documentation.
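
As an illustration only (check the MediaWiki performance documentation for the full picture), one commonly recommended change is enabling an APCu-backed object cache in LocalSettings.php:

$wgMainCacheType = CACHE_ACCEL;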

Testing

I like to use h2load to do benchmarking of web servers. It’s part of the nghttp2-client package in Debian:

# apt install nghttp2-client

Ideally, you run benchmarks before making any of the above changes, and you retest after each modification to see the effect. When testing, run the benchmarks from a host where it does not matter if it gets locked out. It can be a good idea to stop Fail2ban, because it can get annoying if you are blocked by the firewall. You can also run these benchmarks on the host itself, bypassing the firewall rules.

First we test our mod_qos rule which limits the amount of searches we can do:

# h2load -n8 -c1 https://example.com/?s=test
starting benchmark...
spawning thread #0: 1 total client(s). 8 total requests
TLS Protocol: TLSv1.3
Cipher: TLS_AES_128_GCM_SHA256
Server Temp Key: X25519 253 bits
Application protocol: h2
progress: 12% done
progress: 25% done
progress: 37% done
progress: 50% done
progress: 62% done
progress: 75% done
progress: 87% done
progress: 100% done

finished in 277.41ms, 28.84 req/s, 544.07KB/s
requests: 8 total, 8 started, 8 done, 4 succeeded, 4 failed, 0 errored, 0 timeout
status codes: 4 2xx, 0 3xx, 4 4xx, 0 5xx
traffic: 150.93KB (154553) total, 636B (636) headers (space savings 83.13%), 150.03KB (153628) data
                     min         max         mean         sd        +/- sd
time for request:     2.97ms    210.18ms     32.38ms     71.98ms    87.50%
time for connect:    16.45ms     16.45ms     16.45ms         0us   100.00%
time to 1st byte:   226.50ms    226.50ms    226.50ms         0us   100.00%
req/s           :      28.98       28.98       28.98        0.00   100.00%

You can see that 4 of these requests succeeded (status code 2xx) and the other 4 failed (status code 4xx). This is exactly what we configured: starting from the 5th request in a period of 60 seconds, we don’t allow searches. In Apache’s error log we see:

[qos:error] [pid 1473291:tid 1473481] [remote 2001:db8::2] mod_qos(067): access denied, QS_ClientEventLimitCount rule: event=LimitWPSearch, max=5, current=5, age=0, c=2001:db8::2, id=Z4PiOQcCdoj4ljQCvAkxPwAAsD8

We can check the connection rate limit we have configured in our firewall by benchmarking the creation of more connections than we allowed in the burst value:

# h2load -n10000 -c50 https://example.com/
starting benchmark...
spawning thread #0: 50 total client(s). 10000 total requests
TLS Protocol: TLSv1.3
Cipher: TLS_AES_128_GCM_SHA256
Server Temp Key: X25519 253 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 148.78s, 67.22 req/s, 695.57KB/s
requests: 10000 total, 10000 started, 10000 done, 798 succeeded, 9202 failed, 0 errored, 0 timeout
status codes: 798 2xx, 0 3xx, 9202 4xx, 0 5xx
traffic: 101.06MB (105968397) total, 181.31KB (185665) headers (space savings 95.80%), 100.66MB (105550608) data
                     min         max         mean         sd        +/- sd
time for request:    19.47ms    649.75ms     50.94ms     23.46ms    86.23%
time for connect:   127.26ms      69.74s      25.28s      26.08s    80.00%
time to 1st byte:   211.04ms      69.79s      25.34s      26.06s    80.00%
req/s           :       1.34       17.91        6.75        5.79    80.00%

As you can see, the “time for connect” went up to 69.74 seconds, because connections were being dropped due to the limit set in Foomuuri.

If logging of these connections is enabled in the firewall, we can see this:

http_limit IN=ens3 OUT= MAC=aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa SRC=2001:db8::2 DST=2001:db8::3 LEN=80 TC=0 HOPLIMIT=45 FLOWLBL=327007 PROTO=TCP SPT=32850 DPT=443 WINDOW=64800 RES=0x00 SYN URGP=0

Also mod_qos kicks in, which results in only 798 succeeded requests, while the others were blocked.

Conclusion

DoS and DDoS attacks by hacktivist groups are very common and happen on a daily basis, often kicking websites of companies and public services offline. Unfortunately, there is not a lot of practical advice on the World Wide Web about how to mitigate these attacks. I hope this article provides some basic insight.

The proposed measures should protect well against a simple DoS attack. How well they protect against a DDoS attack is more difficult to judge. They will certainly make such attacks harder, but the larger the botnet executing the attack, the more difficult it becomes to mitigate it at the server level. If the attackers manage to saturate your bandwidth, there is nothing you can do on the server and you will need measures at the network level to prevent the traffic from hitting your server at all.

In any case, the proposed configuration should provide a basis for mitigating some attacks. You can easily adapt the rules to the attacks you are experiencing. I recommend implementing and testing this configuration before you experience a real attack.

Sources and more information

Frehi Debian package repository for Bookworm

While creating AppArmor profiles, I recently encountered a few problems with the AppArmor packages in Debian 12 Bookworm. If you use a more recent Linux kernel than the one which is in Bookworm (Linux 6.1 from Bookworm works fine), apparmor_parser can hang on certain profiles and cause a null pointer dereference in the kernel. This bug is also being tracked as upstream bug 346 and a partial fix has been committed to the AppArmor git repository. Another problem I encountered is that aa-logprof and aa-genprof would completely ignore any exec events from within a subprofile, because these tools don’t support nested profiles. An AppArmor developer created a merge request which at least shows these events in aa-genprof and aa-logprof, and gives you the option to inherit the profile or run the new process unconfined. If you want to create a child profile, you will still have to do this manually, but at least other valid options are now available.

I also recently stumbled on the package libapache2-mod-qos which is completely broken in Debian Bookworm: it is built against an older libpcre version which conflicts with the one Apache is using, causing it to crash immediately at startup. The bug is fixed in Debian trixie/sid, but that does not help users of the stable Debian release.

So I decided to build Apparmor 3.0.12 from sid with the additional patches mentioned above for Debian Bookworm, as well as the new libapache2-mod-qos which fixes the crash at Apache startup. I have created a public repository you can use if you are interested in these fixes. The packages work for me, but I cannot guarantee that they won’t cause any problem for you, so use them at your own risk. I only build for AMD64, so other architectures are not available.

Setting up the bookworm-frehi repository on Debian

In order to use these packages, create a file /etc/apt/sources.list.d/bookworm-frehi.list with this content:

deb http://debian.frehi.be/debian bookworm-frehi main contrib non-free
deb-src http://debian.frehi.be/debian bookworm-frehi main contrib non-free

You can also use https in case you prefer that, but I try to use http because then I can cache packages with apt-cacher-ng.

Then create a file /etc/apt/preferences.d/bookworm-frehi:

Package: *
Pin: release n=bookworm-frehi
Pin-Priority: 99

This makes sure that by default you will still be using packages from the Debian repository, and it will only use packages from this repository when you explicitly request to do so.

Then you will have to fetch the public GPG key from pgp.surf.nl and add it to your trusted apt keys:

$ export GNUPGHOME="$(mktemp -d)"
$ gpg --keyserver pgp.surf.nl --recv-keys 1FBBAB8D2CA17863
$ gpg --export "1FBBAB8D2CA17863" > /tmp/bookworm-frehi.gpg
# mv /tmp/bookworm-frehi.gpg /etc/apt/trusted.gpg.d/
# rm -rf $GNUPGHOME

Now run:

# apt update

and you can use the repository, for example:

# apt-cache policy apparmor
# apt-cache policy libapache2-mod-qos
# apt install -t bookworm-frehi apparmor libapache2-mod-qos

Increasing PHP security with Snuffleupagus

In a previous article, I discussed how to set up ModSecurity with the Core Rule Set on Debian. This can be considered a first line of defense against malicious HTTP traffic. In a defense in depth strategy, we of course want to add additional layers of protection to our web servers. One such layer is Snuffleupagus. Snuffleupagus is a PHP module which protects your web applications against various attacks. Some of the hardening features it offers are encryption of cookies, disabling XML External Entity (XXE) processing, a whitelist or blacklist of functions which can be used in eval(), and the possibility to selectively disable PHP functions with specific arguments (virtual patching).

Installing Snuffleupagus on Debian

Unfortunately there is no package for Snuffleupagus included in Debian, but it is not too difficult to build one yourself:

# apt install php-dev
$ mkdir snuffleupagus
$ cd snuffleupagus
$ git clone https://github.com/jvoisin/snuffleupagus
$ cd snuffleupagus
$ make debian

This will build the latest development code from the master branch. If you want to build the latest stable release, before running make debian, use these commands to view all tags and to check out the latest stable tag, which in this case was v0.8.2:

$ git tag
$ git checkout v0.8.2

If all went well, you should now have a file snuffleupagus_0.8.2_amd64.deb in the parent directory, which you can install:

$ cd ..
# apt install ./snuffleupagus_0.8.2_amd64.deb

Configuring Snuffleupagus

First we take the example configuration file and put it in PHP’s configuration directory. For example for PHP 7.4:

# zcat /usr/share/doc/snuffleupagus/examples/default.rules.gz > /etc/php/7.4/snuffleupagus.rules

Also take a look at the config subdirectory in the source tree for more example rules.

Edit the file /etc/php/7.4/fpm/conf.d/20-snuffleupagus.ini so that it looks like this:

extension=snuffleupagus.so
sp.configuration_file=/etc/php/7.4/snuffleupagus.rules

Now we will edit the file /etc/php/7.4/snuffleupagus.rules.

We need to set a secret key, which will be used for various cryptographic features:

sp.global.secret_key("YOU _DO_ NEED TO CHANGE THIS WITH SOME RANDOM CHARACTERS.");

You can generate a random key with this shell command:

$ echo $(head -c 512 /dev/urandom | tr -dc 'a-zA-Z0-9')

Simulation mode

Snuffleupagus can run rules in simulation mode. In this mode, the rule will not block further execution of the PHP file, but will just output a warning message in your log. Unfortunately there is no global simulation mode, but it has to be set per rule. You can run a rule in simulation mode by appending .simulation() to it. For example to run INI protection in simulation mode:

sp.ini_protection.simulation();

INI protection

To prevent PHP applications from modifying php.ini settings, you can set this in snuffleupagus.rules:

sp.ini_protection.enable();
sp.ini_protection.policy_readonly();

Cookie protection

The following configuration options set the SameSite attribute to Lax on session cookies, which offers protection against CSRF via this cookie. We enforce setting the secure option on cookies, which instructs the web browser to only send them over an encrypted HTTPS connection, and we also enable encryption of the session content on the server. The encryption key being used is derived from the value of the global secret key you have set, the client’s user agent and the environment variable SSL_SESSION_ID.

sp.cookie.name("PHPSESSID").samesite("lax");

sp.auto_cookie_secure.enable();
sp.global.cookie_env_var("SSL_SESSION_ID");
sp.session.encrypt();

Note that the definition of cookie_env_var needs to happen before sp.session.encrypt(); which enables the encryption.

You have to make sure the variable SSL_SESSION_ID is passed to PHP. In Apache you can do so by having this in your virtualhost:

<FilesMatch "\.(cgi|shtml|phtml|php)$">
    SSLOptions +StdEnvVars
</FilesMatch>

eval white- or blacklist

eval() is used to evaluate PHP content, for example in a variable. This is very dangerous if the PHP code to be evaluated can contain user provided data. Therefore it is strongly recommended that you create a whitelist of functions which can be called by code evaluated by eval().

Start by putting this in snuffleupagus.rules and restart PHP:

sp.eval_whitelist.list().simulation();

Then test your websites and see which errors you get in the logs, and add them separated by commas to the eval_whitelist.list(). After that you need to remove .simulation() and restart PHP in order to activate this protection. For example

sp.eval_whitelist.list("array_pop,array_push");

You can also use a blacklist, which only blocks certain functions. For example:

sp.eval_blacklist.list("system,exec,shell_exec,proc_open");

Limit execution to read-only PHP files

The read_only_exec() feature of Snuffleupagus will prevent PHP from executing PHP files on which the PHP process has write permissions. This will block any attack where an attacker manages to upload a malicious PHP file via a bug in your website and then attempts to execute this malicious PHP script.

It is a good practice to let your PHP scripts be owned by a different user than the PHP user, and give PHP only read-only permissions on your PHP files.
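
A minimal sketch of such a permission scheme, assuming your site lives in /var/www/example, PHP-FPM runs as www-data and the (hypothetical) user deploy owns the code:

# chown -R deploy:www-data /var/www/example
# find /var/www/example -type d -exec chmod 750 {} \;
# find /var/www/example -type f -exec chmod 640 {} \;

Directories that legitimately need to be writable by PHP, such as an uploads directory, then get write permissions for www-data again; the readonly_exec protection will prevent PHP files placed there from being executed.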

To test this feature, add this to snuffleupagus.rules:

sp.readonly_exec.simulation();

If you are sure all goes well, enable it:

sp.readonly_exec.enable();

Virtual patching

One of the main features of Snuffleupagus is virtual patching. This feature will disable functions depending on the parameters and values they are given. The example rules file contains a good set of generic rules which block all kinds of dangerous behaviour. You might need to fine-tune the rules if your PHP applications hit certain rules.

Some examples of virtual-patching rules:

sp.disable_function.function("chmod").param("mode").value("438").drop();
sp.disable_function.function("chmod").param("mode").value("511").drop();

These rules will drop calls to the chmod function with the decimal values 438 and 511, which correspond to the dangerous octal permissions 0666 and 0777.

sp.disable_function.function("include_once").value_r(".(inc|phtml|php)$").allow();
sp.disable_function.function("include_once").drop();

These two rules will only allow the include_once function to include files whose file names end in inc, phtml or php. All other include_once calls will be dropped.

Using generate_rules.php to automatically generate site-specific hardening rules

In the scripts subdirectory of the Snuffleupagus source tree, there is a file named generate_rules.php. You can run this script from the command line, giving it a path to a directory with PHP files, and it will automatically generate rules which specifically allow all needed dangerous function calls, and then disable them globally. For example, to generate rules for the /usr/share/tt-rss/www and /var/www directories:

# php generate_rules.php /usr/share/tt-rss/www/ /var/www/

This will generate rules:

sp.disable_function.function("function_exists").filename("/usr/share/tt-rss/www/api/index.php").hash("fa02a93e2724d7e818c5c13f4ba8b110c47bbe7fb65b74c0aad9cff2ed39cf7d").allow();
sp.disable_function.function("function_exists").filename("/usr/share/tt-rss/www/classes/pref/prefs.php").hash("43926a95303bc4e7adefe9d2f290dd8b66c9292be836908081e3f2bd8a198642").allow();
sp.disable_function.function("function_exists").drop();

The first two rules allow these two files to call function_exists and the last rule drops all calls to function_exists from any other files. Note that the first two rules not only match on the specified file name, but also on the SHA256 hash of the file. This way, if the file is changed, the function call will be dropped. This is the safest way, but it can be annoying if the files are often or automatically updated, because it will break the site. In this case, you can call generate_rules.php with the --without-hash option:

# php generate_rules.php --without-hash /usr/share/tt-rss/www/ /var/www/

After you have generated the rules, you will have to add them to your snuffleupagus.rules file and restart PHP-FPM.

File Upload protection

The default Snuffleupagus rule file contains 2 rules which will block any attempt to upload an HTML or PHP file. However, I noticed that they were not working with PHP 7.4 and these rules would cause this error message:

PHP Warning: [snuffleupagus][0.0.0.0][config][log] It seems that you are filtering on a parameter 'destination' of the function 'move_uploaded_file', but the parameter does not exists. in /var/www/html/foobar.php on line 15PHP message: PHP Warning: [snuffleupagus][0.0.0.0][config][log] - 0 parameter's name: 'path' in /var/www/html/foobar.php on line 15PHP message: PHP Warning: [snuffleupagus][0.0.0.0][config][log] - 1 parameter's name: 'new_path' in /var/www/html/foobar.php on line 15'

The Snuffleupagus rules use the parameter name destination for move_uploaded_file instead of new_path. You will have to change the rules like this:

sp.disable_function.function("move_uploaded_file").param("new_path").value_r("\.ph").drop();<br />sp.disable_function.function("move_uploaded_file").param("new_path").value_r("\.ht").drop();

Note that on PHP 8, the parameter name is to instead of new_path.

Enabling Snuffleupagus

To enable Snuffleupagus in PHP 7.4, link the configuration file to /etc/php/7.4/fpm/conf.d:

# cd /etc/php/7.4/fpm/conf.d
# ln -s ../../mods-available/snuffleupagus.ini 20-snuffleupagus.ini
# systemctl restart php7.4-fpm

After restarting PHP-FPM, always check the logs to see whether Snuffleupagus gives any warnings or error messages, for example because of a syntax error in your configuration:

# journalctl -u php7.4-fpm -n 50

Snuffleupagus logs

By default Snuffleupagus logs via PHP. If you are using Apache with PHP-FPM, you will then find the Snuffleupagus logs, just like any PHP warnings and errors, in the Apache error log, for example /var/log/apache2/error.log. If you encounter any problems with your website, check this log to see what is wrong.

Snuffleupagus can also be configured to log via syslog, and this is actually recommended, because PHP’s logging system can be manipulated at runtime by malicious scripts. To log via syslog, add this to snuffleupagus.rules:

sp.log_media("syslog");

I give a few examples of errors you can encounter in the logs and how to fix them:

[snuffleupagus][0.0.0.0][xxe][log] A call to libxml_disable_entity_loader was tried and nopped in /usr/share/tt-rss/www/include/functions.php on line 22

tt-rss calls the function libxml_disable_entity_loader but this is blocked by the XXE protection. Commenting out this rule in snuffleupagus.rules should fix this:

sp.xxe_protection.enable();

Another example:

[snuffleupagus][0.0.0.0][disabled_function][drop] Aborted execution on call of the function 'ini_set', because its argument '$varname' content (display_errors) matched a rule in /usr/share/tt-rss/www/include/functions.php on line 37'

Modifying the PHP INI option display_errors is not allowed by this rule:

sp.disable_function.function("ini_set").param("varname").value_r("display_errors").drop();

You can completely remove (or comment out) this rule in order to disable it. But a better way is to add a rule before it which allows it specifically for that PHP file. So add this rule before it:

sp.disable_function.function("ini_set").filename("/usr/share/tt-rss/www/include/functions.php").param("varname").value_r("display_errors").allow();

If you get something like this:

[snuffleupagus][0.0.0.0][disabled_function][drop] Aborted execution on call of the function 'function_exists', because its argument '$function_name' content (exec) matched a rule in /var/www/wordpress/wp-content/plugins/webp-express/vendor/rosell-dk/exec-with-fallback/src/ExecWithFallback.php on line 35', referer: wp-admin/media-new.php

It’s caused by this rule:

sp.disable_function.function("function_exists").param("function_name").value("exec").drop();

You can add this rule before to allow this:

sp.disable_function.function("function_exists").filename("/var/www/wordpress/wp-admin/media-new.php").param("function_name").value("exec").allow();

More information

Snuffleupagus documentation

Snuffleupagus on Github

Julien Voisin blog archives

Web application firewall: Modsecurity and Core Rule Set

A web application firewall (WAF) filters HTTP traffic. By integrating this in your web server, you can make sure potentially dangerous requests are blocked before they reach your web application, and that sensitive data does not leak out of your web server. This way you add an extra defensive layer, potentially offering extra protection against zero-day vulnerabilities in your web server or web applications. In this blog post, I give a tutorial on how to install and configure the ModSecurity web application firewall and the Core Rule Set on Debian. With some minor adaptations you can also use this guide for setting up ModSecurity on Ubuntu or other distributions.

ModSecurity is the most well-known open source web application firewall. The future of ModSecurity does not look too bright, but fortunately Coraza WAF, an alternative which is completely compatible with ModSecurity, is in development. At this moment Coraza only integrates with the Caddy web server and does not have a connector for Apache or Nginx, so for that reason it is currently not yet usable as a replacement for ModSecurity.

While ModSecurity provides the framework for filtering HTTP traffic, you also need rules which define what to block, and that’s where the Core Rule Set (CRS) comes in. CRS is a set of generic rules which offer protection against a wide range of common attacks via HTTP, such as SQL injection, code injection and cross-site scripting (XSS) attacks.

Install ModSecurity and the Core Rule Set on Debian

I install the Apache module for ModSecurity, the geoip-database, which can be used for blocking all requests from certain countries, and modsecurity-crs, which contains the Core Rule Set. I take this package from testing, because it has a newer version (version 3.3.2 at the time of writing). There is no risk in taking this package from testing, because it only contains the rules and does not depend on any other packages from testing/unstable. If you prefer faster updates, you can also use unstable.

# apt install libapache2-mod-security2 geoip-database
# apt install -t testing modsecurity-crs

Configuring ModSecurity

In order to load the ModSecurity module in Apache, run this command:

# a2enmod security2

Then copy the example ModSecurity configuration file to /etc/modsecurity/modsecurity.conf:

# cp /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf

Now edit /etc/modsecurity/modsecurity.conf. I highlight some of the options:

SecRuleEngine on
SecRequestBodyLimit 536870912
SecRequestBodyNoFilesLimit 131072
SecAuditLog /var/log/apache2/modsec_audit.log
#SecRule MULTIPART_UNMATCHED_BOUNDARY "!@eq 0" \
#"id:'200004',phase:2,t:none,log,deny,msg:'Multipart parser detected a possible unmatched boundary.'"
SecPcreMatchLimit 500000
SecPcreMatchLimitRecursion 500000
SecStatusEngine Off

The SecRuleEngine option controls whether rules should be processed. If set to Off, you completely disable all rules; with On you enable them and ModSecurity will block malicious actions. If set to DetectionOnly, ModSecurity will only log potential malicious activity flagged by your rules, but will not block it. DetectionOnly can be useful for temporarily trying out the rules in order to find false positives before you really start blocking potential malicious activity.

The SecAuditLog option defines a file which contains audit logs. This file will contain detailed logs about every request triggering a ModSecurity rule.

The SecPcreMatchLimit and SecPcreMatchLimitRecursion options set the match limit and match limit recursion for the regular expression library PCRE. Setting these high enough will prevent errors that the PCRE limits were exceeded while analyzing data, but setting them too high can make ModSecurity vulnerable to a Denial of Service (DoS) attack. A Core Rule Set developer recommends a value of 500000, so that’s what I use here.

I change SecRequestBodyLimit to a higher value to allow large file uploads.

I disable the rule 200004 because it is known to cause false positives.

Set SecStatusEngine to Off to prevent ModSecurity from sending version information back to its developers.

After changing any configuration related to ModSecurity or the Core Rule Set, reload your Apache web server:

# systemctl reload apache2

Configuring the Core Rule Set

The Core Rule Set can be configured via the file /etc/modsecurity/crs/crs-setup.conf.

Anomaly Scoring

By default the Core Rule Set is using anomaly scoring mode. This means that individual rules add to a so called anomaly score, which at the end is evaluated. If the anomaly score exceeds a certain threshold, then the traffic is blocked. You can read more about this configuration in crs-setup.conf but the default configuration should be fine for most people.
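
If you do want to tune the thresholds, they are set by rule 900110 in crs-setup.conf; the values below are the defaults (5 for inbound traffic, 4 for outbound), so you only need to uncomment and adjust this block if you want stricter or looser blocking:

SecAction \
 "id:900110,\
  phase:1,\
  nolog,\
  pass,\
  t:none,\
  setvar:tx.inbound_anomaly_score_threshold=5,\
  setvar:tx.outbound_anomaly_score_threshold=4"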

Setting the paranoia level

The paranoia level is a number from 1 to 4 which determines which rules are active and contribute to the anomaly scoring. The higher the paranoia level, the more rules are activated and hence the more aggressive the Core Rule Set is, offering more protection but potentially also causing more false positives. By default the paranoia level is set to 1. If you work with sensitive data, it is recommended to increase the paranoia level.

The executing paranoia level defines rules which will be executed but whose score will not be added to the anomaly scoring. When HTTP traffic hits rules of the executing paranoia level, this traffic will only be logged, not blocked. It is especially useful to prepare for increasing the paranoia level and finding false positives on this higher level, without causing any disruption for your users.

To set the paranoia level to 1 and the executing paranoia level to 2, make sure you have these rules set in crs-setup.conf:

SecAction \
  "id:900000,\
   phase:1,\
   nolog,\
   pass,\
   t:none,\
   setvar:tx.paranoia_level=1"
SecAction \
  "id:900001,\
   phase:1,\
   nolog,\
   pass,\
   t:none,\
   setvar:tx.executing_paranoia_level=2"

Once you have fixed all false positives, you can raise the paranoia level to 2 to increase security.

Defining the allowed HTTP methods

By default the Core Rule Set only allows the GET, HEAD, POST and OPTIONS HTTP methods. For many standard sites this will be enough but if your web applications also use restful APIs or WebDAV, then you will need to add the required methods. Change rule 900200, and add the HTTP methods mentioned in the comments in crs-setup.conf.

SecAction \
 "id:900200,\
  phase:1,\
  nolog,\
  pass,\
  t:none,\
  setvar:'tx.allowed_methods=GET HEAD POST OPTIONS'"

Disallowing old HTTP versions

There is a rule which determines which HTTP versions you allow in HTTP requests. I uncomment it and modify it to only allow HTTP versions 1.1 and 2.0. Legitimate browsers and bots always use one of these modern HTTP versions and older versions usually are a sign of malicious activity.

SecAction \
 "id:900230,\
  phase:1,\
  nolog,\
  pass,\
  t:none,\
  setvar:'tx.allowed_http_versions=HTTP/1.1 HTTP/2 HTTP/2.0'"

Blocking specific countries

Personally I’m not a fan of completely blocking all traffic from a whole country, because you will also block legitimate visitors to your site, but in case you want to do this, you can configure it in crs-setup.conf:

SecGeoLookupDB /usr/share/GeoIP/GeoIP.dat
SecAction \
 "id:900600,\
  phase:1,\
  nolog,\
  pass,\
  t:none,\
  setvar:'tx.high_risk_country_codes='"

Add the two-letter country codes you want to block to the last line (before the two quotes), multiple country codes separated by a space.

Make sure you have the package geoip-database installed.

Core Rule Set Exclusion rules for well-known web applications

The Core Rule Set contains some rule exclusions for some well-known web applications like WordPress, Drupal and NextCloud, which reduce the number of false positives. I add the following section to crs-setup.conf, which allows me to enable the exclusions by setting the WEBAPPID variable in the Apache configuration whenever I need them.

SecRule WEBAPPID '@beginsWith wordpress' 'id:20000,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_wordpress=1'
SecRule WEBAPPID '@beginsWith drupal' 'id:20001,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_drupal=1'
SecRule WEBAPPID '@beginsWith dokuwiki' 'id:20002,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_dokuwiki=1'
SecRule WEBAPPID '@beginsWith nextcloud' 'id:20003,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_nextcloud=1'
SecRule WEBAPPID '@beginsWith cpanel' 'id:20004,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_cpanel=1'
SecRule WEBAPPID '@beginsWith xenforo' 'id:20005,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_xenforo=1'

Adding rules for Log4Shell and Spring4Shell detection

At the end of 2021 a critical vulnerability, CVE-2021-44228, named Log4Shell, was discovered in Log4j, which allows remote attackers to run code on a server with a vulnerable Log4j version. While the Core Rule Set offered some mitigation of this vulnerability out of the box, this protection was not complete. New, improved detection rules against Log4Shell were developed. Because of the severity of this bug and the fact that it’s being exploited in the wild, I strongly recommend adding this protection manually when using Core Rule Set version 3.3.2 (or older). Newer, not yet released versions should have complete protection out of the box.

First modify /etc/apache2/mods-enabled/security2.conf so that it looks like this:

<IfModule security2_module>
        # Default Debian dir for modsecurity's persistent data
        SecDataDir /var/cache/modsecurity

        # Include all the *.conf files in /etc/modsecurity.
        # Keeping your local configuration in that directory
        # will allow for an easy upgrade of THIS file and
        # make your life easier
        IncludeOptional /etc/modsecurity/*.conf

        # Include OWASP ModSecurity CRS rules if installed
        IncludeOptional /usr/share/modsecurity-crs/*.load
        SecRuleUpdateTargetById 932130 "REQUEST_HEADERS"
</IfModule>

Then create the file /etc/modsecurity/99-CVE-2021-44228.conf with this content:

# Generic rule against CVE-2021-44228 (Log4j / Log4Shell)
# See https://coreruleset.org/20211213/crs-and-log4j-log4shell-cve-2021-44228/
SecRule REQUEST_LINE|ARGS|ARGS_NAMES|REQUEST_COOKIES|REQUEST_COOKIES_NAMES|REQUEST_HEADERS|XML://*|XML://@* "@rx (?:\${[^}]{0,4}\${|\${(?:jndi|ctx))" \
    "id:1005,\
    phase:2,\
    block,\
    t:none,t:urlDecodeUni,t:cmdline,\
    log,\
    msg:'Potential Remote Command Execution: Log4j CVE-2021-44228', \
    tag:'application-multi',\
    tag:'language-java',\
    tag:'platform-multi',\
    tag:'attack-rce',\
    tag:'OWASP_CRS',\
    tag:'capec/1000/152/137/6',\
    tag:'PCI/6.5.2',\
    tag:'paranoia-level/1',\
    ver:'OWASP_CRS/3.4.0-dev',\
    severity:'CRITICAL',\
    setvar:'tx.rce_score=+%{tx.critical_anomaly_score}',\
    setvar:'tx.anomaly_score_pl1=+%{tx.critical_anomaly_score}'"

In March 2022, CVE-2022-22963, another remote code execution (RCE) vulnerability in the Spring framework, was published. The Core Rule Set developers created a new rule to protect against this vulnerability which will be included in the next version, but the rule can be added manually if you are running Core Rule Set version 3.3.2 or older.

To do so, create the file /etc/modsecurity/99-CVE-2022-22963.conf with this content:

# This rule is also triggered by the following exploit(s):
# - https://www.rapid7.com/blog/post/2022/03/30/spring4shell-zero-day-vulnerability-in-spring-framework/
# - https://www.ironcastle.net/possible-new-java-spring-framework-vulnerability-wed-mar-30th/
#
SecRule ARGS|ARGS_NAMES|REQUEST_COOKIES|!REQUEST_COOKIES:/__utm/|REQUEST_COOKIES_NAMES|REQUEST_BODY|REQUEST_HEADERS|XML:/*|XML://@* \
    "@rx (?:class\.module\.classLoader\.resources\.context\.parent\.pipeline|springframework\.context\.support\.FileSystemXmlApplicationContext)" \
    "id:1006,\
    phase:2,\
    block,\
    t:urlDecodeUni,\
    msg:'Remote Command Execution: Malicious class-loading payload',\
    logdata:'Matched Data: %{MATCHED_VAR} found within %{MATCHED_VAR_NAME}',\
    tag:'application-multi',\
    tag:'language-java',\
    tag:'platform-multi',\
    tag:'attack-rce',\
    tag:'OWASP_CRS',\
    tag:'capec/1000/152/248',\
    tag:'PCI/6.5.2',\
    tag:'paranoia-level/2',\
    ver:'OWASP_CRS/3.4.0-dev',\
    severity:'CRITICAL',\
    setvar:'tx.rce_score=+%{tx.critical_anomaly_score}',\
    setvar:'tx.anomaly_score_pl2=+%{tx.critical_anomaly_score}'"

Don’t forget to reload your Apache configuration after adding these rules.

Testing ModSecurity and checking the logs

We can now easily test ModSecurity by doing a request which tries to abuse a cross-site scripting (XSS) vulnerability:

$ curl -I "https://example.org/?search=<script>alert('CRS+Sandbox+Release')</script>"

This should return HTTP response 403 (Forbidden).

Whenever something hits your ModSecurity rules, this will be logged in your Apache error log. The above request has created these messages in the error log:

[Sat Apr 09 22:22:02.716558 2022] [:error] [pid 847584:tid 140613499016960] [client client-ip:49688] [client client-ip] ModSecurity: Warning. detected XSS using libinjection. [file "/usr/share/modsecurity-crs/rules/REQUEST-941-APPLICATION-ATTACK-XSS.conf"] [line "55"] [id "941100"] [msg "XSS Attack Detected via libinjection"] [data "Matched Data: XSS data found within ARGS:search: <script>alert('CRS Sandbox Release')</script>"] [severity "CRITICAL"] [ver "OWASP_CRS/3.3.2"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-xss"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "capec/1000/152/242"] [hostname "example.org"] [uri "/"] [unique_id "YlHq6gKxO9SgyEd0xH9N5gADLgA"]
[Sat Apr 09 22:22:02.716969 2022] [:error] [pid 847584:tid 140613499016960] [client client-ip:49688] [client client-ip] ModSecurity: Warning. Pattern match "(?i)<script[^>]*>[\\\\s\\\\S]*?" at ARGS:search. [file "/usr/share/modsecurity-crs/rules/REQUEST-941-APPLICATION-ATTACK-XSS.conf"] [line "82"] [id "941110"] [msg "XSS Filter - Category 1: Script Tag Vector"] [data "Matched Data: <script> found within ARGS:search: <script>alert('CRS Sandbox Release')</script>"] [severity "CRITICAL"] [ver "OWASP_CRS/3.3.2"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-xss"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "capec/1000/152/242"] [hostname "example.org"] [uri "/"] [unique_id "YlHq6gKxO9SgyEd0xH9N5gADLgA"]
[Sat Apr 09 22:22:02.717249 2022] [:error] [pid 847584:tid 140613499016960] [client client-ip:49688] [client client-ip] ModSecurity: Warning. Pattern match "(?i:(?:<\\\\w[\\\\s\\\\S]*[\\\\s\\\\/]|['\\"](?:[\\\\s\\\\S]*[\\\\s\\\\/])?)(?:on(?:d(?:e(?:vice(?:(?:orienta|mo)tion|proximity|found|light)|livery(?:success|error)|activate)|r(?:ag(?:e(?:n(?:ter|d)|xit)|(?:gestur|leav)e|start|drop|over)|op)|i(?:s(?:c(?:hargingtimechange ..." at ARGS:search. [file "/usr/share/modsecurity-crs/rules/REQUEST-941-APPLICATION-ATTACK-XSS.conf"] [line "199"] [id "941160"] [msg "NoScript XSS InjectionChecker: HTML Injection"] [data "Matched Data: <script found within ARGS:search: <script>alert('CRS Sandbox Release')</script>"] [severity "CRITICAL"] [ver "OWASP_CRS/3.3.2"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-xss"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "capec/1000/152/242"] [hostname "example.org"] [uri "/"] [unique_id "YlHq6gKxO9SgyEd0xH9N5gADLgA"]
[Sat Apr 09 22:22:02.718018 2022] [:error] [pid 847584:tid 140613499016960] [client client-ip:49688] [client client-ip] ModSecurity: Access denied with code 403 (phase 2). Operator GE matched 5 at TX:anomaly_score. [file "/usr/share/modsecurity-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "93"] [id "949110"] [msg "Inbound Anomaly Score Exceeded (Total Score: 15)"] [severity "CRITICAL"] [ver "OWASP_CRS/3.3.2"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [hostname "example.org"] [uri "/"] [unique_id "YlHq6gKxO9SgyEd0xH9N5gADLgA"]
[Sat Apr 09 22:22:02.718596 2022] [:error] [pid 847584:tid 140613499016960] [client client-ip:49688] [client client-ip] ModSecurity: Warning. Operator GE matched 5 at TX:inbound_anomaly_score. [file "/usr/share/modsecurity-crs/rules/RESPONSE-980-CORRELATION.conf"] [line "91"] [id "980130"] [msg "Inbound Anomaly Score Exceeded (Total Inbound Score: 15 - SQLI=0,XSS=15,RFI=0,LFI=0,RCE=0,PHPI=0,HTTP=0,SESS=0): individual paranoia level scores: 15, 0, 0, 0"] [ver "OWASP_CRS/3.3.2"] [tag "event-correlation"] [hostname "example.org"] [uri "/"] [unique_id "YlHq6gKxO9SgyEd0xH9N5gADLgA"]

In the first 3 lines we see that we hit different filters which check for XSS vulnerabilities, more specifically rules 941100, 941110 and 941160 all of them having the tag paranoia-level/1.

Then the fourth line shows that we hit rule 949110, which caused the web server to return the HTTP 403 Forbidden response because the inbound anomaly score, 15, is higher than 5. Rule 980130 then gives us some more information about the scoring: we hit a score of 15 at paranoia level 1, while rules at the other paranoia levels contributed 0 to the total score. We also see the scores for the individual types of attack: in this case all 15 points were scored by rules detecting XSS attacks. This is the meaning of the different abbreviations used:

  • SQLI: SQL injection
  • XSS: cross-site scripting
  • RFI: remote file inclusion
  • LFI: local file inclusion
  • RCE: remote code execution
  • PHPI: PHP injection
  • HTTP: HTTP violation
  • SESS: session fixation
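
If you ever need to tune how strict the blocking is, the threshold of 5 is not hard-coded: it is set in the Core Rule Set setup file. As a hedged sketch for CRS 3.x (on Debian this file is typically /etc/modsecurity/crs/crs-setup.conf; adjust the path to your installation), you can uncomment and change this SecAction:

SecAction \
 "id:900110,\
  phase:1,\
  nolog,\
  pass,\
  t:none,\
  setvar:tx.inbound_anomaly_score_threshold=5,\
  setvar:tx.outbound_anomaly_score_threshold=4"

Raising the inbound threshold makes the rule set less strict, so keep it at the default of 5 unless you understand the consequences.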

More detailed logs about the traffic hitting the rules can be found in the file /var/log/apache2/modsec_audit.log.
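
For example, to look up the full transaction belonging to one of the blocked requests above, you could search the audit log for the unique_id shown in the error log (a quick sketch assuming the default Debian log location and the serial audit log format):

# grep -A 40 YlHq6gKxO9SgyEd0xH9N5gADLgA /var/log/apache2/modsec_audit.log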

Fixing false positives

First of all, in order to minimize the amount of false positives, you should set the WEBAPPID variable if you are using one of the known web applications for which the Core Rule Set has a default exclusion set. These web applications are currently WordPress, Drupal, Dokuwiki, Nextcloud, Xenforo and cPanel. You can do so by using the SecWebAppId option in a VirtualHost or Location definition in the Apache configuration. For example, if you have a VirtualHost which is used by Nextcloud, set this within the VirtualHost definition:

<VirtualHost nextcloud.example.org>
    ...OTHER OPTIONS HERE...
    <IfModule security2_module>
        SecWebAppId "nextcloud"
    </IfModule>
</VirtualHost>

If you have a WordPress installation in a subdirectory, then add SecWebAppId within Location tags.

<Location /wordpress>
    <IfModule security2_module>
        SecWebAppId "wordpress-mysite"
    </IfModule>
</Location>

If you have multiple WordPress sites, give each of them a unique WEBAPPID whose name starts with wordpress. Add a different suffix for every instance so that each one runs in its own application namespace in ModSecurity, as in the sketch below.
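
As a sketch, assuming two hypothetical WordPress instances installed under /blog and /shop, this could look like:

<Location /blog>
    <IfModule security2_module>
        SecWebAppId "wordpress-blog"
    </IfModule>
</Location>
<Location /shop>
    <IfModule security2_module>
        SecWebAppId "wordpress-shop"
    </IfModule>
</Location>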

If you still encounter false positives, you can completely disable rules with the configuration directive SecRuleRemoveById. I strongly recommend not disabling rules globally, but limiting their removal to the specific locations where the false positives occur, for example by putting them within <Location> or <LocationMatch> tags in the Apache configuration. For example:

<LocationMatch ^/wp-admin/(admin-ajax|post)\.php>
    <IfModule security2_module>
        SecRuleRemoveById 941160 941100 941130 932105 932130 932100
    </IfModule>
</LocationMatch>

Take care not to disable any of the 949*, 959* and 980* rules: disabling the 949* and 959* rules would disable all the blocking rules, while disabling the 980* rules would give you less information about what is happening in the logs.

Conclusion

ModSecurity and the Core Rule Set offer an additional security layer for web servers in your defence in depth strategy. I strongly recommend implementing this on your servers because it makes it harder to abuse security vulnerabilities.

Keep an eye on the Core Rule Set blog and Twitter account: sometimes they post new rules for specific new critical vulnerabilities, which can be worthwhile to add to your configuration.

Using HTTP headers to protect your visitor’s security and privacy

Recently there has been a lot of controversy over Google starting to use Federated Learning of Cohorts (FLoC) in its Chrome browser. This new technique is used to track users without using third party cookies, but has severe privacy implications because it actually makes fingerprinting users easier and can reveal your interests to websites.

To prevent tracking by FLoC and other tracking techniques, there is only one good solution: stop using Google Chrome. The best privacy-friendly browser is Firefox, especially if you set its enhanced tracking protection to strict. If you really need to use Chrome, then at least install one of the open source extensions which disable FLoC, as well as Privacy Badger for protection against other tracking.

As a website owner, you can also do something to protect your users. You can opt your website out of cohort computation by sending the Permissions-Policy: interest-cohort=() header.

This can be easily done for all your websites by modifying your Apache configuration. While at it, you should set some other security and privacy related headers, notably:

  • X-Frame-Options "SAMEORIGIN": this makes sure that browsers will not allow your website to be included in an iframe on another domain.
  • X-Content-Type-Options "nosniff": This will prevent the browser from trying to automatically detect the file type of a downloaded file instead of using the MIME type sent by the server. This can mitigate attacks where a hacker manages to upload a malicious file by giving it a filename which makes it look like a harmless file type which is then served to your visitors.
  • Referrer-Policy "no-referrer-when-downgrade": when a visitor clicks on a link, the browser will only send the referrer when it’s not going from a HTTPS to a HTTP connection. This is fine if your URLs don’t contain any private information. If they do, then consider using strict-origin-when-cross-origin, so that only your domain name instead of the complete URL is sent as referrer if people click on a link leading to an external website, or even same-origin, which prevents any referrer being sent to external sites. You should probably do this for an internal website, web application or wiki, webmail, etc. More information about Referrer-Policy

To set these in Apache in Debian, create a file /etc/apache2/conf-available/security-headers.conf with these contents:

<IfModule mod_headers.c>
   Header always set X-Frame-Options "SAMEORIGIN"
   Header always set X-Content-Type-Options "nosniff"
   Header always set Referrer-Policy "no-referrer-when-downgrade"
   Header always set Permissions-Policy "interest-cohort=()"
</IfModule>

Then make sure the mod_headers module is loaded and this file is enabled by running these commands:

# a2enmod headers
# a2enconf security-headers
# systemctl reload apache2
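
You can verify that the headers are actually being sent, for example with curl (replace example.org with your own domain):

$ curl -sI https://example.org/ | grep -iE 'x-frame-options|x-content-type-options|referrer-policy|permissions-policy'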

Another important header to set in your SSL virtualhosts is the HSTS header: it ensures that the browser will automatically use HTTPS every time when connecting to the website in the future. Place this in your SSL enabled virtualhost:

<IfModule mod_headers.c>
   Header always set Strict-Transport-Security "max-age=63072000"
</IfModule>

Then you should also add this to your non-SSL virtualhost to redirect all visitors using HTTP to HTTPS:

<IfModule mod_rewrite.c>
   RewriteEngine on
   RewriteCond %{HTTPS} !=on
   RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [R=301,L]
</IfModule>

Of course make sure mod_rewrite is enabled if that’s not yet the case:

# a2enmod rewrite
# systemctl reload apache2

You can check your server configuration on securityheaders.com. There you can also find more information about Cross-Origin-Embedder-Policy, Cross-Origin-Opener-Policy and Cross-Origin-Resource-Policy, some other security related headers. Because they require more changes to your website to implement correctly, I’m not discussing them here.


Running different PHP applications as different users

Often you run different web applications on the same web server. For security reasons, it is strongly recommended to run them in separate PHP-FPM processes under different user accounts. This way permissions can be set so that the user account of one PHP application cannot access the files of another PHP application. Also, open_basedir can be set so that accessing any files outside the base directory becomes impossible.

To create a separate PHP-FPM process for a PHP application on Debian Stretch with PHP 7.0, create a file /etc/php/7.0/fpm/pool.d/webapp.conf with these contents:

[webapp]
user = webapp_php
group = webapp_php
listen = /run/php/php7.0-webapp-fpm.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 12
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 2
pm.max_requests = 5000
rlimit_core = unlimited
php_admin_value[open_basedir] = /home/webapp/public_html

Replace webapp by a unique name for your web application. You can actually copy the default www.conf file and adapt it to your needs.
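
For example (paths are for PHP 7.0 on Debian; adjust the version to your installation):

# cp /etc/php/7.0/fpm/pool.d/www.conf /etc/php/7.0/fpm/pool.d/webapp.conf
# editor /etc/php/7.0/fpm/pool.d/webapp.conf

Don’t forget to rename the [www] section header to [webapp] and to adapt the user, group and listen socket path as shown above.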

Create the webapp_php user, with /bin/false as shell and login disabled, to secure it against login attacks:

# adduser --system --group --disabled-login --shell /bin/false --no-create-home --home /home/webapp webapp_php

In the above example the webapp is located in /home/webapp, but you can of course also use a directory somewhere in /var/www.

I strongly recommend against making all your PHP files in /home/webapp owned by webapp_php. That is a dangerous situation, because then PHP can overwrite the code itself, which makes it possible for malware to overwrite your PHP files with malicious code. Only make the directories PHP really needs to write to (for example a directory where files uploaded in your web application are stored) writable for the webapp_php user. The code itself should be owned by a different user than webapp_php; it can be a dedicated user account, or just root.
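
As a sketch of such a permission scheme, assuming the code is owned by root and the application only needs to write to a hypothetical uploads directory:

# chown -R root:root /home/webapp/public_html
# chmod -R u=rwX,go=rX /home/webapp/public_html
# chown -R webapp_php /home/webapp/public_html/uploads

This leaves the code readable but not writable for webapp_php, while the uploads directory remains writable for it.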

Finally we need to configure Apache to contact the right php-fpm instance for the web application. Create a file /etc/apache2/conf-available/php7.0-webapp-fpm.conf:

<Directory /home/webapp/public_html>

# Redirect to local php-fpm if mod_php is not available
    <IfModule proxy_fcgi_module>
        # Enable http authorization headers
        <IfModule setenvif_module>
        SetEnvIfNoCase ^Authorization$ "(.+)" HTTP_AUTHORIZATION=$1
        </IfModule>

        <FilesMatch ".+\.ph(p[3457]?|t|tml)$">
            SetHandler "proxy:unix:/run/php/php7.0-webapp-fpm.sock|fcgi://localhost-webapp"
        </FilesMatch>
        <FilesMatch ".+\.phps$">
            # Deny access to raw php sources by default
            # To re-enable it's recommended to enable access to the files
            # only in specific virtual host or directory
            Require all denied
        </FilesMatch>
        # Deny access to files without filename (e.g. '.php')
        <FilesMatch "^\.ph(p[3457]?|t|tml|ps)$">
            Require all denied
        </FilesMatch>
    </IfModule>
</Directory>

This file is based on the default php7.0-fpm.conf. You will need to create a symlink to make sure this gets activated:

# cd /etc/apache2/conf-enabled
# ln -s ../conf-available/php7.0-webapp-fpm.conf .
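
Alternatively, the a2enconf helper used earlier in this guide creates the same symlink for you:

# a2enconf php7.0-webapp-fpm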

Now restart your Apache and PHP-FPM services and you should be ready. You can see which user the code in /home/webapp/public_html runs as in the output of the phpinfo() function.
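
For example (assuming the Debian service names for PHP 7.0), you can restart the services and list the pool processes; each worker’s process title contains the name of the pool it belongs to:

# systemctl restart php7.0-fpm apache2
# ps -o user,cmd -C php-fpm7.0

The workers of the webapp pool should show up as running under the webapp_php user.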