Denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks are among the most common cyberattacks today. They are fairly easy to execute, and their consequences range from annoying to very problematic, for example when a crucial web service of a company or public service becomes inaccessible. In the current geopolitical situation, DDoS attacks are a very popular method among hacktivists.
In this guide I propose a configuration which can help against some of these attacks. I recommend implementing such measures on all web servers. Even if they might not completely prevent an attack, they at least form a basis which you can easily adapt to future attacks. These measures can also help against spammers and scrapers and improve the general performance of your server.
Firewall protection
Blocking connections from known bad IP addresses
We don't want to waste CPU time on packets coming from known bad IP addresses, so we block them as early as possible. Follow my earlier article to block known bad IP addresses.
Geoblocking connections
If you need a quick way to mitigate a DoS or DDoS attack you are experiencing, you can consider geoblocking connections in your firewall. For example, you could temporarily allow connections to your web server only from IP addresses of your own country, if that is where most of your visitors come from. You can download lists of all IPv4 and IPv6 addresses per country. See the previously mentioned article about using iplists with Foomuuri for how to implement geoblocking in the firewall configuration.
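As an illustration, assuming you have defined iplists @belgium4 and @belgium6 containing all Belgian IPv4 and IPv6 addresses (the names are just an example), an emergency geoblock in the public-localhost section could look like this sketch:
http saddr @belgium4 accept
http saddr @belgium6 accept
https saddr @belgium4 accept
https saddr @belgium6 accept
http drop
https drop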
Rate limiting connections
Then we can rate limit new connections per source IP address in our firewall.
In Foomuuri, you can set something like this in the public-localhost section:
http saddr_rate "3/minute burst 20" saddr_rate_name http_limit
https saddr_rate "3/minute burst 20" saddr_rate_name http_limit
http drop counter "http_limit" log "http_limit"
https drop counter "http_limit" log "http_limit"
This allows a remote host to open a burst of 20 new connections per source IP to port 80 and port 443 together, after which new connections are limited to 3 per minute. Connection attempts above that limit will be dropped, counted and logged with the prefix http_limit. You will need to adapt these numbers to what your own server requires. When under attack, I recommend removing the last 2 lines: you don't want to waste time logging everything. When setting up rules which define limits per IP address, take into account users who share a public IP via NAT, such as on corporate networks.
The above rules still allow one source IP address to have more than 20 simultaneous connections open with your web server, because they only limit the rate at which new connections can be made. If you want to limit the total number of open connections, you can use something like this:
http saddr_rate "ct count 20" saddr_rate_name http_limit
https saddr_rate "ct count 20" saddr_rate_name http_limit
http drop counter "http_limit" log "http_limit"
https drop counter "http_limit" log "http_limit"
It is also possible to implement a global connection limit to the http and https ports, irrespective of the source IP address. While this will be very effective against DDoS attacks, you will also block unmalicious visitors of your site. If you implement global rate limiting, add rules before to allow your own IP address, so that you will not be blocked yourself. In case of emergency and if all else fails, you could combine global rate limiting with geoblocking: first create rules which allow connections from certain countries and regions and after that place the global rate limit rules which will then be applied to connections from other countries and regions.
For example:
http saddr_rate "ct count 20" saddr_rate_name http_limit saddr @belgium4
https saddr_rate "ct count 20" saddr_rate_name http_limit saddr @belgium6
http saddr @belgium4 drop
http saddr @belgium6 drop
http global_rate "10/second burst 20"
https global_rate "10/second burst 20"
If the iplists @belgium4 and @belgium6 contain all Belgian IPv4 and IPv6 addresses, then these rules will allow up to 20 simultaneous connections per source IP from Belgium. More connections per source IP from Belgium will be dropped. From other parts of the world, we allow a burst of 20 new connections, after which there is a global limit of 10 new connections per second, irrespective of the source address.
Enable SYN cookies
SYN cookies are a method which helps mitigate SYN flood attacks, in which a server is flooded with SYN requests. When the backlog queue, where SYN requests are stored, is full, a SYN cookie is sent in the server's SYN+ACK response instead of creating an entry in the backlog queue. The SYN cookie contains a cryptographic encoding of a SYN queue entry, which the server can reconstruct from the client's ACK response.
SYN cookies are enabled by default in all Debian Linux kernels because they set CONFIG_SYN_COOKIES=y. If you want to check whether SYN cookies are enabled, run
# sysctl net.ipv4.tcp_syncookies
If the value is 0, you can enable them by running
# sysctl -w net.ipv4.tcp_syncookies=1
To make this setting persist after a reboot, create the file /etc/sysctl.d/syncookies.conf with this content:
net.ipv4.tcp_syncookies=1
Configuring Apache to block Denial-of-Service attacks
Tuning Apache and the event MPM
In /etc/apache2/apache2.conf check these settings:
Timeout 60
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
These are the defaults in recent upstream Apache versions, but they might be different if you are using an older packaged Apache version. Debian notably still sets a much higher Timeout value by default. If Timeout and KeepAliveTimeout are set too high, clients can keep a connection open for far too long, filling up the available workers. If you are under attack, you can consider setting the Timeout value even lower and disabling KeepAlive completely. See below for how to do the latter automatically with mod_qos.
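As an illustration of such an emergency configuration (the exact values are just an example, adapt them to your situation):
Timeout 10
KeepAlive Off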
Make sure you are using the Event MPM in Apache, because it’s the most performant. You can check with this command:
# apache2ctl -M | grep mpm
mpm_event_module (shared)
Then we need to configure the Event MPM in /etc/apache2/mods-enabled/mpm_event.conf:
StartServers 5
ServerLimit 16
MinSpareThreads 100
MaxSpareThreads 400
ThreadLimit 120
ThreadsPerChild 100
MaxRequestWorkers 1000
MaxConnectionsPerChild 50000
We start 5 (StartServers) child server processes, with each child having up to 100 (ThreadsPerChild) threads to deal with connections. We want 100 to 400 (MinSpareThreads to MaxSpareThreads) spare threads available to handle new requests. MaxRequestWorkers sets an upper limit of 1000 requests we can handle simultaneously. ServerLimit defines the maximum number of child server processes; you only need to increase the default value of 16 if MaxRequestWorkers / ThreadsPerChild is higher than 16, and with the values above 1000 / 100 = 10, so the default suffices. ThreadLimit is the maximum value you can set ThreadsPerChild to by just restarting Apache. If you need to increase ThreadsPerChild to a value higher than the current ThreadLimit, you will need to modify ThreadLimit too and stop and start the parent process manually. Don't set the ThreadLimit value much higher than ThreadsPerChild, because it increases memory consumption. After 50000 connections (MaxConnectionsPerChild), a child process will exit and a new one will be spawned, which is useful in case of memory leaks.
Enabling compression and client caching of content
Enable the deflate and brotli modules in Apache to serve compressed files to clients which support this:
# a2enmod deflate
# a2enmod brotli
Then in /etc/apache2/mods-enabled/deflate.conf put this:
<IfModule mod_filter.c>
AddOutputFilterByType DEFLATE text/html text/plain text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/x-javascript application/javascript application/ecmascript
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE text/json application/json
AddOutputFilterByType DEFLATE application/wasm
</IfModule>
and in /etc/apache2/mods-enabled/brotli.conf this:
<IfModule mod_filter.c>
AddOutputFilterByType BROTLI_COMPRESS text/html text/plain text/xml
AddOutputFilterByType BROTLI_COMPRESS text/css
AddOutputFilterByType BROTLI_COMPRESS application/x-javascript application/javascript application/ecmascript
AddOutputFilterByType BROTLI_COMPRESS application/rss+xml
AddOutputFilterByType BROTLI_COMPRESS application/xml
AddOutputFilterByType BROTLI_COMPRESS text/json application/json
AddOutputFilterByType BROTLI_COMPRESS application/wasm
</IfModule>
To let browsers cache static files, so that they don't have to redownload all images when a user comes back, we set some Expires headers. First make sure the expires module is enabled:
# a2enmod expires
Then in /etc/apache2/mods-enabled/expires.conf set this:
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault A120
ExpiresByType image/x-icon A604800
ExpiresByType application/x-javascript A604800
ExpiresByType application/javascript A604800
ExpiresByType text/css A604800
ExpiresByType image/gif A604800
ExpiresByType image/png A604800
ExpiresByType image/jpeg A604800
ExpiresByType image/webp A604800
ExpiresByType image/avif A604800
ExpiresByType image/x-ms-bmp A604800
ExpiresByType image/svg+xml A604800
ExpiresByType image/vnd.microsoft.icon A604800
ExpiresByType text/plain A604800
ExpiresByType application/x-shockwave-flash A604800
ExpiresByType video/x-flv A604800
ExpiresByType video/mp4 A604800
ExpiresByType application/pdf A604800
ExpiresByType application/font-woff A604800
ExpiresByType font/woff A604800
ExpiresByType font/woff2 A604800
ExpiresByType application/vnd.ms-fontobject A604800
ExpiresByType text/html A120
</IfModule>
This allows the mentioned MIME types to be cached for a week (604800 seconds after access), while HTML and other files may be cached for up to 2 minutes (120 seconds).
ModSecurity to block web attacks
Configure ModSecurity with the OWASP Core Rule Set in order to avoid wasting resources on web attacks. Also block old HTTP versions: this alone will already stop some unsophisticated security scanners and bots.
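The exact rule for blocking old HTTP versions is not prescribed here, but one possible way is with mod_rewrite. This sketch rejects HTTP/1.0 and HTTP/0.9 requests with a 403 error; note that some legitimate clients behind old proxies may still use HTTP/1.0, so test before deploying it:
RewriteEngine on
RewriteCond "%{SERVER_PROTOCOL}" "^HTTP/(0\.9|1\.0)$"
RewriteRule ^ - [F]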
mod_reqtimeout to mitigate Slowloris attacks
The Slowloris attack is an easy attack in which the attacker opens many connections to your web server, but intentionally completes its requests very slowly. Your server's worker threads stay busy waiting for the requests to complete, which never happens. This way, one client can occupy all worker threads of the web server without using much bandwidth.
Debian enables protection against this in Apache by default, by means of mod_reqtimeout. The configuration can be found in /etc/apache2/mods-enabled/reqtimeout.conf:
RequestReadTimeout header=20-40,minrate=500
RequestReadTimeout body=10,minrate=500
The first line limits the wait time until the first byte of the request line is received to 20 seconds; after that, at least 500 bytes per second are required, with a total limit of 40 seconds until all request headers are received. The second line limits the wait time until the first byte of the request body is received to 10 seconds, after which 500 bytes per second are required.
If a connection is slower than the above requirements, it will be closed by the server.
mod_qos to limit requests on your server
First an important caveat: the libapache2-mod-qos package included in Debian 12 Bookworm is broken. If you install it, your Apache web server will crash at startup. You can find a fixed libapache2-mod-qos package for Debian 12 in my bookworm-frehi repository.
Enable the module with
# a2enmod qos
Then we set some configuration options in /etc/apache2/mods-enabled/qos.conf:
# allows keep-alive support till the server reaches 80% of all MaxRequestWorkers
QS_SrvMaxConnClose 80%
# don't allow a single client to open more than 10 TCP connections if
# the server has more than 600 connections:
QS_SrvMaxConnPerIP 10 600
# minimum data rate (bytes/sec) when the server
# has 100 or more open TCP connections:
QS_SrvMinDataRate 1000 32000 100
# disables connection restrictions for certain clients:
QS_SrvMaxConnExcludeIP 192.0.2.
QS_SrvMaxConnExcludeIP 2001:db8::2
QS_ClientEntries 50000
We disable keep-alive as soon as we reach 80% of MaxRequestWorkers, so that threads are no longer occupied by keep-alive connections. You can also use an absolute number instead of a percentage if you prefer. As soon as we reach 600 simultaneous connections, we only allow 10 connections per IP address. As soon as there are more than 100 connections, we require a minimum data rate of 1000 bytes per second. The required transfer rate increases linearly up to 32000 bytes per second as the number of connections approaches MaxRequestWorkers. The QS_ClientEntries setting defines how many different IP addresses mod_qos keeps track of; by default it's 50000. On very busy servers you will need to increase this, but keep in mind that this increases memory usage. Use QS_SrvMaxConnExcludeIP to exclude certain IP addresses from these limitations.
Then we want to limit the number of requests a client can make, focusing on the resource-intensive ones. Attackers very often abuse the search form, because it stresses not only your web server, but also your database and your web application itself. If one of them, or the combination of the three, can't cope with the requests, your application will be knocked offline. Other forms, such as contact forms and authentication forms, are also frequently targeted.
First I add this to the VirtualHost I want to protect:
RewriteEngine on
RewriteCond "%{REQUEST_URI}" "^/$"
RewriteCond "%{QUERY_STRING}" "s="
RewriteRule ^ - [E=LimitWPSearch]
RewriteCond "%{REQUEST_URI}" "^/wp-login.php$"
RewriteCond "%{REQUEST_METHOD}" "POST"
RewriteRule ^ - [E=LimitWPLogin]
RewriteCond "%{REQUEST_URI}" "^/wp-json/"
RewriteRule ^ - [E=LimitWPJson]
RewriteCond "%{REQUEST_URI}" "^/wp-comments-post.php"
RewriteRule ^ - [E=LimitWPComment]
RewriteCond "%{REQUEST_URI}" "^/wp-json/contact-form-7/"
RewriteCond "%{REQUEST_METHOD}" "POST"
RewriteRule ^ - [E=LimitWPFormPost]
RewriteCond "%{REQUEST_URI}" "^/xmlrpc.php"
RewriteRule ^ - [E=LimitWPXmlrpc]
RewriteCond "%{REQUEST_METHOD}" "POST"
RewriteRule ^ - [E=LimitWPPost]
RewriteCond "%{REQUEST_URI}" ".*"
RewriteRule ^ - [E=LimitWPAll]
The website I want to protect runs on WordPress. I use mod_rewrite to set a variable when certain requests are made: LimitWPSearch when a search is done, LimitWPLogin when data is submitted to the login form, LimitWPJson when something under /wp-json/ is accessed, LimitWPComment when a comment is submitted, LimitWPXmlrpc when /xmlrpc.php is accessed, LimitWPFormPost when a form created with the Contact Form 7 plugin is posted, LimitWPPost when anything is sent via POST, and LimitWPAll on any request. A single request can trigger multiple rules.
Then we go back to our global server configuration to define the exact values for these limits. You can again do this in /etc/apache2/mods-enabled/qos.conf or /etc/apache2/conf-enabled/qos.conf or something like that:
QS_ClientEventLimitCount 20 40 LimitWPPost
QS_ClientEventLimitCount 400 120 LimitWPAll
QS_ClientEventLimitCount 5 60 LimitWPSearch
QS_ClientEventLimitCount 50 600 LimitWPJson
QS_ClientEventLimitCount 10 3600 LimitWPLogin
QS_ClientEventLimitCount 3 60 LimitWPComment
QS_ClientEventLimitCount 6 60 LimitWPXmlrpc
QS_ClientEventLimitCount 3 60 LimitWPFormPost
For example, when a client triggers the LimitWPSearch event 5 times within 60 seconds, the server will return an HTTP error code. In practice, this means a client can do up to 4 successful searches per minute. You will have to adapt these settings to your own web applications. In case you get hit by an attack, you can easily adjust the values or add new limits.
Using Fail2ban to block attackers
Now I want to go further and use Fail2ban to block offenders of my mod_qos rules in my firewall for a longer time.
Create /etc/fail2ban/filter.d/apache-mod_qos.conf:
[INCLUDES]
# Read common prefixes. If any customizations available -- read them from
# apache-common.local
before = apache-common.conf
[Definition]
failregex = mod_qos\(\d+\): access denied,.+\sc=<HOST>
ignoreregex =
Then create /etc/fail2ban/jail.d/apache-mod_qos.conf:
[apache-mod_qos]
port = http,https
backend = pyinotify
journalmatch =
logpath = /var/log/apache2/*error.log
/var/log/apache2/*/error.log
maxretry = 3
enabled = true
Make sure you have the package python3-pyinotify installed.
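If it is missing, you can install it the usual way:
# apt install python3-pyinotify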
Note that a stateful firewall will by default only block new connections, so you might still see some violations over existing connections even after an IP has been banned by Fail2ban.
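If you want a ban to also affect connections that are already established, one option (my own addition, not required for the setup above) is to delete the offender's conntrack entries with the conntrack tool:
# apt install conntrack
# conntrack -D -s 192.0.2.1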
Optimizing your web applications
Databases (MariaDB, MySQL, PostgreSQL)
Ideally you should run your DBMS on a separate server. If you run MariaDB (or MySQL), you can use mysqltuner to get automatic performance tuning recommendations.
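mysqltuner is packaged in Debian and can simply be run on the database server:
# apt install mysqltuner
# mysqltuner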
Generic configuration recommendations for PostgreSQL can be generated with PgTune.
WordPress
If you are using WordPress, I recommend setting up the Autoptimize, WebP Express, Avif Express and WP Super Cache plugins. Autoptimize can aggregate and minify your HTML, CSS and JavaScript files, reducing bandwidth usage and the number of requests needed to load your site. WebP Express and Avif Express will automatically convert your JPEG, GIF and PNG images to the more efficient WebP and AVIF formats, which again reduces bandwidth.
WP Super Cache can cache your pages, so that they don't have to be dynamically generated for every request. I strongly recommend choosing the Expert cache delivery method on the Advanced page of the WP Super Cache settings. You will need to set up some rewrite rules in your .htaccess file. In this mode, Apache serves the cached pages directly, without any involvement of PHP. You can easily verify this by stopping the php8.2-fpm service and visiting your website: the cached pages will still load.
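A quick way to perform that check, with example.com standing in for your own site:
# systemctl stop php8.2-fpm
# curl -I https://example.com/
# systemctl start php8.2-fpm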
Drupal
I'm not really a Drupal specialist, but at least make sure that caching and the bandwidth optimization options are enabled in Administration – Configuration – Performance.
MediaWiki
There are some performance tuning tips for MediaWiki in the documentation.
Testing
I like to use h2load to do benchmarking of web servers. It’s part of the nghttp2-client package in Debian:
# apt install nghttp2-client
Ideally, you run benchmarks before making any of the above changes, and you retest after each modification to see its effect. When testing, run the benchmarks from a host where it does not matter if it gets locked out. It can be a good idea to stop Fail2ban, because it gets annoying if you are blocked by the firewall. You can also run the benchmarks on the host itself, bypassing the firewall rules.
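For example, to temporarily stop Fail2ban while benchmarking (don't forget to start it again afterwards):
# systemctl stop fail2ban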
First we test our mod_qos rule which limits the amount of searches we can do:
# h2load -n8 -c1 https://example.com/?s=test
starting benchmark...
spawning thread #0: 1 total client(s). 8 total requests
TLS Protocol: TLSv1.3
Cipher: TLS_AES_128_GCM_SHA256
Server Temp Key: X25519 253 bits
Application protocol: h2
progress: 12% done
progress: 25% done
progress: 37% done
progress: 50% done
progress: 62% done
progress: 75% done
progress: 87% done
progress: 100% done
finished in 277.41ms, 28.84 req/s, 544.07KB/s
requests: 8 total, 8 started, 8 done, 4 succeeded, 4 failed, 0 errored, 0 timeout
status codes: 4 2xx, 0 3xx, 4 4xx, 0 5xx
traffic: 150.93KB (154553) total, 636B (636) headers (space savings 83.13%), 150.03KB (153628) data
min max mean sd +/- sd
time for request: 2.97ms 210.18ms 32.38ms 71.98ms 87.50%
time for connect: 16.45ms 16.45ms 16.45ms 0us 100.00%
time to 1st byte: 226.50ms 226.50ms 226.50ms 0us 100.00%
req/s : 28.98 28.98 28.98 0.00 100.00%
You can see that 4 of these requests succeeded (status code 2xx) and the other 4 failed (status code 4xx). This is exactly what we configured: starting from the 5th request within a period of 60 seconds, we don't allow searches. In Apache's error log we see:
[qos:error] [pid 1473291:tid 1473481] [remote 2001:db8::2] mod_qos(067): access denied, QS_ClientEventLimitCount rule: event=LimitWPSearch, max=5, current=5, age=0, c=2001:db8::2, id=Z4PiOQcCdoj4ljQCvAkxPwAAsD8
We can check the connection rate limit configured in our firewall by benchmarking with more connections than the burst value allows:
# h2load -n10000 -c50 https://example.com/
starting benchmark...
spawning thread #0: 50 total client(s). 10000 total requests
TLS Protocol: TLSv1.3
Cipher: TLS_AES_128_GCM_SHA256
Server Temp Key: X25519 253 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done
finished in 148.78s, 67.22 req/s, 695.57KB/s
requests: 10000 total, 10000 started, 10000 done, 798 succeeded, 9202 failed, 0 errored, 0 timeout
status codes: 798 2xx, 0 3xx, 9202 4xx, 0 5xx
traffic: 101.06MB (105968397) total, 181.31KB (185665) headers (space savings 95.80%), 100.66MB (105550608) data
min max mean sd +/- sd
time for request: 19.47ms 649.75ms 50.94ms 23.46ms 86.23%
time for connect: 127.26ms 69.74s 25.28s 26.08s 80.00%
time to 1st byte: 211.04ms 69.79s 25.34s 26.06s 80.00%
req/s : 1.34 17.91 6.75 5.79 80.00%
As you can see, the "time for connect" went up to 69.74 seconds, because connections were dropped due to the limit set in Foomuuri.
If logging of these connections is enabled in the firewall, we can see this:
http_limit IN=ens3 OUT= MAC=aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa SRC=2001:db8::2 DST=2001:db8::3 LEN=80 TC=0 HOPLIMIT=45 FLOWLBL=327007 PROTO=TCP SPT=32850 DPT=443 WINDOW=64800 RES=0x00 SYN URGP=0
mod_qos also kicks in, and this results in only 798 succeeded requests, while the others were blocked.
Conclusion
DoS and DDoS attacks by hacktivist groups are very common and happen on a daily basis, often knocking websites of companies and public services offline. Unfortunately, there is not a lot of practical advice on the World Wide Web on how to mitigate these attacks. I hope this article provides some basic insight.
The proposed measures should protect well against a simple DoS attack. How well they protect against a DDoS attack is more difficult to judge. They will certainly make attacks harder, but the larger the botnet executing the attack, the more difficult it becomes to mitigate it at the server level. If the attackers manage to saturate your bandwidth, there is nothing you can do on the server itself, and you will need measures at the network level to prevent the traffic from reaching your server at all.
In any case, the proposed configuration should provide a basis for mitigating some attacks. You can easily adapt the rules to the attacks you are experiencing. I recommend implementing and testing this configuration before you experience a real attack.