Increasing PHP security with Snuffleupagus

In a previous article, I discussed how to set up ModSecurity with the Core Rule Set on Debian. This can be considered a first line of defense against malicious HTTP traffic. In a defense in depth strategy, we of course want to add additional layers of protection to our web servers. One such layer is Snuffleupagus, a PHP module which protects your web applications against various attacks. Some of the hardening features it offers are encryption of cookies, disabling XML External Entity (XXE) processing, a whitelist or blacklist of functions that can be called from eval(), and the possibility to selectively disable PHP functions with specific arguments (virtual patching).

Installing Snuffleupagus on Debian

Unfortunately there is no package for Snuffleupagus included in Debian, but it is not too difficult to build one yourself:

# apt install php-dev
$ mkdir snuffleupagus
$ cd snuffleupagus
$ git clone https://github.com/jvoisin/snuffleupagus
$ cd snuffleupagus
$ make debian

This will build the latest development code from the master branch. If you want to build the latest stable release instead, run these commands before make debian to view all tags and to check out the latest stable tag, which in this case was v0.8.2:

$ git tag
$ git checkout v0.8.2

If all went well, you should now have a file snuffleupagus_0.8.2_amd64.deb in the parent directory, which you can install:

$ cd ..
# apt install ./snuffleupagus_0.8.2_amd64.deb

Configuring Snuffleupagus

First we take the example configuration file and put it in PHP’s configuration directory. For example for PHP 7.4:

# zcat /usr/share/doc/snuffleupagus/examples/default.rules.gz > /etc/php/7.4/snuffleupagus.rules

Also take a look at the config subdirectory in the source tree for more example rules.

Edit the file /etc/php/7.4/mods-available/snuffleupagus.ini so that it looks like this:

extension=snuffleupagus.so
sp.configuration_file=/etc/php/7.4/snuffleupagus.rules

Now we will edit the file /etc/php/7.4/snuffleupagus.rules.

We need to set a secret key, which will be used for various cryptographic features:

sp.global.secret_key("YOU _DO_ NEED TO CHANGE THIS WITH SOME RANDOM CHARACTERS.");

You can generate a random key with this shell command:

$ echo $(head -c 512 /dev/urandom | tr -dc 'a-zA-Z0-9')

Simulation mode

Snuffleupagus can run rules in simulation mode. In this mode, the rule will not block further execution of the PHP file, but will just output a warning message in your log. Unfortunately there is no global simulation mode, but it has to be set per rule. You can run a rule in simulation mode by appending .simulation() to it. For example to run INI protection in simulation mode:

sp.ini_protection.simulation();

INI protection

To prevent PHP applications from modifying php.ini settings, you can set this in snuffleupagus.rules:

sp.ini_protection.enable();
sp.ini_protection.policy_readonly();

Cookie protection

The following configuration options set the SameSite attribute to Lax on the session cookie, which offers protection against CSRF on this cookie. We enforce setting the secure option on cookies, which instructs the web browser to only send them over an encrypted HTTPS connection, and we also enable encryption of the session content on the server. The encryption key is derived from the value of the global secret key you have set, the client’s user agent and the environment variable SSL_SESSION_ID.

sp.cookie.name("PHPSESSID").samesite("lax");

sp.auto_cookie_secure.enable();
sp.global.cookie_env_var("SSL_SESSION_ID");
sp.session.encrypt();

Note that the definition of cookie_env_var needs to happen before sp.session.encrypt(); which enables the encryption.

You have to make sure the variable SSL_SESSION_ID is passed to PHP. In Apache you can do so by having this in your virtualhost:

<FilesMatch "\.(cgi|shtml|phtml|php)$">
    SSLOptions +StdEnvVars
</FilesMatch>

eval white- or blacklist

eval() is used to evaluate PHP content, for example in a variable. This is very dangerous if the PHP code to be evaluated can contain user provided data. Therefore it is strongly recommended that you create a whitelist of functions which can be called by code evaluated by eval().

Start by putting this in snuffleupagus.rules and restart PHP:

sp.eval_whitelist.list().simulation();

Then test your websites and check which blocked functions show up in the logs, and add them, separated by commas, to eval_whitelist.list(). After that, remove .simulation() and restart PHP in order to activate this protection. For example:

sp.eval_whitelist.list("array_pop,array_push");

You can also use a blacklist, which only blocks certain functions. For example:

sp.eval_blacklist.list("system,exec,shell_exec,proc_open");

Limit execution to read-only PHP files

The read_only_exec() feature of Snuffleupagus will prevent PHP from executing PHP files on which the PHP process has write permissions. This will block any attack where an attacker manages to upload a malicious PHP file via a bug in your website and then attempts to execute this malicious PHP script.

It is a good practice to let your PHP scripts be owned by a different user than the PHP user, and give PHP only read-only permissions on your PHP files.
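For example, assuming your PHP code lives in /var/www/example.org and PHP-FPM runs as the www-data user (both are assumptions, adapt them to your setup), you could set ownership and permissions like this:

# chown -R root:www-data /var/www/example.org
# find /var/www/example.org -type d -exec chmod 750 {} \;
# find /var/www/example.org -type f -exec chmod 640 {} \;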

To test this feature, add this to snuffleupagus.rules:

sp.readonly_exec.simulation();

If you are sure all goes well, enable it:

sp.readonly_exec.enable();

Virtual patching

One of the main features of Snuffleupagus is virtual patching. This feature disables functions depending on the parameters and values they are given. The example rules file contains a good set of generic rules which block all kinds of dangerous behaviour. You might need to fine-tune the rules if your PHP applications hit certain rules.

Some examples of virtual-patching rules:

sp.disable_function.function("chmod").param("mode").value("438").drop();
sp.disable_function.function("chmod").param("mode").value("511").drop();

These rules will drop calls to the chmod function with the decimal values 438 and 511, which correspond to the dangerous octal permissions 0666 and 0777.

sp.disable_function.function("include_once").value_r(".(inc|phtml|php)$").allow();
sp.disable_function.function("include_once").drop();

These two rules will only allow the include_once function to include files whose file names end in inc, phtml or php. All other include_once calls will be dropped.

Using generate_rules.php to automatically generate site-specific hardening rules

In the scripts subdirectory of the Snuffleupagus source tree, there is a file named generate_rules.php. You can run this script from the command line, giving it a path to a directory with PHP files, and it will automatically generate rules which specifically allow all needed dangerous function calls, and then disable them globally. For example, to generate rules for the /usr/share/tt-rss/www and /var/www directories:

# php generate_rules.php /usr/share/tt-rss/www/ /var/www/

This will generate rules:

sp.disable_function.function("function_exists").filename("/usr/share/tt-rss/www/api/index.php").hash("fa02a93e2724d7e818c5c13f4ba8b110c47bbe7fb65b74c0aad9cff2ed39cf7d").allow();
sp.disable_function.function("function_exists").filename("/usr/share/tt-rss/www/classes/pref/prefs.php").hash("43926a95303bc4e7adefe9d2f290dd8b66c9292be836908081e3f2bd8a198642").allow();
sp.disable_function.function("function_exists").drop();

The first two rules allow these two files to call function_exists, and the last rule drops all calls to function_exists from any other file. Note that the first two rules not only limit the exception to the specified file name, but also check the SHA256 hash of the file. This way, if the file is changed, the function call will be dropped. This is the safest approach, but it can be annoying if the files are updated often or automatically, because it will break the site. In that case, you can call generate_rules.php with the --without-hash option:

# php generate_rules.php --without-hash /usr/share/tt-rss/www/ /var/www/

After you have generated the rules, you will have to add them to your snuffleupagus.rules file and restart PHP-FPM.
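For example, assuming PHP 7.4 and the rules file location used earlier, one way to do this is:

# php generate_rules.php /usr/share/tt-rss/www/ /var/www/ >> /etc/php/7.4/snuffleupagus.rules
# systemctl restart php7.4-fpm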

File Upload protection

The default Snuffleupagus rule file contains two rules which block any attempt to upload an HTML or PHP file. However, I noticed that they were not working with PHP 7.4: these rules would cause this error message:

PHP Warning: [snuffleupagus][0.0.0.0][config][log] It seems that you are filtering on a parameter 'destination' of the function 'move_uploaded_file', but the parameter does not exists. in /var/www/html/foobar.php on line 15
PHP Warning: [snuffleupagus][0.0.0.0][config][log] - 0 parameter's name: 'path' in /var/www/html/foobar.php on line 15
PHP Warning: [snuffleupagus][0.0.0.0][config][log] - 1 parameter's name: 'new_path' in /var/www/html/foobar.php on line 15

The default Snuffleupagus rules use the parameter name destination for move_uploaded_file, while on PHP 7.4 the parameter is named new_path. You will have to change the rules like this:

sp.disable_function.function("move_uploaded_file").param("new_path").value_r("\.ph").drop();<br />sp.disable_function.function("move_uploaded_file").param("new_path").value_r("\.ht").drop();

Note that on PHP 8, the parameter name is to instead of new_path.
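On PHP 8 the two rules above would therefore become:

sp.disable_function.function("move_uploaded_file").param("to").value_r("\.ph").drop();
sp.disable_function.function("move_uploaded_file").param("to").value_r("\.ht").drop();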

Enabling Snuffleupagus

To enable Snuffleupagus in PHP 7.4, link the configuration file to /etc/php/7.4/fpm/conf.d:

# cd /etc/php/7.4/fpm/conf.d
# ln -s ../../mods-available/snuffleupagus.ini 20-snuffleupagus.ini
# systemctl restart php7.4-fpm

After restarting PHP-FPM, always check the logs to make sure Snuffleupagus does not report any warnings or errors, for example because of a syntax error in your configuration:

# journalctl -u php7.4-fpm -n 50

Snuffleupagus logs

By default Snuffleupagus logs via PHP. If you are using Apache with PHP-FPM, you will then find the Snuffleupagus logs, just like any PHP warnings and errors, in the Apache error log, for example /var/log/apache2/error.log. If you encounter any problems with your website, check this log to see what is wrong.

Snuffleupagus can also be configured to log via syslog, and the documentation actually recommends this, because PHP’s logging system can be manipulated at runtime by malicious scripts. To log via syslog, add this to snuffleupagus.rules:

sp.log_media("syslog");

I give a few examples of errors you can encounter in the logs and how to fix them:

[snuffleupagus][0.0.0.0][xxe][log] A call to libxml_disable_entity_loader was tried and nopped in /usr/share/tt-rss/www/include/functions.php on line 22

tt-rss calls the function libxml_disable_entity_loader, but this is blocked by the XXE protection. Commenting out this line in snuffleupagus.rules fixes this:

sp.xxe_protection.enable();

Another example:

[snuffleupagus][0.0.0.0][disabled_function][drop] Aborted execution on call of the function 'ini_set', because its argument '$varname' content (display_errors) matched a rule in /usr/share/tt-rss/www/include/functions.php on line 37'

Modifying the PHP INI option display_errors is not allowed by this rule:

sp.disable_function.function("ini_set").param("varname").value_r("display_errors").drop();

You can completely remove (or comment out) this rule in order to disable it. But a better way is to add a rule before it which allows this call specifically for that PHP file:

sp.disable_function.function("ini_set").filename("/usr/share/tt-rss/www/include/functions.php").param("varname").value_r("display_errors").allow();

If you get something like this:

[snuffleupagus][0.0.0.0][disabled_function][drop] Aborted execution on call of the function 'function_exists', because its argument '$function_name' content (exec) matched a rule in /var/www/wordpress/wp-content/plugins/webp-express/vendor/rosell-dk/exec-with-fallback/src/ExecWithFallback.php on line 35', referer: wp-admin/media-new.php

It’s caused by this rule:

sp.disable_function.function("function_exists").param("function_name").value("exec").drop();

You can add this rule before to allow this:

sp.disable_function.function("function_exists").filename("/var/www/wordpress/wp-admin/media-new.php").param("function_name").value("exec").allow();

More information

Snuffleupagus documentation

Snuffleupagus on Github

Julien Voisin blog archives

Web application firewall: Modsecurity and Core Rule Set

A web application firewall (WAF) filters HTTP traffic. By integrating this in your web server, you can make sure potentially dangerous requests are blocked before they arrive to your web application or sensitive data leaks out of your web server. This way you add an extra defensive layer potentially offering extra protection against zero-day vulnerabilities in your web server or web applications. In this blog post, I give a tutorial how to install and configure ModSecurity web application firewall and the Core Rule Set on Debian. With some minor adaptions you can also use this guide for setting up ModSecurity on Ubuntu or other distributions.

ModSecurity is the most well-known open source web application firewall. The future of ModSecurity does not look too bright, but fortunately Coraza WAF, an alternative which is fully compatible with ModSecurity, is in development. At this moment Coraza only integrates with the Caddy web server and does not have a connector for Apache or Nginx, so for that reason it is currently not yet usable as a replacement for ModSecurity.

While ModSecurity provides the framework for filtering HTTP traffic, you also need rules which define what to block, and that’s where the Core Rule Set (CRS) comes in. CRS is a set of generic rules which offer protection against a wide range of common attacks via HTTP, such as SQL injection, code injection and cross-site scripting (XSS) attacks.

Install ModSecurity and the Core Rule Set on Debian

I install the Apache module for ModSecurity, the geoip-database, which can be used for blocking all requests from certain countries, and modsecurity-crs, which contains the Core Rule Set. I take this package from testing, because it has a newer version (version 3.3.2 at the time of writing). There is no risk in taking this package from testing, because it only contains the rules and does not depend on any other packages from testing/unstable. If you prefer faster updates, you can also use unstable.

# apt install libapache2-mod-security2 geoip-database
# apt install -t testing modsecurity-crs

Configuring ModSecurity

In order to load the ModSecurity module in Apache, run this command:

# a2enmod security2

Then copy the example ModSecurity configuration file to /etc/modsecurity/modsecurity.conf:

# cp /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf

Now edit /etc/modsecurity/modsecurity.conf. I highlight some of the options:

SecRuleEngine on
SecRequestBodyLimit 536870912
SecRequestBodyNoFilesLimit 131072
SecAuditLog /var/log/apache2/modsec_audit.log
#SecRule MULTIPART_UNMATCHED_BOUNDARY "!@eq 0" \
#"id:'200004',phase:2,t:none,log,deny,msg:'Multipart parser detected a possible unmatched boundary.'"
SecPcreMatchLimit 500000
SecPcreMatchLimitRecursion 500000
SecStatusEngine Off

The SecRuleEngine option controls whether rules should be processed. If set to Off, you completely disable all rules; with On you enable them, and ModSecurity will block malicious requests. If set to DetectionOnly, ModSecurity will only log potential malicious activity flagged by your rules, but will not block it. DetectionOnly can be useful for temporarily trying out the rules in order to find false positives before you really start blocking potentially malicious activity.

The SecAuditLog option defines a file which contains audit logs. This file will contain detailed logs about every request triggering a ModSecurity rule.

The SecPcreMatchLimit and SecPcreMatchLimitRecursion options set the match limit and match limit recursion for the regular expression library PCRE. Setting these high enough will prevent errors about exceeded PCRE limits while analyzing data, but setting them too high can make ModSecurity vulnerable to a Denial of Service (DoS) attack. A Core Rule Set developer recommends raising these limits, so I set both to 500000 here.

I change SecRequestBodyLimit to a higher value to allow large file uploads.

I disable the rule 200004 because it is known to cause false positives.

Set SecStatusEngine to Off to prevent ModSecurity from sending version information back to its developers.

After changing any configuration related to ModSecurity or the Core Rule Set, reload your Apache web server:

# systemctl reload apache2

Configuring the Core Rule Set

The Core Rule Set can be configured via the file /etc/modsecurity/crs/crs-setup.conf.

Anomaly Scoring

By default the Core Rule Set is using anomaly scoring mode. This means that individual rules add to a so-called anomaly score, which is evaluated at the end. If the anomaly score exceeds a certain threshold, the traffic is blocked. You can read more about this configuration in crs-setup.conf, but the default configuration should be fine for most people.

Setting the paranoia level

The paranoia level is a number from 1 to 4 which determines which rules are active and contribute to the anomaly scoring. The higher the paranoia level, the more rules are activated and hence the more aggressive the Core Rule Set is, offering more protection but potentially also causing more false positives. By default the paranoia level is set to 1. If you work with sensitive data, it is recommended to increase the paranoia level.

The executing paranoia level defines rules which will be executed, but whose scores will not be added to the anomaly score. When HTTP traffic hits rules of the executing paranoia level, this traffic will only be logged, not blocked. It is especially useful to prepare for increasing the paranoia level: you can find the false positives of the higher level without causing any disruption for your users.

To set the paranoia level to 1 and the executing paranoia level to 2, make sure you have these rules set in crs-setup.conf:

SecAction \
  "id:900000,\
   phase:1,\
   nolog,\
   pass,\
   t:none,\
   setvar:tx.paranoia_level=1"
SecAction \
  "id:900001,\
   phase:1,\
   nolog,\
   pass,\
   t:none,\
   setvar:tx.executing_paranoia_level=2"

Once you have fixed all false positives, you can raise the paranoia level to 2 to increase security.
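Concretely, this means changing the setvar line of rule 900000 shown above:

SecAction \
  "id:900000,\
   phase:1,\
   nolog,\
   pass,\
   t:none,\
   setvar:tx.paranoia_level=2"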

Defining the allowed HTTP methods

By default the Core Rule Set only allows the GET, HEAD, POST and OPTIONS HTTP methods. For many standard sites this will be enough, but if your web applications also use RESTful APIs or WebDAV, then you will need to add the required methods. Change rule 900200, and add the HTTP methods mentioned in the comments in crs-setup.conf.

SecAction \
 "id:900200,\
  phase:1,\
  nolog,\
  pass,\
  t:none,\
  setvar:'tx.allowed_methods=GET HEAD POST OPTIONS'"
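For example, if your web application exposes a RESTful API that also uses the PUT, PATCH and DELETE methods (adapt the list to what your application really needs), the rule could become:

SecAction \
 "id:900200,\
  phase:1,\
  nolog,\
  pass,\
  t:none,\
  setvar:'tx.allowed_methods=GET HEAD POST OPTIONS PUT PATCH DELETE'"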

Disallowing old HTTP versions

There is a rule which determines which HTTP versions you allow in HTTP requests. I uncomment it and modify it to only allow HTTP versions 1.1 and 2.0. Legitimate browsers and bots always use one of these modern HTTP versions and older versions usually are a sign of malicious activity.

SecAction \
 "id:900230,\
  phase:1,\
  nolog,\
  pass,\
  t:none,\
  setvar:'tx.allowed_http_versions=HTTP/1.1 HTTP/2 HTTP/2.0'"

Blocking specific countries

Personally I’m not a fan of completely blocking all traffic from a whole country, because you will also block legitimate visitors to your site, but in case you want to do this, you can configure it in crs-setup.conf:

SecGeoLookupDB /usr/share/GeoIP/GeoIP.dat
SecAction \
 "id:900600,\
  phase:1,\
  nolog,\
  pass,\
  t:none,\
  setvar:'tx.high_risk_country_codes='"

Add the two-letter country codes you want to block to the last line (before the closing quotes), separating multiple country codes with a space.
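For example, to block traffic from two countries with the placeholder ISO codes XX and YY, the rule becomes:

SecGeoLookupDB /usr/share/GeoIP/GeoIP.dat
SecAction \
 "id:900600,\
  phase:1,\
  nolog,\
  pass,\
  t:none,\
  setvar:'tx.high_risk_country_codes=XX YY'"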

Make sure you have the package geoip-database installed.

Core Rule Set Exclusion rules for well-known web applications

The Core Rule Set contains rule exclusions for some well-known web applications like WordPress, Drupal and Nextcloud, which reduce the number of false positives. I add the following section to crs-setup.conf, which allows me to enable the exclusions by setting the WEBAPPID variable in the Apache configuration whenever I need them.

SecRule WEBAPPID '@beginsWith wordpress' 'id:20000,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_wordpress=1'
SecRule WEBAPPID '@beginsWith drupal' 'id:20001,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_drupal=1'
SecRule WEBAPPID '@beginsWith dokuwiki' 'id:20002,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_dokuwiki=1'
SecRule WEBAPPID '@beginsWith nextcloud' 'id:20003,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_nextcloud=1'
SecRule WEBAPPID '@beginsWith cpanel' 'id:20004,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_cpanel=1'
SecRule WEBAPPID '@beginsWith xenforo' 'id:20005,phase:1,nolog,pass,t:none,setvar:tx.crs_exclusions_xenforo=1'

Adding rules for Log4Shell and Spring4Shell detection

At the end of 2021 a critical vulnerability, CVE-2021-44228, named Log4Shell, was discovered in Log4j, which allows remote attackers to run code on a server with a vulnerable Log4j version. While the Core Rule Set offered some mitigation of this vulnerability out of the box, this protection was not complete. New, improved detection rules against Log4Shell were developed. Because of the severity of this bug and the fact that it is being exploited in the wild, I strongly recommend adding this protection manually when using Core Rule Set version 3.3.2 (or older). Newer, not yet released versions should have complete protection out of the box.

First modify /etc/apache2/mods-enabled/security2.conf so that it looks like this:

<IfModule security2_module>
        # Default Debian dir for modsecurity's persistent data
        SecDataDir /var/cache/modsecurity

        # Include all the *.conf files in /etc/modsecurity.
        # Keeping your local configuration in that directory
        # will allow for an easy upgrade of THIS file and
        # make your life easier
        IncludeOptional /etc/modsecurity/*.conf

        # Include OWASP ModSecurity CRS rules if installed
        IncludeOptional /usr/share/modsecurity-crs/*.load
        SecRuleUpdateTargetById 932130 "REQUEST_HEADERS"
</IfModule>

Then create the file /etc/modsecurity/99-CVE-2021-44228.conf with this content:

# Generic rule against CVE-2021-44228 (Log4j / Log4Shell)
# See https://coreruleset.org/20211213/crs-and-log4j-log4shell-cve-2021-44228/
SecRule REQUEST_LINE|ARGS|ARGS_NAMES|REQUEST_COOKIES|REQUEST_COOKIES_NAMES|REQUEST_HEADERS|XML://*|XML://@* "@rx (?:\${[^}]{0,4}\${|\${(?:jndi|ctx))" \
    "id:1005,\
    phase:2,\
    block,\
    t:none,t:urlDecodeUni,t:cmdline,\
    log,\
    msg:'Potential Remote Command Execution: Log4j CVE-2021-44228', \
    tag:'application-multi',\
    tag:'language-java',\
    tag:'platform-multi',\
    tag:'attack-rce',\
    tag:'OWASP_CRS',\
    tag:'capec/1000/152/137/6',\
    tag:'PCI/6.5.2',\
    tag:'paranoia-level/1',\
    ver:'OWASP_CRS/3.4.0-dev',\
    severity:'CRITICAL',\
    setvar:'tx.rce_score=+%{tx.critical_anomaly_score}',\
    setvar:'tx.anomaly_score_pl1=+%{tx.critical_anomaly_score}'"

In March 2022 CVE-2022-22963, another remote code execution (RCE) vulnerability in the Spring framework, was published. The Core Rule Set developers created a new rule to protect against this vulnerability, which will be included in the next version, but the rule can be added manually if you are running Core Rule Set version 3.3.2 or older.

To do so, create the file /etc/modsecurity/99-CVE-2022-22963.conf with this content:

# This rule is also triggered by the following exploit(s):
# - https://www.rapid7.com/blog/post/2022/03/30/spring4shell-zero-day-vulnerability-in-spring-framework/
# - https://www.ironcastle.net/possible-new-java-spring-framework-vulnerability-wed-mar-30th/
#
SecRule ARGS|ARGS_NAMES|REQUEST_COOKIES|!REQUEST_COOKIES:/__utm/|REQUEST_COOKIES_NAMES|REQUEST_BODY|REQUEST_HEADERS|XML:/*|XML://@* \
    "@rx (?:class\.module\.classLoader\.resources\.context\.parent\.pipeline|springframework\.context\.support\.FileSystemXmlApplicationContext)" \
    "id:1006,\
    phase:2,\
    block,\
    t:urlDecodeUni,\
    msg:'Remote Command Execution: Malicious class-loading payload',\
    logdata:'Matched Data: %{MATCHED_VAR} found within %{MATCHED_VAR_NAME}',\
    tag:'application-multi',\
    tag:'language-java',\
    tag:'platform-multi',\
    tag:'attack-rce',\
    tag:'OWASP_CRS',\
    tag:'capec/1000/152/248',\
    tag:'PCI/6.5.2',\
    tag:'paranoia-level/2',\
    ver:'OWASP_CRS/3.4.0-dev',\
    severity:'CRITICAL',\
    setvar:'tx.rce_score=+%{tx.critical_anomaly_score}',\
    setvar:'tx.anomaly_score_pl2=+%{tx.critical_anomaly_score}'"

Don’t forget to reload your Apache configuration after adding these rules.

Testing ModSecurity and checking the logs

We can now easily test ModSecurity by doing a request which tries to abuse a cross-site scripting (XSS) vulnerability:

$ curl -I "https://example.org/?search=<script>alert('CRS+Sandbox+Release')</script>"

This should return HTTP response 403 (Forbidden).

Whenever something hits your ModSecurity rules, this will be logged in your Apache error log. The above request has created these messages in the error log:

[Sat Apr 09 22:22:02.716558 2022] [:error] [pid 847584:tid 140613499016960] [client client-ip:49688] [client client-ip] ModSecurity: Warning. detected XSS using libinjection. [file "/usr/share/modsecurity-crs/rules/REQUEST-941-APPLICATION-ATTACK-XSS.conf"] [line "55"] [id "941100"] [msg "XSS Attack Detected via libinjection"] [data "Matched Data: XSS data found within ARGS:search: <script>alert('CRS Sandbox Release')</script>"] [severity "CRITICAL"] [ver "OWASP_CRS/3.3.2"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-xss"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "capec/1000/152/242"] [hostname "example.org"] [uri "/"] [unique_id "YlHq6gKxO9SgyEd0xH9N5gADLgA"]
[Sat Apr 09 22:22:02.716969 2022] [:error] [pid 847584:tid 140613499016960] [client client-ip:49688] [client client-ip] ModSecurity: Warning. Pattern match "(?i)<script[^>]*>[\\\\s\\\\S]*?" at ARGS:search. [file "/usr/share/modsecurity-crs/rules/REQUEST-941-APPLICATION-ATTACK-XSS.conf"] [line "82"] [id "941110"] [msg "XSS Filter - Category 1: Script Tag Vector"] [data "Matched Data: <script> found within ARGS:search: <script>alert('CRS Sandbox Release')</script>"] [severity "CRITICAL"] [ver "OWASP_CRS/3.3.2"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-xss"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "capec/1000/152/242"] [hostname "example.org"] [uri "/"] [unique_id "YlHq6gKxO9SgyEd0xH9N5gADLgA"]
[Sat Apr 09 22:22:02.717249 2022] [:error] [pid 847584:tid 140613499016960] [client client-ip:49688] [client client-ip] ModSecurity: Warning. Pattern match "(?i:(?:<\\\\w[\\\\s\\\\S]*[\\\\s\\\\/]|['\\"](?:[\\\\s\\\\S]*[\\\\s\\\\/])?)(?:on(?:d(?:e(?:vice(?:(?:orienta|mo)tion|proximity|found|light)|livery(?:success|error)|activate)|r(?:ag(?:e(?:n(?:ter|d)|xit)|(?:gestur|leav)e|start|drop|over)|op)|i(?:s(?:c(?:hargingtimechange ..." at ARGS:search. [file "/usr/share/modsecurity-crs/rules/REQUEST-941-APPLICATION-ATTACK-XSS.conf"] [line "199"] [id "941160"] [msg "NoScript XSS InjectionChecker: HTML Injection"] [data "Matched Data: <script found within ARGS:search: <script>alert('CRS Sandbox Release')</script>"] [severity "CRITICAL"] [ver "OWASP_CRS/3.3.2"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-xss"] [tag "paranoia-level/1"] [tag "OWASP_CRS"] [tag "capec/1000/152/242"] [hostname "example.org"] [uri "/"] [unique_id "YlHq6gKxO9SgyEd0xH9N5gADLgA"]
[Sat Apr 09 22:22:02.718018 2022] [:error] [pid 847584:tid 140613499016960] [client client-ip:49688] [client client-ip] ModSecurity: Access denied with code 403 (phase 2). Operator GE matched 5 at TX:anomaly_score. [file "/usr/share/modsecurity-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "93"] [id "949110"] [msg "Inbound Anomaly Score Exceeded (Total Score: 15)"] [severity "CRITICAL"] [ver "OWASP_CRS/3.3.2"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [hostname "example.org"] [uri "/"] [unique_id "YlHq6gKxO9SgyEd0xH9N5gADLgA"]
[Sat Apr 09 22:22:02.718596 2022] [:error] [pid 847584:tid 140613499016960] [client client-ip:49688] [client client-ip] ModSecurity: Warning. Operator GE matched 5 at TX:inbound_anomaly_score. [file "/usr/share/modsecurity-crs/rules/RESPONSE-980-CORRELATION.conf"] [line "91"] [id "980130"] [msg "Inbound Anomaly Score Exceeded (Total Inbound Score: 15 - SQLI=0,XSS=15,RFI=0,LFI=0,RCE=0,PHPI=0,HTTP=0,SESS=0): individual paranoia level scores: 15, 0, 0, 0"] [ver "OWASP_CRS/3.3.2"] [tag "event-correlation"] [hostname "example.org"] [uri "/"] [unique_id "YlHq6gKxO9SgyEd0xH9N5gADLgA"]

In the first 3 lines we see that we hit different filters which check for XSS vulnerabilities, more specifically rules 941100, 941110 and 941160, all of them having the tag paranoia-level/1.

Then the fourth line shows that we hit rule 949110, which caused the web server to return the HTTP 403 Forbidden response because the inbound anomaly score, 15, is higher than 5. Then rule 980130 gives us some more information about the scoring: we hit a score of 15 at paranoia level 1, while rules at the other paranoia levels contributed 0 to the total score. We also see the scores for the individual types of attack: in this case all 15 points were scored by rules detecting XSS attacks. This is the meaning of the different abbreviations used:

  • SQLI: SQL injection
  • XSS: cross-site scripting
  • RFI: remote file inclusion
  • LFI: local file inclusion
  • RCE: remote code execution
  • PHPI: PHP injection
  • HTTP: HTTP violation
  • SESS: session fixation

More detailed logs about the traffic hitting the rules can be found in the file /var/log/apache2/modsec_audit.log.

Fixing false positives

First of all, in order to minimize the number of false positives, you should set the WEBAPPID variable if you are using one of the known web applications for which the Core Rule Set has a default exclusion set. These web applications are currently WordPress, Drupal, Dokuwiki, Nextcloud, Xenforo and cPanel. You can do so by using the SecWebAppId option in a VirtualHost or Location definition in the Apache configuration. For example, if you have a VirtualHost which is used by Nextcloud, set this within the VirtualHost definition:

<Virtualhost nextcloud.example.org>
    ...OTHER OPTIONS HERE...
    <IfModule security2_module>
        SecWebAppId "nextcloud"
    </IfModule>
</VirtualHost>

If you have a WordPress installation in a subdirectory, then add SecWebAppId within Location tags.

<Location /wordpress>
    <IfModule security2_module>
        SecWebAppId "wordpress-mysite"
    </IfModule>
</Location>

If you have multiple WordPress sites, give each of them a unique WEBAPPID whose name starts with wordpress. Add a different suffix for every instance, so that each one runs in its own application namespace in ModSecurity.

If you still encounter false positives, you can completely disable rules by using the configuration directive SecRuleRemoveById. I strongly recommend not disabling rules globally, but limiting the removal to the specific locations where it is needed, for example by putting it within <Location> or <LocationMatch> tags in the Apache configuration. For example:

<LocationMatch ^/wp-admin/(admin-ajax|post)\.php>
    <IfModule security2_module>
        SecRuleRemoveById 941160 941100 941130 932105 932130 932100
    </IfModule>
</LocationMatch>

Pay attention not to disable any of the 949*, 959*, and 980* rules: disabling the 949* and 959* rules would disable all the blocking rules, while disabling the 980* rules would give you less information about what is happening in the logs.

Conclusion

ModSecurity and the Core Rule Set offer an additional security layer for web servers in your defense in depth strategy. I strongly recommend implementing this on your servers because it makes it harder to abuse security vulnerabilities.

Keep an eye on the Core Rule Set blog and Twitter account: sometimes they post new rules for specific new critical vulnerabilities, which can be worthwhile to add to your configuration.

Linux security hardening recommendations

In a previous blog post, I wrote how to secure OpenSSH against brute force attacks. But what if someone manages to get a shell on your system despite all your efforts? And how do you protect your system against your own users doing nasty things? Following the principle of defense in depth, it is important to harden your system further.

Software updates

Make sure you are running a supported distribution, preferably the most recent version. For example, Debian Jessie is still supported, but upgrading to Debian Stretch is strongly recommended, because it offers various security improvements (a more recent kernel with new security hardening, PHP 7 with new security related features, etc.).

Install amd64-microcode (for AMD CPUs) or intel-microcode (for Intel CPUs), which are needed to protect against hardware vulnerabilities such as Spectre, Meltdown and L1TF. I recommend installing it from stretch-backports in order to have the latest microcode.
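For example, on a system with an Intel CPU:

# apt-get install -t stretch-backports intel-microcode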

Automatic updates and needrestart

I recommend installing unattended-upgrades. You can configure it to just download updates, or to download and install them automatically. By default, unattended-upgrades will only install updates from the official security repositories, so it is relatively safe to let it do this automatically. If you have already installed it, you can run this command to reconfigure it:

# dpkg-reconfigure unattended-upgrades

When you update system libraries, you should also restart all daemons which are using these libraries, so that they use the newly installed version. This is exactly what needrestart does: after you have run apt-get, it checks whether any daemons are running with older libraries and proposes to restart them. If you use it with unattended-upgrades, you should set this option in /etc/needrestart/needrestart.conf to make sure that all services which require a restart are indeed restarted:

$nrconf{restart} = 'a';

Up-to-date kernel

Running an up-to-date kernel is very important, because the kernel itself can also be vulnerable. In the worst case, an outdated kernel can be exploited to gain root permissions. Do not forget to reboot after updating the kernel.

Every new kernel version also contains various extra security hardening measures. Kernel developer Kees Cook has an overview of security related changes in the kernel.

In case you build your own kernel, you can use kconfig-hardened-check to get recommendations for a hardened kernel configuration.

Firewall: filtering outgoing traffic

Installing a firewall which filters incoming traffic is obvious. However, have you considered also filtering outgoing traffic? This is a bit more difficult to set up, because you need to whitelist all hosts to which outgoing connections are needed (think of your distribution’s repositories, NTP servers, DNS servers, …), but it is a very effective measure which helps limit the damage in case a user account gets compromised, despite all your other protective efforts.
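As a minimal iptables sketch of the idea (the DNS server address is a placeholder, and a real setup needs a rule for every outgoing service you actually use):

# iptables -A OUTPUT -o lo -j ACCEPT
# iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# iptables -A OUTPUT -p udp --dport 53 -d 192.0.2.53 -j ACCEPT
# iptables -A OUTPUT -p udp --dport 123 -j ACCEPT
# iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -d deb.debian.org -j ACCEPT
# iptables -P OUTPUT DROP

The first two rules allow loopback traffic and replies to incoming connections, the next three whitelist DNS, NTP and the Debian mirror, and the last command sets the default policy to drop all other outgoing traffic.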

Ensuring strong passwords

Prevent your users from setting bad passwords by installing libpam-pwquality, together with some word lists for your language and a few common languages. These will be used to verify that the user is not using a common word as their password. libpam-pwquality is enabled automatically after installation with some default settings.

# apt-get install libpam-pwquality wbritish wamerican wfrench wngerman wdutch

Please note that by default, libpam-pwquality will only enforce strong passwords when a non-root user changes their password. If root is setting a password, it will give a warning when a weak password is set, but will still allow it. If you want to enforce it for root too (which I recommend), then add enforce_for_root to the pam_pwquality line in /etc/pam.d/common-password:

password	requisite			pam_pwquality.so retry=3 enforce_for_root

Automatically log out inactive users

In order to log out inactive users, set a timeout of 600 seconds on the Bash shell. Create /etc/profile.d/tmout.sh:

export TMOUT=600
readonly TMOUT

Prevent creating cron jobs

Make sure users cannot set up cron jobs. When an attacker gets a shell on your system, cron is often used to ensure the malware keeps running after a reboot. To prevent normal users from setting up cron jobs, create an empty /etc/cron.allow.
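Creating the empty file and making sure only root can modify it is enough:

# touch /etc/cron.allow
# chmod 644 /etc/cron.allow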

Protect against fork bombs and excessive logins and CPU usage

Create a new file in /etc/security/limits.d to impose some limits on user sessions. I strongly recommend setting a value for nproc, in order to prevent fork bombs. maxlogins is the maximum number of logins per user, and cpu is used to set a limit on the CPU time a user can use (in minutes):

*	hard	nproc		1024
*	hard	maxlogins 	4
1000:	hard	cpu		180

Hiding processes from other users

By mounting the /proc filesystem with the hidepid=2 option, users cannot see the PIDs of processes by other users in /proc, and hence these processes also become invisible when using tools like top and ps. Put this in /etc/fstab to mount /proc by default with this option:

none	/proc	proc	defaults,hidepid=2	0	0

Restricting /proc/kallsyms

It is possible to restrict access to /proc/kallsyms at boot time by setting 400 permissions on it. Put this in /etc/rc.local:

chmod 400 /proc/kallsyms

/proc/kallsyms contains information about how the kernel’s memory is laid out. With this information it becomes easier to attack the kernel itself, so hiding this information is always a good idea. It should be noted though that attackers can get this information from other sources too, such as from the System.map files in /boot.

Harden kernel configuration with sysctl

Several kernel settings can be set at run time using sysctl. To make these settings permanent, put them in files with the .conf extension in /etc/sysctl.d.
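For example, you could collect the settings discussed below in a single file (the file name is just an example) and load them without rebooting:

# editor /etc/sysctl.d/99-hardening.conf
# sysctl --system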

It is possible to hide the kernel messages (which can be read with the dmesg command) from other users than root by setting the sysctl kernel.dmesg_restrict to 1. On Debian Stretch and later this should already be the default value:

kernel.dmesg_restrict = 1

From Linux kernel version 4.19 on it’s possible to disallow opening FIFOs or regular files not owned by the user in world-writable sticky directories. This setting would have prevented vulnerabilities found in various user space programs over the last couple of years. This protection is activated automatically if you use systemd version 241 or higher with Linux 4.19 or higher. If your kernel supports this feature but you are not using systemd 241, you can activate it yourself by setting the right sysctls:

fs.protected_regular = 1
fs.protected_fifos = 1

Also check whether the following sysctls have the right value in order to enable protection of hard links and symlinks. These work with Linux 3.6 and higher, and will likely already be enabled by default on your system:

fs.protected_hardlinks = 1
fs.protected_symlinks = 1

On Debian Stretch, by default only root can access perf events:

kernel.perf_event_paranoid = 3

Show kernel pointers in /proc as null for non-root users:

kernel.kptr_restrict = 1

Disable the kexec system call, which allows you to boot a different kernel without going through the BIOS and boot loader:

kernel.kexec_load_disabled = 1

Allow ptrace access (used by gdb and strace) for non-root users only to child processes. For example strace ls will still work, but strace -p 8659 will not work as non-root user:

kernel.yama.ptrace_scope = 1

The Linux kernel includes eBPF, the extended Berkeley Packet Filter, which is a VM in which unprivileged users can load and run certain code in the kernel. If you are sure no users need to call bpf(), it can be disabled for non-root users:

kernel.unprivileged_bpf_disabled = 1

In case the BPF Just-In-Time compiler is enabled (it is disabled by default, see the sysctl net.core.bpf_jit_enable), it is possible to enable some extra hardening against certain vulnerabilities:

net.core.bpf_jit_harden = 2

Take a look at the Kernel Self Protection Project Recommended settings page to find an up to date list of recommended settings.

Lynis

Finally I want to mention Lynis, a security auditing tool. It will check the configuration of your system, and make recommendations for further security hardening.
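Lynis is packaged in Debian, so installing it and running an audit is simple:

# apt-get install lynis
# lynis audit system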


Securing OpenSSH

Security hardening the OpenSSH server is one of the first things that should be done on any newly installed system. Brute force attacks on the SSH daemon are very common and unfortunately I see this going wrong all too often. That’s why I think it’s useful to give a recapitulation here of some best practices, even though this should be basic knowledge for any system administrator.

Firewall

The first thing to think about: should the SSH server be accessible from the whole world, or can we limit it to certain IP addresses or subnets? This is the simplest and most effective form of protection: if your SSH daemon is only accessible from specific IP addresses, then there is no risk any more from attacks coming from elsewhere.

I prefer to use Shorewall as a firewall, as it’s easy to configure. Be sure to also configure shorewall6 if you have an IPv6 address.

However as defense in depth is an essential security practice, you should not stop here even if you protected your SSH daemon with a firewall. Maybe your firewall one day fails to come up at boot automatically, leaving your system unprotected. Or maybe one day the danger comes from within your own network. That’s why in any case you need to carefully review the next recommendations too.

SSHd configuration

Essential security settings

The SSH server configuration can be found in the file /etc/ssh/sshd_config. We review some essential settings:

  • PermitRootLogin: I strongly recommend setting this to No. This way, you always log in to your system with a normal user account. If you need root access, use su or sudo. The root account is then protected from brute force attacks. And you can always easily find out who used the root account.
  • PasswordAuthentication: This setting really should be No. You will first need to add your SSH public key to ~/.ssh/authorized_keys. Disabling password authentication is the most effective protection against brute force attacks.
  • X11Forwarding: set this to No, except if your users need to be able to run X11 (graphical) applications remotely.
  • AllowTcpForwarding: I strongly recommend setting this to No. If this is allowed, any user who can ssh into your system can establish connections from the client to any other system, using your host as a proxy. This is the case even if your users can only use SFTP to transfer files. I have seen this being abused in the past to connect to the local MTA and send spam via the host this way.
  • PermitOpen: this allows you to set the hosts to which TCP forwarding is allowed. If you do allow TCP forwarding, use this to limit the hosts to which connections can be forwarded.
  • ClientAliveInterval, ClientAliveCountMax: These values will determine when a connection will be interrupted when it’s unresponsive (for example in case of network problems). I set ClientAliveInterval to 600 and ClientAliveCountMax to 0. Note that this does not drop the connection when the user is simply inactive. If you want to set a timeout for that, you can set the TMOUT environment variable in Bash.
  • MaxAuthTries: the maximum number of authentication attempts permitted per connection. Set this to 3.
  • AllowUsers: only the users in this space separated list are allowed to log in. I strongly recommend using this (or AllowGroups) to whitelist users that can log in by SSH. It protects against possible disasters when a test user or a system user with a weak password is created.
  • AllowGroups: only the users from the groups in this space separated list are allowed to log in.
  • DenyUsers: users in this space separated list are not allowed to log in.
  • DenyGroups: users from the groups in this space separated list are not allowed to log in.
  • These values should already be fine by default, but I recommend verifying them: PermitEmptyPasswords (no), UsePrivilegeSeparation (sandbox), UsePAM (yes), PermitUserEnvironment (no), StrictModes (yes), IgnoreRhosts (yes)

So definitely disable PasswordAuthentication, TCP forwarding and X11 forwarding by default, and use AllowUsers or AllowGroups to whitelist who is allowed to log in by SSH. The example below puts these settings together.
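Putting these recommendations together, the relevant part of /etc/ssh/sshd_config could look like this (the user names in AllowUsers are placeholders):

PermitRootLogin no
PasswordAuthentication no
X11Forwarding no
AllowTcpForwarding no
ClientAliveInterval 600
ClientAliveCountMax 0
MaxAuthTries 3
AllowUsers alice bob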

Match conditional blocks

With Match conditional blocks you can modify some of the default settings for certain users, groups or IP addresses. I give a few examples to illustrate the usage of Match blocks.

To allow TCP forwarding to a specific host for one user:

Match User username
        AllowTcpForwarding yes
        PermitOpen 192.168.0.120:8080

To allow PasswordAuthentication for a trusted IP address (make sure the user has a strong password, even if you trust the host!):

Match Address 192.168.0.20
        PasswordAuthentication yes

The Address can also be a subnet in CIDR notation, such as 192.168.0.0/24.

To only allow SFTP access for a group of users, disabling TCP, X11 and streamlocal forwarding:

Match group sftponly
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no
        AllowStreamLocalForwarding no

chroot

You can chroot users to a certain directory, so that they cannot see or access anything on the file system outside that directory. This is a great way to protect your system when users only need SFTP access to a certain location. For this to work, the user’s home directory needs to be owned by root:root, which means users cannot write directly in their home directory. You can create a subdirectory within the user’s home directory with the appropriate ownership and permissions where the user can write. Then you can use a Match block to apply this configuration to certain users or groups:

Match Group chrootsftp
        ChrootDirectory %h
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no
        AllowStreamLocalForwarding no

If you use authentication with keys, you will have to set a custom location for the authorized_keys file:

AuthorizedKeysFile /etc/ssh/authorized_keys/%u .ssh/authorized_keys

Then the key for every user has to be installed in a file /etc/ssh/authorized_keys/username.
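For example, to install the key of a hypothetical user john (the paths and user name are just examples):

# mkdir -p /etc/ssh/authorized_keys
# cp /home/john/.ssh/id_ed25519.pub /etc/ssh/authorized_keys/john
# chown root:root /etc/ssh/authorized_keys/john
# chmod 644 /etc/ssh/authorized_keys/john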

Fail2ban

Fail2ban is a utility which monitors your log files for failed logins and blocks IPs when too many failed login attempts are made within a specified time. It can watch not only for failed login attempts on the SSH daemon, but also on other services, like mail (IMAP, SMTP, etc.) services, Apache and others. It is a useful protection against brute force attacks. However, versions of Fail2ban before 0.10.0 only support IPv4, and so don’t offer any protection against attacks from IPv6 addresses. Furthermore, attackers often slow down their brute force attacks so that they don’t trigger the Fail2ban thresholds. And then there are distributed attacks: by using many different source IPs, Fail2ban will never be triggered. For these reasons, you should not rely on Fail2ban alone to protect against brute force attacks.

If you want to use Fail2ban on Debian Stretch, I strongly recommend using the one from Debian-backports, because this version has IPv6 support.

# apt-get install -t stretch-backports fail2ban python3-pyinotify python3-systemd

I install python3-systemd in order to read the log messages from systemd’s journal, while python3-pyinotify is needed to efficiently watch log files.

First we will increase the value for dbpurgeage which is set to 1 day in /etc/fail2ban/fail2ban.conf. We can do this by creating the file /etc/fail2ban/fail2ban.d/local.conf:

[Definition]
dbpurgeage = 10d

This lets us ban an IP for a much longer time than 1 day.

Then the services to protect, the thresholds and the action to take when these are exceeded are defined in /etc/fail2ban/jail.conf. By default all jails, except the sshd jail, are disabled and you have to enable the ones you want to use. This can be done by creating a file /etc/fail2ban/jail.d/local.conf:

[DEFAULT]
banaction = iptables-multiport
banaction_allports = iptables-allports
destemail = email@example.com
sender = root@example.com

[sshd]
mode = aggressive
enabled = true
backend = systemd

[sshd-slow]
filter   = sshd[mode=aggressive]
maxretry = 10
findtime = 3h
bantime  = 8h
backend = systemd
enabled = true

[recidive]
enabled=true
maxretry = 3
action = %(action_mwl)s

First we override some default settings valid for all jails. We configure Fail2ban to use iptables to block banned hosts. If you use Shorewall as your firewall, set banaction and banaction_allports to shorewall in order to use Shorewall’s blacklist functionality; in that case, read the instructions in /etc/fail2ban/action.d/shorewall.conf to configure Shorewall to also block established blacklisted connections. Other commonly used values for banaction and banaction_allports are ufw and firewallcmd-ipset, if you use UFW or Firewalld respectively. We also define the sender address and the destination address to which an e-mail should be sent when a host is banned.

Then we set up 3 jails. The sshd and recidive jails are already defined in /etc/fail2ban/jail.conf and we enable them here. The sshd jail gives a 10 minute ban to IPs which make 5 unsuccessful login attempts on the SSH server in a time span of 10 minutes. The recidive jail gives a one week ban to IPs which get banned 3 times by another Fail2ban jail in a time span of 1 day. Furthermore I define another jail, sshd-slow, which gives an 8 hour ban to IPs making 10 failed attempts on the SSH server in a time span of 3 hours. This catches many attempts which try to evade the default Fail2ban settings by slowing down their brute force attack. In both the sshd and sshd-slow jails I use the aggressive mode, which catches more errors, such as probes without a login attempt and attempts with wrong (outdated) ciphers. See /etc/fail2ban/filter.d/sshd.conf for the complete list of log messages it searches for. The recidive jail will send a mail to the defined address when a host gets banned; I enable this only for recidive in order not to receive too many e-mails.

Two-factor authentication

It is possible to enable two-factor authentication (2FA) using the libpam-google-authenticator package. Then you can use an application like FreeOTP+ (Android), AndOTP (Android), Authenticator (iOS), KeepassXC (Linux) to generate the time based token you need to log in.

First install the required PAM module on your SSH server:

# apt-get install libpam-google-authenticator

Then edit the file /etc/ssh/sshd_config:

ChallengeResponseAuthentication yes
AuthenticationMethods publickey keyboard-interactive:pam

You can also put this in a Match block to only enable this for certain users or groups.
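For example, instead of setting AuthenticationMethods globally, you could require the time-based token only for members of a hypothetical group twofactor:

Match Group twofactor
        AuthenticationMethods publickey keyboard-interactive:pam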

This will allow you to log in either with key based authentication, or with your password plus your time-based token.

Now you need to set up a new secret for the user account you want to use OTP authentication with, using the google-authenticator command. Run it as that user. Choose time-based authentication tokens, disallow multiple uses of the same authentication token, don’t increase the time window to 4 minutes, and enable rate-limiting.

$ google-authenticator
Do you want authentication tokens to be time-based (y/n) y
                                                          
Do you want me to update your "/home/username/.google_authenticator" file (y/n) y

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds. In order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with
poor time synchronization, you can increase the window from its default
size of +-1min (window size of 3) to about +-4min (window size of
17 acceptable tokens).
Do you want to do so? (y/n) n

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

Now enter the code given by this command in your OTP client or scan the QR code.

Edit the file /etc/pam.d/sshd and add this line:

auth required pam_google_authenticator.so noskewadj

You need to make sure to add this line before the line

@include common-auth

Otherwise an attacker can still brute force the password, and then abuse it on other services. That is because of the auth requisite pam_deny.so line in common-auth: this will immediately return a failure message when the password is wrong. The time-based token would only be asked when the password is correct.

The noskewadj option increases security by disabling the option to automatically detect and adjust time skew between client and server.

Now restart the sshd service, and in another shell, try the OTP authentication. Don’t close your existing SSH connection yet, because otherwise you might lock yourself out if something went wrong.

The biggest disadvantage of pam_google_authenticator is that it allows every individual user to set values for the window size, rate limiting, whether to use HOTP or TOTP, etc. By modifying some of these, the user can reduce the security of the one-time password. For this reason, I recommend only enabling this for users you trust.

Enabling jumbo frames on your network

Jumbo frames are Ethernet frames with up to 9000 bytes of payload, in contrast to normal frames which have up to 1500 bytes of payload. They are useful on fast (Gigabit Ethernet and faster) networks, because they reduce the overhead. Not only will this result in higher throughput, it will also reduce CPU usage.

To use jumbo frames, your whole network needs to support them. That means that your switch needs to support jumbo frames (it might need to be enabled by hand), and all connected hosts need to support jumbo frames too. Jumbo frames should also only be used on reliable networks, as the higher payload makes it more costly to resend a frame if packets get lost.

So first you need to make sure your switch has jumbo frame support enabled. I’m using an HP ProCurve switch, and for this type of switch the procedure looks like this:

$ netcat 10.141.253.1 telnet
Username:
Password:
ProCurve Switch 2824# config
ProCurve Switch 2824(config)# show vlans
ProCurve Switch 2824(config)# vlan $VLAN_ID jumbo
ProCurve Switch 2824(config)# show vlans
ProCurve Switch 2824(config)# write memory

Now that your switch is configured properly, you can configure the hosts.

For hosts which have a static IP configured in /etc/network/interfaces you need to add the line

    mtu 9000

to the iface stanza of the interface on which you want to enable jumbo frames. This does not work for interfaces getting an IP via DHCP, because they will use the MTU value sent by the DHCP server.
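A complete stanza in /etc/network/interfaces could then look like this (the interface name and addresses are just examples):

auto eno1
iface eno1 inet static
    address 10.141.253.10
    netmask 255.255.255.0
    gateway 10.141.253.1
    mtu 9000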

To enable jumbo frames via DHCP, edit the /etc/dhcp/dhcpd.conf file on the DHCP server, and add this to the subnet stanza:

option interface-mtu 9000;

Now bring the network interface offline and online, and jumbo frames should be enabled. You can verify with the command

# ip addr show

which will show the mtu values for all network interfaces.

FS-CACHE for NFS clients

FS-CACHE is a system which caches files from remote network mounts on the local disk. It is very easy to set up and improves performance on NFS clients.

I strongly recommend a recent kernel if you want to use FS-CACHE though. I tried this with the 4.9 based Debian Stretch kernel a year ago, and this resulted in a kernel oops from time to time, so I had to disable it again. I’m currently using it again with a 4.19 based kernel, and I did not encounter any stability issues up to now.

First of all, you will need a dedicated file system on which to store the cache. I prefer to use XFS, because it has nice performance and stability. Mount the file system on /var/cache/fscache.
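
For example, assuming the cache goes on a dedicated partition /dev/sdb1 (a placeholder, use your own device or logical volume):

# mkfs.xfs /dev/sdb1
# mkdir -p /var/cache/fscache
# mount /dev/sdb1 /var/cache/fscache

Also add a corresponding entry to /etc/fstab so the cache file system is mounted at boot.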

Then install the cachefilesd package and edit the file /etc/default/cachefilesd so that it contains:

RUN=yes

Then edit the file /etc/cachefilesd.conf. It should look like this:

dir /var/cache/fscache
tag mycache
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%

These numbers define when cache culling (making space in the cache by discarding less recently used files) happens: when the amount of available disk space or the amount of available files drops below 7%, culling will start. Culling will stop when 10% is available again. If the available disk space or available amount of files drops below 3%, no further cache allocation is done until more than 3% is available again. See also man cachefilesd.conf.

Start cachefilesd by running

# systemctl start cachefilesd

If it fails to start with these messages in the logs:

cachefilesd[1724]: About to bind cache
kernel: CacheFiles: Security denies permission to nominate security context: error -2
cachefilesd[1724]: CacheFiles bind failed: errno 2 (No such file or directory)
cachefilesd[1724]: Starting FilesCache daemon : cachefilesd failed!
systemd[1]: cachefilesd.service: Control process exited, code=exited status=1
systemd[1]: cachefilesd.service: Failed with result 'exit-code'.
systemd[1]: Failed to start LSB: CacheFiles daemon.

then you are hitting this bug. This happens when you are using a kernel with AppArmor enabled (like Debian’s kernel from testing). You can work around it by commenting out the line defining the security context in /etc/cachefilesd.conf:

#secctx system_u:system_r:cachefiles_kernel_t:s0

and starting cachefilesd again.

Now in /etc/fstab add the fsc option to all NFS file systems where you want to enable caching for. For example for NFS4 your fstab line might look like this:

nfs.server:/home	/home	nfs4	_netdev,fsc,noatime,vers=4.2,nodev,nosuid	0	0

Now remount the file system. Just using the remount option is probably not enough: you have to completely umount and mount the NFS file system. Check with the mount command whether the fsc option is present. You can also run

# cat /proc/fs/nfsfs/volumes 

and check whether the FSC column is set to YES.
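
For example, for the /home mount from the fstab line above, this could look like:

# umount /home
# mount /home
# mount | grep /home

The last command should list fsc among the mount options.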

Try copying some large files from your NFS mount to the local disk. Then you can check the statistics by running

# cat /proc/fs/fscache/stats

You should see that not all values are 0 any more. Running

$ df -h /var/cache/fscache

will also show that your file system is being filled with cached data.

Debian Stretch on AMD EPYC (ZEN) with an NVIDIA GPU for HPC

Recently at work we bought a new Dell PowerEdge R7425 server for our HPC cluster. These are some of the specifications:

  • 2 AMD EPYC 7351 16-Core Processors
  • 128 GB RAM (16 DIMMs of 8 GB)
  • Tesla V100 GPU

[Photos: Dell PowerEdge R7425 front with cover, front without cover, and inside]

Our FAI configuration automatically installed Debian Stretch on it without any problems. All hardware was recognized and working. The installation of the basic operating system took less than 20 minutes. FAI also sets up Puppet on the machine. After booting the system, Puppet continues setting up the system: installing all needed software, setting up the Slurm daemon (part of the job scheduler), mounting the NFS4 shared directories, etc. All in all, the system was automatically installed and configured in less than an hour.

Linux kernel upgrade

Even though the Linux 4.9 kernel of Debian Stretch works with this hardware, there are still some reasons to upgrade to a newer kernel. Only in more recent kernel versions is the k10temp module capable of reading out the CPU temperature sensors. We also had problems with fscache (used for NFS4 caching) with the 4.9 kernel in the past, which are fixed in newer kernels. Furthermore, there have been many other performance optimizations which could be interesting for HPC.

You can find a more recent kernel in Debian’s Backports repository. At the time of writing it is a 4.18 based kernel. However, I decided to build my own 4.19 based kernel.

In order to build a Debian kernel package, you will need the usual kernel build dependencies installed (packages such as build-essential, fakeroot, bc, bison, flex, libssl-dev and libncurses-dev). Download the sources of the Linux kernel you want to build, and configure it (using make menuconfig or any method you prefer). Then build your kernel using this command:

$ make -j 32 deb-pkg

Replace 32 by the number of parallel jobs you want to run; the number of CPU cores you have is a good amount. You can also add the LOCALVERSION and KDEB_PKGVERSION variables to set a custom name and version number. See the Debian handbook for a more complete howto. When the build is finished successfully, you can install the linux-image and linux-headers package using dpkg.
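
For example, a build with a custom name and version, followed by the installation of the resulting packages, could look like this (the version numbers and file names are only illustrative, they depend on the kernel you build):

$ make -j 32 LOCALVERSION=-hpc KDEB_PKGVERSION=4.19.8-1 deb-pkg
# dpkg -i ../linux-image-4.19.8-hpc_4.19.8-1_amd64.deb ../linux-headers-4.19.8-hpc_4.19.8-1_amd64.deb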

We mentioned temperature monitoring support with the k10temp driver in newer kernels. If you want to check the temperatures of all NUMA nodes on your CPUs, use this command:

$ cat /sys/bus/pci/drivers/k10temp/*/hwmon/hwmon*/temp1_input

Divide the value by 1000 to get the temperature in degrees Celsius. Of course you can also use the sensors command of the lm-sensors package.
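
A small one-liner can do the conversion for you (the output format is just an example):

$ awk '{ printf "%s: %.1f°C\n", FILENAME, $1/1000 }' /sys/bus/pci/drivers/k10temp/*/hwmon/hwmon*/temp1_input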

Kernel settings

VM dirty values

On systems with lots of RAM, you will encounter problems because the default values of vm.dirty_ratio and vm.dirty_background_ratio are too high. This can cause stalls when all dirty data in the cache is being flushed to disk or to the NFS server.

You can read more information and a solution in SuSE’s knowledge base. On Debian, you can create a file /etc/sysctl.d/dirty.conf with this content:

vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800

Run

# sysctl -p /etc/sysctl.d/dirty.conf

to make the settings take effect immediately.

Kernel parameters

In /etc/default/grub, I added the options

transparent_hugepage=always cgroup_enable=memory

to the GRUB_CMDLINE_LINUX variable.
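
The resulting line in /etc/default/grub then looks something like this (if the variable already contains other options, keep those):

GRUB_CMDLINE_LINUX="transparent_hugepage=always cgroup_enable=memory"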

Transparent hugepages can improve performance in some cases, but they can also have a very negative impact on specific workloads. Applications like Redis, MongoDB and Oracle DB recommend not enabling transparent hugepages by default, so make sure it is worthwhile for your workload before adding this option.

Memory cgroups are used by Slurm to prevent jobs using more memory than what they reserved. Run

# update-grub

to make sure the changes will take effect at the next boot.

I/O scheduler

If you’re using a configuration based on a recent Debian kernel configuration, you will likely be using the Multi-Queue Block IO Queueing Mechanism with mq-deadline as the default scheduler. This is great for SSDs (especially NVMe based ones), but might not be ideal for rotational hard drives. You can use the BFQ scheduler as an alternative on such drives. Be sure to test this properly though, because with Linux 4.19 I experienced some stability problems which seemed to be related to BFQ. I’ll be reviewing this scheduler again for 4.20 or 4.21.

First, if BFQ is built as a module (which is the case in Debian’s kernel config), you will need to load the module. Create a file /etc/modules-load.d/bfq.conf with these contents:

bfq

Then to use this scheduler by default for all rotational drives, create the file /etc/udev/rules.d/60-io-scheduler.rules with these contents:

# set scheduler for non-rotating disks
ACTION=="add|change", KERNEL=="sd[a-z]|mmcblk[0-9]*|nvme[0-9]*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
# set scheduler for rotating disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"

Run

# update-initramfs -u

to rebuild the initramfs so it includes this udev rule and it will be loaded at the next boot sequence.
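
After the next boot (or after re-triggering udev), you can check which scheduler is active for a given disk; the scheduler between square brackets is the one in use:

$ cat /sys/block/sda/queue/scheduler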

The Arch wiki has more information on I/O schedulers.

CPU Microcode update

We want the latest microcode for the CPU to be loaded. This is needed to mitigate the Spectre vulnerabilities. Install the amd64-microcode package from stretch-backports.
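
Assuming you have stretch-backports in your APT sources, this boils down to:

# apt-get install -t stretch-backports amd64-microcode

The same -t stretch-backports syntax can be used for the backported hwloc packages mentioned below.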

Packages with AMD ZEN support

hwloc

hwloc is a utility which reads out the NUMA topology of your system. It is used by the Slurm workload manager to bind tasks to certain cores.

The version of hwloc in Stretch (1.11.5) does not have support for the AMD ZEN architecture. However, hwloc 1.11.12 is available in stretch-backports, and this version does have AMD ZEN support. So make sure you have the packages hwloc libhwloc5 libhwloc-dev libhwloc-plugins installed from stretch-backports.

BLAS libraries

There is no BLAS library in Debian Stretch which supports the AMD ZEN architecture. Unfortunately, at the moment of writing there is also no good BLAS implementation for ZEN available in stretch-backports. This will likely change in the near future though, as BLIS has now entered Debian Unstable and will likely be backported to stretch-backports too.

NVIDIA drivers and CUDA libraries

NVIDIA drivers

I installed the NVIDIA drivers and CUDA libraries from the tarballs downloaded from the NVIDIA website, because at the time of writing all packages available in the Debian repositories are outdated.

First make sure you have the linux-headers package installed which corresponds with the linux-image kernel package you are running. We will be using DKMS to rebuild the driver automatically whenever we install a new kernel, so also make sure you have the dkms package installed.

Download the NVIDIA driver for your GPU from the NVIDIA website. Remove the nouveau driver with the

# rmmod nouveau

command. Then create a file /etc/modprobe.d/blacklist-nouveau.conf with these contents:

blacklist nouveau

and rebuild the initramfs by running

# update-initramfs -u

This will ensure the nouveau module will not be loaded automatically.

Now install the driver, by using a command similar to this:

# sh NVIDIA-Linux-x86_64-410.79.run -s --dkms

This will do a silent installation, integrating the driver with DKMS so it will get built automatically every time you install a new linux-image together with its corresponding linux-headers package.

To make sure that the necessary device files in /dev exist after rebooting the system, I put this script in /usr/local/sbin/load-nvidia:

#!/bin/sh
/sbin/modprobe nvidia
if [ "$?" -eq 0 ]; then
  # Count the number of NVIDIA controllers found.
  NVDEVS=`lspci | grep -i NVIDIA`
  N3D=`echo "$NVDEVS" | grep "3D controller" | wc -l`
  NVGA=`echo "$NVDEVS" | grep "VGA compatible controller" | wc -l`
  N=`expr $N3D + $NVGA - 1`
  for i in `seq 0 $N`; do
    mknod -m 666 /dev/nvidia$i c 195 $i
  done
  mknod -m 666 /dev/nvidiactl c 195 255
else
  exit 1
fi
/sbin/modprobe nvidia-uvm
if [ "$?" -eq 0 ]; then
  # Find out the major device number used by the nvidia-uvm driver
  D=`grep nvidia-uvm /proc/devices | awk '{print $1}'`
  mknod -m 666 /dev/nvidia-uvm c $D 0
else
  exit 1
fi

In order to start this script at boot up, I created a systemd service. Create the file /etc/systemd/system/load-nvidia.service with this content:

[Unit]
Description=Load NVIDIA driver and create device nodes in /dev
Before=slurmd.service

[Service]
ExecStart=/usr/local/sbin/load-nvidia
Type=oneshot
RemainAfterExit=true

[Install]
WantedBy=multi-user.target

Now run these commands to enable the service:

# systemctl daemon-reload
# systemctl enable load-nvidia.service

You can verify whether everything is working by running the command

$ nvidia-smi

CUDA libraries

Download the CUDA libraries. For Debian, choose Linux x86_64 Ubuntu 18.04 runfile (local).

Then install the libraries with this command:

# sh cuda_10.0.130_410.48_linux --silent --override --toolkit

This will install the toolkit in silent mode. With the override option it is possible to install the toolkit on systems which don’t have an NVIDIA GPU, which might be useful for compilation purposes.

To make sure your users have the necessary binaries and libraries available in their path, create the file /etc/profile.d/cuda.sh with this content:

export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
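
After logging in again (so that the profile script is sourced), you can quickly verify that the toolkit is found:

$ nvcc --version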


Leap second causing ksoftirqd and java to use lots of cpu time

Today there was a leap second at 23:59:60 UTC. On one of my systems, this caused a high CPU load starting from around 02h00 GMT+2 (which corresponds to the time of the leap second). ksoftirqd and some java (glassfish) processes were using lots of CPU time. This system was running Debian Squeeze with kernel 2.6.32-45. The problem is very easy to fix: just run

# date -s "`date`"

and everything will be fine again. I found this solution on the Linux Kernel Mailing List: http://marc.info/?l=linux-kernel&m=134113389621450&w=2. Apparently a similar problem can happen with Firefox, Thunderbird, Chrome/Chromium, Java, Mysql, Virtualbox and probably other processes.

I was a bit surprised that this problem only happened on this particular machine, because I have several other servers running similar kernel versions.

MegaCLI: useful commands

Recently I installed a server with a Supermicro SMC2108 RAID adapter, which is actually an LSI MegaRAID SAS 9260. LSI created a command line utility called MegaCLI for Linux to manage this adapter. You can download it from their support pages. The downloaded archive contains an RPM file. I installed mc and rpm on Debian with apt-get, and then extracted the MegaCli64 binary (for x86_64) to /usr/local/sbin, and libsysfs.so.2.0.2 from the Lib_Utils RPM to /opt/lsi/3rdpartylibs/x86_64/ (that’s the location where MegaCli64 looks for this library).
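
If you prefer a scriptable approach over mc, you can also unpack the RPM with rpm2cpio and cpio (the exact file name depends on the version you downloaded):

$ rpm2cpio MegaCli-8.07.14-1.noarch.rpm | cpio -idmv
$ find . -name MegaCli64

Then copy the MegaCli64 binary to /usr/local/sbin and libsysfs.so.2.0.2 (from the Lib_Utils RPM, unpacked the same way) to /opt/lsi/3rdpartylibs/x86_64/.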

Here are some useful commands:

View information about the RAID adapter

For checking the firmware version, battery back-up unit presence, installed cache memory and the capabilities of the adapter:

# MegaCli64 -AdpAllInfo -aAll

View information about the battery back-up unit state

# MegaCli64 -AdpBbuCmd -aAll

View information about virtual disks

Useful for checking RAID level, stripe size, cache policy and RAID state:

# MegaCli64 -LDInfo -Lall -aALL

View information about physical drives

# MegaCli64 -PDList -aALL

Patrol read

Patrol read is a feature which tries to discover disk errors before it is too late and data is lost. By default it runs automatically (with a delay of 168 hours between patrol reads) and will take up to 30% of IO resources.

To see information about the patrol read state and the delay between patrol read runs:
# MegaCli64 -AdpPR -Info -aALL

To find out the current patrol read rate, execute
# MegaCli64 -AdpGetProp PatrolReadRate -aALL

To reduce patrol read resource usage to 2% in order to minimize the performance impact:
# MegaCli64 -AdpSetProp PatrolReadRate 2 -aALL

To disable automatic patrol read:
# MegaCli64 -AdpPR -Dsbl -aALL

To start a manual patrol read scan:
# MegaCli64 -AdpPR -Start -aALL

To stop a patrol read scan:
# MegaCli64 -AdpPR -Stop -aALL

You could use the above commands to run patrol read in off-peak times.
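
For example, after disabling automatic patrol read, a cron job in /etc/cron.d/patrolread could start a scan every Sunday at 03:00 (the schedule is just an example, adapt it to your own off-peak hours):

0 3 * * 0 root /usr/local/sbin/MegaCli64 -AdpPR -Start -aALL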

Migrate from one RAID level to another

In this example, I migrate the virtual disk 0 from RAID level 6 to RAID 5, so that the disk space of one additional disk becomes available. The second command is used to make Linux detect the new size of the RAID disk.

# /usr/local/sbin/MegaCli64 -LDRecon -Start -r5 -L0 -a0
# echo 1 > /sys/block/sda/device/rescan

Create a new RAID 5 virtual disk from a set of new hard drives

First we need to know the enclosure and slot numbers of the hard drives we want to use for the new RAID disk. You can find them with the first command below. Then I add a virtual disk using RAID level 5, followed by the list of drives I want to use, specified in enclosure:slot syntax.

# MegaCli64 -PDList -aALL | egrep 'Adapter|Enclosure|Slot|Inquiry'
# MegaCli64 -CfgLdAdd -r5'[252:5,252:6,252:7]' -a0

Extending an existing RAID array with a new disk

First check the enclosure device ID and the slot number of the newly added disk with the command above. Then we reconstruct the logical drive, adding the new drive. For a RAID 5 array this command is used:

# MegaCli64 -LDRecon -Start -r5 -Add -PhysDrv[32:3] -L0 -a0

View reconstruction progress

When reconstructing a RAID array, you can check its progress with this command:
# MegaCli64 -LDRecon ShowProg L0 -a0

(replace L0 by L1 for the second virtual disk, and so on)

Configure write-cache to be disabled when battery is broken

# MegaCli64 -LDSetProp NoCachedBadBBU -LALL -aALL

Change physical disk cache policy

If your system is not connected to a UPS, you should disable the physical disk cache in order to prevent data loss. First check the current setting:

# MegaCli64 -LDGetProp -DskCache -LAll -aALL

To disable the physical disk cache:

# MegaCli64 -LDSetProp -DisDskCache -LAll -aALL

To enable it (only do this if you have a UPS and redundant power supplies):

# MegaCli64 -LDSetProp -EnDskCache -LAll -aALL

More information

http://ftzdomino.blogspot.com/2009/03/some-useful-megacli-commands.html
https://twiki.cern.ch/twiki/bin/view/FIOgroup/DiskRefPerc
http://hwraid.le-vert.net/wiki/LSIMegaRAIDSAS
http://kb.lsi.com/KnowledgebaseArticle16516.aspx

Linux performance improvements

Two years ago I wrote an article presenting some Linux performance improvements. These performance improvements are still valid, but it is time to talk about some new improvements that are available. As I am using Debian now, I will focus on that distribution, but you should be able to easily implement these things on other distributions too. Some of these improvements are best suited for desktop systems, others for server systems, and some are useful for both.