Linux performance improvements

Two years ago I wrote an article presenting some Linux performance improvements. Those improvements are still valid, but it is time to talk about some new ones that have become available. As I am using Debian now, I will focus on that distribution, but you should be able to implement these things easily on other distributions too. Some of these improvements are best suited for desktop systems, others for servers and some are useful for both.

Improving Mediawiki performance

Now that I am on the subject of performance anyway, here are some improvements I configured for a Mediawiki installation:

  • Make sure you run the latest Mediawiki version. Mediawiki 1.16 introduced a new localisation caching system which is supposed to improve performance, so you definitely want at least that version.
  • Create a directory where Mediawiki can store the localisation cache (make sure it is writable by your web server). Preferably store it on a tmpfs (at least if you are sure the tmpfs will be big enough to hold the cache), and configure it in LocalSettings.php:
    $wgCacheDirectory = "/tmp/mediawiki";
    If /tmp is on a tmpfs, you might want to add the creation of this directory with the right permissions to /etc/rc.local, so that it still exists after a reboot (a minimal sketch follows this list).
  • Enable file caching in Mediawiki’s LocalSettings.php:
    $wgFileCacheDirectory = "{$wgCacheDirectory}/html";
    $wgUseFileCache = true;
    $wgShowIPinHeader = false;
    $wgUseGzip = true;
  • Make sure you have installed some PHP accelerator for caching. I have APC installed and configured it in Mediawiki’s LocalSettings.php:
    $wgMainCacheType = CACHE_ACCEL;
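
To give an idea of what that /etc/rc.local approach looks like, here is a minimal sketch; it assumes /tmp is a tmpfs and that your web server runs as the Debian www-data user, so adapt the names and permissions to your own setup:

# lines to add to /etc/rc.local, before the final "exit 0":
# recreate the Mediawiki cache directory on every boot and hand it to the web server
mkdir -p /tmp/mediawiki
chown www-data:www-data /tmp/mediawiki
chmod 755 /tmp/mediawiki

On Debian, APC itself is available as the php-apc package.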

Here is a benchmark before implementing the above configuration (with CACHE_NONE, but APC still installed):

$ ab -kt 30 http://site/wiki/index.php/Page
This is ApacheBench, Version 2.3 < $Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking site (be patient)
Finished 255 requests

Server Software: Apache/2.2.16
Server Hostname: site
Server Port: 80

Document Path: /wiki/index.php/Page
Document Length: 12750 bytes

Concurrency Level: 1
Time taken for tests: 30.084 seconds
Complete requests: 255
Failed requests: 0
Write errors: 0
Keep-Alive requests: 0
Total transferred: 3344070 bytes
HTML transferred: 3251250 bytes
Requests per second: 8.48 [#/sec] (mean)
Time per request: 117.978 [ms] (mean)
Time per request: 117.978 [ms] (mean, across all concurrent requests)
Transfer rate: 108.55 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        3    6   2.8      7      21
Processing:    88  112  11.1    112     163
Waiting:       66   90   9.1     89     125
Total:         95  118  11.9    118     170

Percentage of the requests served within a certain time (ms)
50% 118
66% 122
75% 125
80% 127
90% 132
95% 138
98% 145
99% 156
100% 170 (longest request)

And here a benchmark after implementing the changes:

$ ab -kt 30 http://site/wiki/index.php/Page
This is ApacheBench, Version 2.3 < $Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking site (be patient)
Finished 649 requests

Server Software: Apache/2.2.16
Server Hostname: site
Server Port: 80

Document Path: /wiki/index.php/Page
Document Length: 12792 bytes

Concurrency Level: 1
Time taken for tests: 30.015 seconds
Complete requests: 649
Failed requests: 0
Write errors: 0
Keep-Alive requests: 0
Total transferred: 8538244 bytes
HTML transferred: 8302008 bytes
Requests per second: 21.62 [#/sec] (mean)
Time per request: 46.248 [ms] (mean)
Time per request: 46.248 [ms] (mean, across all concurrent requests)
Transfer rate: 277.80 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        3    9   3.7      8      29
Processing:    23   37   6.0     37      62
Waiting:       13   23   4.9     24      41
Total:         28   46   7.8     45      82

Percentage of the requests served within a certain time (ms)
50% 45
66% 47
75% 49
80% 50
90% 56
95% 62
98% 68
99% 73
100% 82 (longest request)

So Mediawiki can now deal with more than 2.5 times as many requests as before.

Some people use Apache’s mod_disk_cache to cache Mediawiki pages, but I prefer Mediawiki’s own caching system because it is more standard and does not require patching Mediawiki, even if it might not give as much benefit as a real proxy or mod_disk_cache.

Improving performance by using tmpfs

Today I analyzed disk reads and writes on a server with iotop and strace and found some interesting possible optimizations.

With iotop you can check which processes are reading from and writing to the disks. I always press the o, p and a keys in iotop so that it only shows processes that are actually doing I/O (o), shows processes instead of individual threads (p) and shows accumulated I/O instead of the current bandwidth (a). With the left and right arrow keys I select which column to sort the list on.
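
If you want iotop to start with those settings already enabled, the same toggles are available as command-line options: -o only shows processes that are actually doing I/O, -P shows processes instead of individual threads and -a shows accumulated totals instead of the current bandwidth (these are standard iotop options, nothing distribution-specific):

# iotop -o -P -a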

Once you have identified the processes which are doing a lot of I/O, you can check what they are reading or writing with strace, for example:
# strace -f -p $PID -e trace=open,read,write

(you can leave out read and/or write if this gives too much noise)

This way I identified some locations where processes do lots of read and write operations on temporary files.

For Nagios I placed /var/lib/nagios3/spool and /var/cache/nagios3 on a tmpfs, for Amavis /var/lib/amavis/tmp and for PostgreSQL /var/lib/postgresql/8.4/main/pg_stat_tmp.
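
For reference, this is roughly what such /etc/fstab entries could look like. The sizes below are arbitrary examples that you should adapt to your own workload; also stop the services before mounting a tmpfs over their directories, and keep in mind that a freshly mounted tmpfs is owned by root, so you may have to chown the mount point to the right user (for example from an init script or /etc/rc.local) before the service starts again:

tmpfs  /var/lib/nagios3/spool                    tmpfs  defaults,size=32m   0  0
tmpfs  /var/cache/nagios3                        tmpfs  defaults,size=32m   0  0
tmpfs  /var/lib/amavis/tmp                       tmpfs  defaults,size=256m  0  0
tmpfs  /var/lib/postgresql/8.4/main/pg_stat_tmp  tmpfs  defaults,size=32m   0  0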

Other candidates you might want to consider: /tmp, /var/tmp and /var/lib/php5. There are probably many others, depending on which services you use.

Speeding up my Linux system

My Mandriva 2009.1 system at home had become a bit slow lately, so I decided to make some attempts to speed it up again. It is not the most powerful system anymore (Asus A8N-SLI nForce4 motherboard, Athlon 64 3500+, 3 GB RAM, 250 GB SATA-1 disk, NVidia 6600 GT graphics card), but it sometimes felt very slow because of lots of disk activity, especially during start-up. I succeeded in improving the performance noticeably: the disk activity now stops much earlier after logging in and after starting Evolution.

I made several changes at once and did not always measure the impact of each individual change. So your mileage may vary.

  • I updated from Mandriva 2009.1 to Mandriva Cooker. Actually I don’t know whether this has had any direct effect on performance, but it is a prerequisite or at least a recommendation for some of the later changes (GNote and ext4).
  • I removed several of the GNOME panel applets, which probably helps in reducing GNOME start-up time: the system monitor applet, one of the weather applets and Deskbar.
  • I removed Tomboy (which was also active as an applet in my GNOME panel) and installed GNote. GNote looks exactly the same as Tomboy and transparently replaces it (it will immediately start showing your Tomboy notes), but it’s written in C++. The fact that the Mono .NET runtime environment no longer needs to be started during GNOME start-up might have improved the GNOME log-in performance.
  • I cleaned up my mailboxes a bit by removing old mails I don’t need anymore. After that, I manually vacuumed the sqlite database used by Evolution. To do so, close Evolution, and run the following commands in the shell (you will need to have the package sqlite3-tools installed):
    $ evolution --force-shutdown
    $ for i in $(find ~/.evolution/mail -name folders.db); do echo "VACUUM;" | sqlite3 $i; done

    This reduced the size of the folders.db for my main IMAP account from more than 300 MB to about 150 MB! After this operation, much less disk activity happened while starting up Evolution and the system remained much more responsive. It seems I’m not the only one who was suffering from this problem. This is a serious regression since Evolution switched from Berkeley DB to sqlite. Apart from this problem, Evolution’s IMAP implementation is currently also very slow if you have big folders, and no work seems to be done on that… I have the feeling Mutt’s motto is correct: all mail clients suck, this one just sucks less. Still, I prefer a GUI mail client.
  • I removed Beagle from my system. All in all I did not use it very often, and it looks like Tracker might become much more interesting in the future.
  • I switched from Firefox 3.0 to Firefox 3.5, which is also a bit faster. Packages are available in cooker’s main/testing repository, or you can just download a build from mozilla.org. A long time ago I experienced slowdowns in Firefox, which I fixed at that time by disabling reporting of attack sites and web forgeries in Firefox’ preferences – Security. It’s better to not disable this if Firefox is working nicely for you.
  • I switched from ext3 to ext4 for my / and /usr partitions. You can switch from ext3 to ext4 simply by replacing ext3 with ext4 in /etc/fstab. However, then you won’t take advantage of all the new features. To do so, switch to runlevel 1 (init 1 in the console) and unmount the partition you want to migrate (if you want to migrate /, you can remount it read-only by running mount -o remount,ro /). Then run these commands on the device:
    # tune2fs -O extents,uninit_bg,dir_index /dev/device
    # fsck -pDf /dev/device

    Then reboot your system.
    Don’t migrate your /boot partition, or your / partition if you don’t have a separate /boot partition, because this might lead to an unbootable system: I’m not sure whether GRUB in Mandriva has complete ext4 support.
    I would also recommend running an up-to-date Linux kernel, because ext4 has undergone many improvements lately. Cooker’s current kernel 2.6.30.1 is working nicely for me.
    For more ext4 information, I recommend reading the Linux kernel newbies ext4 page.
  • My /home partition is using XFS. If you are using XFS, you can run xfs_fsr to defragment files (see the short example right after this list).
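
As a small illustration of that last point, this is how you could first check how fragmented a mounted XFS filesystem is and then let xfs_fsr reorganise it for a limited amount of time; /dev/sdaX is just a placeholder for your actual device, and -t 600 makes xfs_fsr stop after at most ten minutes:

# xfs_db -r -c frag /dev/sdaX
# xfs_fsr -t 600 /home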

After all these changes, my system feels much snappier now than one month ago.

Fix bad performance with NVidia 177.80 drivers

Since I upgraded to NVidia’s beta driver series which was supposed to improve performance for KDE 4 (including the now stable version 177.80), my GNOME desktop on a system with a GeForce 6600 GT graphics card felt a lot slower. It was most noticeable when browsing the web with Firefox: when quickly scrolling a web page with my mouse’s scroll wheel, X started eating 100% of the CPU time and the image on the screen lagged behind a lot. Just rendering a page also seemed to be much slower. Disabling smooth scrolling in Firefox did not help at all.

Searching on the web, I found out that I’m not the only one with this problem and that setting InitialPixmapPlacement to 0 is a common workaround. However, that setting made the Compiz/Emerald window manager crash for me. Setting InitialPixmapPlacement to 1, on the other hand, also seemed to fix the problem, without Compiz/Emerald crashing.

So if you also suffer from bad performance in GNOME with the proprietary NVidia drivers, create a script called fix-broken-nvidia.sh in /usr/local/bin with the following contents:

#!/bin/sh
# work around the 2D slowness of the 177.x NVidia drivers by changing InitialPixmapPlacement
/usr/bin/nvidia-settings -a InitialPixmapPlacement=1

Make the script executable (chmod +x /usr/local/bin/fix-broken-nvidia.sh). Then go to GNOME’s System – Preferences menu and start up Sessions. In the Startup Programs tab, click Add and choose /usr/local/bin/fix-broken-nvidia.sh as the command. Save the settings and restart X. Firefox now works a lot faster for me: web pages appear instantaneously and I can scroll web pages without my CPU getting overloaded.

Thanks to NVidia for bringing me such great performance with their new drivers. Out of gratefulness, I’ll make sure my next graphics card is an Intel or ATI one.