High swap usage in spite of low RAM usage

After installing Monit, I became aware of weird behavior with my memory (at least it seems weird to me). The physical memory (1GB) typically runs at about 30-35% used, but the swap is around 70-90%! Why isn't it using more physical memory first?

My Linode is not heavily used at all - I have a mostly-ignored website and a couple of PHP/MySQL web apps whose users are few in number and all asleep right now. (Of course the hacker computers pound it 24/7 looking in vain for a way in, but that's true for everyone.) But the output of free -m is:

           total      used      free    shared  buff/cache   available
Mem:         979       334       319        18         325         586
Swap:        255       189        66

The biggest user of memory by far is MySQL (MariaDB) - here are the first few lines of ps -eo pmem,pcpu,rss,vsize,args --sort -rss:

%MEM %CPU   RSS    VSZ COMMAND
18.2  0.0 183420 1233472 /usr/sbin/mysqld
 4.3  5.4 43612 401680 php-fpm: pool www
 3.1  0.0 31496 232044 linode-longview
 0.8  0.0  8656 145716 sshd: root@pts/0
 0.7  0.0  7252 258384 /usr/sbin/rsyslogd -n

I wrote to Linode support, and the first thing they said was that it was odd that the graphs on my Linode Manager dashboard don't show swap (although they didn't say what I should do about that). They then asked me for the output of a few commands and suggested using MySQLTuner to improve my configuration. But I don't see anything in MySQLTuner's output related to memory vs. swap, or about overuse of memory in general. And tuning one process doesn't really answer the basic question of what is ignoring main memory and using swap instead - does anyone here have any ideas?

11 Replies

It might be worthwhile to take a look at your Linode's "swappiness", or how likely it is to use swap. To check this, you can run the following command:

cat /proc/sys/vm/swappiness

The default value for swappiness is 60. If you're seeing heavy swap usage while you still have free RAM, you could try running the following command as root:

echo 40 > /proc/sys/vm/swappiness

The change takes effect immediately, so keep an eye on how it impacts your swap usage. Note that a value written to /proc doesn't survive a reboot, so to keep the setting you'll also want it in a sysctl config file. You can set a different value if you like; 40 is just a possibility that should decrease swap usage relative to RAM usage. The scale runs from 0 (avoid swapping as much as possible) to 100 (swap aggressively). Here's a decent guide that explains what it is.
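For example, a minimal way to make the setting persist across reboots might be the following (the file name under /etc/sysctl.d/ is just an example):

# Persist the value and apply it immediately without a reboot
echo 'vm.swappiness = 40' | sudo tee /etc/sysctl.d/99-swappiness.conf
sudo sysctl -p /etc/sysctl.d/99-swappiness.conf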

Hmm, swappiness is already 30 (I didn't set it, as I'd never heard of it before; if that's not the Linode default, perhaps it was set by the pre-built Puppet manifest I used). With a setting of 30, should it be using that much swap? It still seems like something is wrong.
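If I wanted to track down where that non-default value comes from, grepping the usual sysctl config locations should show it (the paths below are just the usual suspects):

# Look for any persistent swappiness setting
grep -r swappiness /etc/sysctl.conf /etc/sysctl.d/ /usr/lib/sysctl.d/ 2>/dev/null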

Right now these are the stats (from Monit):

Memory usage  312.5 MB [31.9%]
Swap usage    207.5 MB [81.1%]

That's just about average for the way I find it every time I check it.

A side question: How do I get this forum to send me email when there is a reply to my thread? I didn't know there was a reply to my question until I checked the page manually. I googled and found what appears to be a relevant FAQ page (https://forum.linode.com/app.php/help/faq#f8r1), but I don't see either "Notify me when a reply is posted" or "Board preferences" that are mentioned there.

Hm, it is odd that you're using that much swap with a relatively low swappiness. I've done a little digging, and I believe I've found a script you can run to determine what is generating the swap for you. You can read about the script at this blog, but the script itself follows below:

#!/bin/bash
# Get current swap usage for all running processes
# Erik Ljungstrom 27/05/2011
SUM=0
OVERALL=0
for DIR in `find /proc/ -maxdepth 1 -type d | egrep "^/proc/[0-9]"` ; do
        PID=`echo $DIR | cut -d / -f 3`
        PROGNAME=`ps -p $PID -o comm --no-headers`
        # Match only the "Swap:" lines; newer kernels also have "SwapPss:",
        # which a bare "grep Swap" would double-count.
        for SWAP in `grep '^Swap:' $DIR/smaps 2>/dev/null | awk '{ print $2 }'`
        do
                let SUM=$SUM+$SWAP
        done
        echo "PID=$PID - Swap used: $SUM - ($PROGNAME)"
        let OVERALL=$OVERALL+$SUM
        SUM=0
done
echo "Overall swap used: $OVERALL"

You will need to run the above as the root user to get accurate information. If it is actually MySQL that is causing most of the swap usage, I would definitely recommend running mysqltuner. While it doesn't specifically mention swap, what I suspect is happening is that MySQL's caches are being pushed out to swap. You can also take a look at this guide for best practices on configuring MySQL to avoid hitting swap.
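If you save it as, say, getswap.sh (the name is arbitrary), you can filter and sort its per-process lines to see the biggest swap users:

# Run as root so every /proc/<pid>/smaps is readable; biggest users print last
sudo bash getswap.sh | grep '^PID=' | sort -t: -k2 -n | tail -n 10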

With regards to receiving email from the community page, we don't have that functionality at the moment. That being said, we do hear you and I have passed this request along.

While you were doing your research, I was doing my own, and I found a different way of seeing what's using swap (VmSwap value in /proc/[PID]/status). MySQL is definitely the heavy user. Here are the top five, to which I added the process name in parentheses:

$ grep VmSwap /proc/*/status | sort -n -r --key=2.1 | head -5
/proc/3593/status:VmSwap:         124788 kB  (mysqld)
/proc/3617/status:VmSwap:          18740 kB  (miniserv.pl)
/proc/3442/status:VmSwap:          10356 kB  (tuned)
/proc/9410/status:VmSwap:           4536 kB  (php-fpm)
/proc/3103/status:VmSwap:           3200 kB  (polkitd)
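(If anyone wants the process names without adding them by hand, an awk pass over the status files should pull the name alongside the value - just a sketch:)

# Print "swap-kB process-name" for each process, biggest first
awk '/^Name:/{name=$2} /^VmSwap:/{print $2, name}' /proc/[0-9]*/status 2>/dev/null | sort -rn | head -5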

Looking more closely at just mysqld:

$ grep Vm /proc/3593/status
VmPeak:  1236188 kB
VmSize:  1233472 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:    219360 kB
VmRSS:    175244 kB
VmData:   713192 kB
VmStk:       132 kB
VmExe:     17552 kB
VmLib:     10236 kB
VmPTE:       928 kB
VmPMD:        16 kB
VmSwap:   124568 kB

I'm confused by that. mysqld's VmSize alone is far greater than the total amount of memory (RAM+Swap) used by all processes (according to both free -m and Monit). And of course mysqld is not the only process using memory - here are the top five for VmSize:

$ grep VmSize /proc/*/status | sort -n -r --key=2.1 | head -5
/proc/3593/status:VmSize:        1233472 kB  (mysqld)
/proc/3442/status:VmSize:         562484 kB  (tuned)
/proc/3103/status:VmSize:         534268 kB  (polkitd)
/proc/3113/status:VmSize:         472708 kB  (NetworkManager)
/proc/9410/status:VmSize:         373100 kB  (php-fpm)

When my Linode only has 1GB of RAM and 256MB of swap (and RAM always claims to be only 30-35% used), how can this be? I clearly don't understand what these numbers really mean.
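For scale, here is a crude way to compare the VmRSS and VmSize totals across every process (it double-counts shared pages, so treat it as a rough comparison only):

# Rough totals in kB: resident (VmRSS) vs. virtual address space (VmSize)
awk '/^VmRSS:/{rss+=$2} /^VmSize:/{vsz+=$2} END{printf "VmRSS total: %d kB, VmSize total: %d kB\n", rss, vsz}' /proc/[0-9]*/status 2>/dev/null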

As for mysqltuner, I still don't understand what is applicable. Here is the complete set of its recommendations:

General recommendations:
    Control warning line(s) into /var/log/mariadb/mariadb.log file
    Control error line(s) into /var/log/mariadb/mariadb.log file
    Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
    When making adjustments, make tmp_table_size/max_heap_table_size equal
    Reduce your SELECT DISTINCT queries which have no LIMIT clause
    Increase table_open_cache gradually to avoid file descriptor limits
    Read this before increasing table_open_cache over 64: http://bit.ly/1mi7c4C
    This is MyISAM only table_cache scalability problem, InnoDB not affected.
    See more details here: https://bugs.mysql.com/bug.php?id=49177
    This bug already fixed in MySQL 5.7.9 and newer MySQL versions.
    Beware that open_files_limit (16364) variable
    should be greater than table_open_cache (2000)
    Performance should be activated for better diagnostics
    Consider installing Sys schema from https://github.com/mysql/mysql-sys
    Read this before changing innodb_log_file_size and/or innodb_log_files_in_group: http://bit.ly/2wgkDvS

Variables to adjust:
    query_cache_size (=0)
    query_cache_type (=0)
    tmp_table_size (> 16M)
    max_heap_table_size (> 16M)
    table_open_cache (> 2000)
    performance_schema = ON enable PFS
    innodb_log_file_size should be (=16M) if possible, so InnoDB total log files size equals to 25% of buffer pool size.

The only one of those variables that is currently set in my .cnf files is query_cache_size (16M), probably set by the Puppet manifest I used. But it is moot, because the query cache is disabled by default (I'm running MariaDB 10.2.9). The defaults for everything else already sit at the floor values in mysqltuner's recommendations, so I don't see any recommended setting changes that would reduce my memory usage.
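To double-check that the query cache really is off, the live values can be queried directly (assuming you can log in as the MySQL root user):

# query_cache_type should report OFF if the cache is disabled
mysql -u root -p -e "SHOW VARIABLES LIKE 'query_cache%';"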

Ah, Longview is running again (yesterday it was read-only and not giving any data). See https://imgur.com/a/S6BTZ for a screenshot. It thinks MySQL is only using 189MB of memory (total, presumably). Why can't the various memory usage reporting tools agree? I know I'm probably comparing apples to oranges, but I don't know enough to discern the difference.

I decided to try changing the only setting where mysqltuner's recommendation was lower than the default: innodb_log_file_size = 16M (the default was 48M). I thought that was just the size of an actual file (and all the discussions I found on the web say the only downside of it being too big is slower restart times after a crash), but it did weird things to the memory usage:

$ grep Vm /proc/<new mysqld PID>/status
VmPeak:  1236188 kB
VmSize:  1204500 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:     99740 kB
VmRSS:     99712 kB
VmData:   590060 kB
VmStk:       132 kB
VmExe:     17552 kB
VmLib:     10236 kB
VmPTE:       516 kB
VmPMD:        20 kB
VmSwap:        0 kB

HWM and Data went down, but what seems really strange is that Swap is zero! My web stuff still runs, so I didn't run afoul of the problem described at https://www.percona.com/blog/2011/07/09/how-to-change-innodb_log_file_size-safely/ even though I made the change exactly the way they say not to (I didn't see that page until after the fact). That page is quite old, so it's possible that InnoDB can recover without help these days. But why does reducing one setting to a third of its default cause MySQL to stop using swap completely? And is that bad, or okay?
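(For the record, the change itself was just one line in the [mysqld] section of a .cnf file - this is only what the edit looks like, not a recommendation:)

[mysqld]
innodb_log_file_size = 16M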

Other processes are still using swap, though:

$ free -m
       total     used     free   shared  buff/cache   available
Mem:     979      210      296       17         472         711
Swap:    255       90      165

I still don't understand why any swap at all is used while RAM is mostly unused, but that might have to remain a mystery.
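One thing that might help separate "actively swapping right now" from "old pages parked in swap" is watching the si/so (swap-in/swap-out) columns of vmstat for a bit:

# Three samples, five seconds apart; sustained non-zero si/so means swapping is happening now
vmstat 5 3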

Sorry for so many posts in a row, but I keep learning things. Now I'm going to bed and have a busy day tomorrow, so I'll get quieter. ;)

If you really want to dig into the specifics of how your memory is being managed, I'd recommend using slabtop. A few links to check out:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Reference_Guide/s2-proc-slabinfo.html

https://linux.die.net/man/1/slabtop
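For a quick look, a one-shot snapshot sorted by cache size (flags per the man page above) would be something like:

# Print a single snapshot of kernel slab caches, largest caches first
sudo slabtop --once --sort=c | head -n 15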

I'm not really interested in digging deep - I just want it to work reasonably well. I'm a programmer, not a server admin.

This came to my attention because of Monit. After my mail server was down for a while without my realizing it and I lost some mail, I decided to install Monit to make sure my mail server, web server, etc. are restarted if they die. I built on a baseline set of parameters suggested by the documentation, which includes an alert if RAM usage exceeds 80% or swap usage exceeds 20% - those seem like sensible settings to me. But as soon as I got it running, the swap alert started firing immediately, so I investigated and discovered the server's odd preference for swap over primary (physical) memory.
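The relevant lines in my monitrc look roughly like this (a sketch based on the documented example, not my exact config):

# Baseline system check with the thresholds mentioned above
check system $HOST
    if memory usage > 80% then alert
    if swap usage > 20% then alert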

Your first link is broken, but I did read the slabtop man page and ran it to see what it shows - and I have no idea what any of it means. I don't even know what a slab is, and I have never heard of any of the caches in the list (e.g. the top three are dentry, kernfs_node_cache, and buffer_head). I'm way out of my league, and I doubt it will answer the seemingly simple question of why my server doesn't like to use RAM.

Since the last time I looked, the swap usage has been rising again:

        total      used      free    shared  buff/cache   available
Mem:      979       265       335        21         378         652
Swap:     255       160        95

Any more thoughts? I forgot about this issue for a while, but today I started getting alerts from Monit again.

$ free -m
          total     used     free   shared  buff/cache   available
Mem:        979      254      320       28         405         655
Swap:       255      205       50

The stats I see now are similar to what I reported in this post above, so I won't repeat them. And my questions about how to make sense of the numbers are still unanswered.

Does anyone else's server resources look like this? Is it just my machine that has such a taste for swap over regular memory, or does it happen to others? I did what was recommended by MySQLTuner, but it didn't seem to make any long-term difference.
