io_tokens?

I've gotten to the point where I cannot SSH into my server; I need to do a manual restart through the member console. Some people have mentioned it could be my io_tokens. Right after my reboot, I run cat /proc/io_status and get:

````
io_count=5372 io_rate=0 io_tokens=72548 token_refill=512 token_max=400000
[root@li8-42 ~]# cat /proc/io_status
io_count=5464 io_rate=64 io_tokens=75528 token_refill=512 token_max=400000
[root@li8-42 ~]# cat /proc/io_status
io_count=5535 io_rate=0 io_tokens=78529 token_refill=512 token_max=400000
[root@li8-42 ~]# cat /proc/io_status
io_count=5574 io_rate=39 io_tokens=80538 token_refill=512 token_max=400000
[root@li8-42 ~]# cat /proc/io_status
io_count=5613 io_rate=0 io_tokens=92787 token_refill=512 token_max=400000
````
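A quick way to read those numbers at a glance is to parse the key=value line shown above (the field names come from that output; the percentage math is my own addition):

````
# Print io_tokens as a fraction of token_max, parsing the
# "key=value key=value ..." format of /proc/io_status shown above
awk -F'[= ]' '{ for (i = 1; i < NF; i += 2) v[$i] = $(i+1)
  printf "%d/%d tokens (%.1f%%)\n", v["io_tokens"], v["token_max"],
         100 * v["io_tokens"] / v["token_max"] }' /proc/io_status
````

Run it in a `watch` or a `while sleep 1` loop to see the refill happening.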

Why would my tokens be so low after a reboot? And what can I do to help my Linode? Pretty much all I have is one message board with an average of 15-20 people online at a time, a few Coppermine photo galleries, and 2 email accounts using SquirrelMail.

13 Replies

Post the output of vmstat -s

Or the following (as root) to see what process could be responsible for chewing through your io_tokens:

ps aux

@pclissold:

Post the output of vmstat -s

````
[root@li8-42 ~]# vmstat -s
75388 total memory
24420 used memory
7512 active memory
8492 inactive memory
50968 free memory
3352 buffer memory
12604 swap cache
365560 total swap
0 used swap
365560 free swap
313 non-nice user cpu ticks
0 nice user cpu ticks
1052 system cpu ticks
12593 idle cpu ticks
0 IO-wait cpu ticks
0 IRQ cpu ticks
0 softirq cpu ticks
14209 pages paged in
7840 pages paged out
1 pages swapped in
0 pages swapped out
19674 interrupts
6919 CPU context switches
1141667865 boot time
606 forks

````

@untitled9:

Or the following (as root) to see what process could be responsible for chewing through your io_tokens:

ps aux

````
root         1  0.0  0.7  1608  600 ?        S    12:57   0:00 init [3]
root         2  0.0  0.0     0    0 ?        S    12:57   0:00 [keventd]
root         3  0.0  0.0     0    0 ?        SN   12:57   0:00 [ksoftirqd_CPU0]
root         4  0.0  0.0     0    0 ?        S    12:57   0:00 [kswapd]
root         5  0.0  0.0     0    0 ?        S    12:57   0:00 [bdflush]
root         6  0.0  0.0     0    0 ?        S    12:57   0:00 [kupdated]
root         7  0.0  0.0     0    0 ?        S    12:57   0:00 [jfsIO]
root         8  0.0  0.0     0    0 ?        S    12:57   0:00 [jfsCommit]
root         9  0.0  0.0     0    0 ?        S    12:57   0:00 [jfsSync]
root        10  0.0  0.0     0    0 ?        S    12:57   0:00 [xfsbufd]
root        11  0.0  0.0     0    0 ?        S    12:57   0:00 [xfslogd/0]
root        12  0.0  0.0     0    0 ?        S    12:57   0:00 [xfsdatad/0]
root        13  0.0  0.0     0    0 ?        S<   12:57   0:00 [mdrecoveryd]
root        14  0.0  0.0     0    0 ?        S    12:57   0:00 [kjournald]
root       399  0.0  1.3  2028  992 ?        Ss   12:58   0:00 /sbin/dhclient -1
root       445  0.0  0.9  1612  696 ?        Ss   12:58   0:00 syslogd -m 0
root       449  0.0  0.6  1452  460 ?        Ss   12:58   0:00 klogd -x
root       479  0.1  2.1  3920 1652 ?        Ss   12:58   0:00 /usr/sbin/sshd
root       489  0.0  1.1  2064  832 ?        Ss   12:58   0:00 xinetd -stayalive
root       498  0.0  1.2  3624  968 ?        S    12:58   0:00 /usr/sbin/vsftpd
root       516  0.0  4.1  7620 3160 ?        Ss   12:58   0:00 sendmail: accepti
smmsp      524  0.0  3.4  6736 2636 ?        Ss   12:58   0:00 sendmail: Queue r
root       534  0.0  1.0  3672  808 ?        Ss   12:58   0:00 crond
xfs        557  0.0  1.6  2712 1216 ?        Ss   12:58   0:00 xfs -droppriv -da
daemon     566  0.0  0.8  1552  628 ?        Ss   12:58   0:00 /usr/sbin/atd
root       573  0.0  0.5  1436  428 tty0     Ss+  12:58   0:00 /sbin/mingetty tt
root       574  0.1  2.8  6776 2156 ?        Ss   12:59   0:00 sshd: root@pts/0
root       576  0.0  1.9  4348 1452 pts/0    Ss   12:59   0:00 -bash
root       608  0.0  1.0  2292  768 pts/0    R+   13:01   0:00 ps aux
````

From a quick glance, there doesn't seem to be anything there that would cause it to be low.

My guess is that UML might retain the io_tokens value even across a reboot (although that's only a guess, please don't quote me on that), in which case you should run these commands again when the io_tokens get low again.

The other option might be to take a look at the boot-up sequence. There could be something there.

Thanks for the quick response!!!!

I have noticed a weird trend! When I go to use SquirrelMail, my server slows down a LOT, and it even gets to the point where I can't SSH into it. Could my multiple uses of SquirrelMail be causing it?

SquirrelMail can quickly eat all your tokens. This can happen if:

1) you have a large mailbox (>10M) and you open it.

2) you have "Enable Unread Message Notification" set to all mailboxes and you have a lot of mailboxes.

I.e., io_tokens are consumed by regular filesystem I/O as well, not only swapping activity.
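A quick way to check whether point 1 applies is to look for oversized mbox files. The /var/mail path below is an assumption; your distro may use /var/spool/mail or Maildir folders instead:

````
# List mailbox files larger than the 10 MB threshold mentioned above
find /var/mail -type f -size +10M -exec ls -lh {} \;
````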

Cheers,

Risto

One thing that you can try, if you're willing to accept the tradeoff of needing a long fsck when booting after a system crash, is to mount a heavily-used file system as ext2 instead of ext3. Ext3's journaling can result in a lot of overhead.
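In /etc/fstab this is just a filesystem-type change, since an ext3 volume can generally be mounted as ext2 directly (the device name and mount point below are made-up examples; adjust them to your own layout):

````
# /etc/fstab -- mount the busy data partition without ext3 journaling
# (device and mount point are placeholders)
/dev/ubdb    /srv/data    ext2    defaults,noatime    0    2
````

You'll need to unmount and remount the partition (or reboot) for the change to take effect.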

I used this for a partition that holds seeds for BitTorrent, and it made a huge difference - I only start running low on tokens when the seed gets very close to 100%, and I don't run completely out; under ext3 it was running OUT of tokens long before.

@rodman:

Thanks for the quick response!!!!

I have noticed a weird trend! When I go to use SquirrelMail, my server slows down a LOT, and it even gets to the point where I can't SSH into it. Could my multiple uses of SquirrelMail be causing it?

Most likely it's your IMAP server, not SquirrelMail. Use Dovecot instead of the default imapd; it's better.

@rodman:

Why would my tokens be so low after a reboot?

Because they start from zero when the machine comes up?

I'd have assumed it was because it takes a lot of I/O to bring the system up.

Either way, it's likely to be low right after boot.

As someone mentioned, Dovecot will help with your SquirrelMail problems a bit, but the best thing you can do is install an IMAP proxy (on Debian, just apt-get install imapproxy).

Squirrelmail opens a new connection to your IMAP server for just about every operation - this very quickly spawns a lot of IMAP processes which will eat memory and io_tokens.

Using an imap proxy means squirrelmail generally only spawns 1 imap service per user, as the proxy keeps that session open and reuses it for each new request from squirrelmail.
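For reference, the Debian imapproxy package reads /etc/imapproxy.conf; the essentials look roughly like this (the values shown are illustrative, so check your distro's shipped file):

````
## /etc/imapproxy.conf (excerpt; values are examples)
server_hostname localhost   # the real IMAP server (e.g. Dovecot)
server_port 143
listen_port 1143            # the port SquirrelMail should connect to
cache_size 2048
````

Then point SquirrelMail's IMAP port at 1143 in its configuration so its connections go through the proxy.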

@TehDan:

Using an imap proxy means squirrelmail generally only spawns 1 imap service per user, as the proxy keeps that session open and reuses it for each new request from squirrelmail.

I didn't know about that, that's handy, thanks.

If you are using Courier-IMAP with vpopmail backed by a MySQL database, every request/action taken in SquirrelMail can create a major workload on your Linode, because it ends up paging memory in and out for Apache, PHP, Courier-IMAP, vpopmail, and MySQL on every single request. I'd follow TehDan's suggestion, and also reconfigure vpopmail to use a different database format if you don't have a lot of users. MySQL with vpopmail is only really necessary on systems with lots of users (a thousand or more, I'd guess), which as you mentioned is not your case. That would also remove vpopmail's dependency on MySQL being up and available.
