Why is so much RAM used on my website?
It's really bugging me; I want to reduce server resource use. Sometimes the server runs out of memory and Apache crashes, but I don't have that many sites on it. I put it down to all the memory used by WP.
Yesterday we went live with a website that has 0 visitors, as it's a new domain and hasn't been advertised. With 0 visitors, memory is still up around 300 MB and 3% CPU according to Longview. I have disabled WordPress's built-in cron, so with zero visitors I don't understand why the user for this website is using so much RAM. There is no email on it, just the website.
Here is htop right now, 0 visitors:
```
PID   USER      PRI NI VIRT RES   SHR   S CPU% MEM% TIME+   Command
8722  storiesfr 20  0  291M 94724 16328 S 0.0  1.2  0:25.79 /opt/rh/rh-php70/root/usr/bin/php-cgi
7686  storiesfr 20  0  290M 94304 16484 S 0.0  1.2  0:24.47 /opt/rh/rh-php70/root/usr/bin/php-cgi
16941 storiesfr 20  0  278M 84912 15196 S 0.0  1.0  0:03.21 /opt/rh/rh-php70/root/usr/bin/php-cgi
```
Longview reports an average of 200 MB to 300 MB over the past 30 minutes.
Does anyone know why RAM and CPU are high in Longview for a site with 0 visitors? What else could this user on the server be doing to use so much? Or is it a red herring? I can see from top that it's VIRT that is high, while RES is less than 100 MB. Would RES be the important figure? Is Longview unsuitable here because it's showing total RAM including VIRT?
Having said that, one of my ExpressionEngine sites, which has more visitors on it at the moment, is showing 26 MB RAM in Longview, but in top its figures are similar to the one above: 292M VIRT, 26900 RES, 19552 SHR, but 0 CPU and 0.3% MEM.
So it's all a bit confusing how to identify what the actual RAM usage is, and then what's causing it.
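For what it's worth, RES (resident set size) is the number that actually maps to physical RAM; VIRT includes shared libraries and reserved-but-untouched address space, so anything that sums VIRT per user will wildly overstate real usage. A minimal sketch of summing RES instead (the username `storiesfr` is taken from the htop output above):

```shell
# Sum resident memory (RSS) across all of one user's processes.
# Note: shared pages are counted once per process, so even this
# slightly overstates the true footprint (smem's PSS column is more exact).
ps -o rss= -u storiesfr | awk '{sum += $1} END {printf "%.1f MB\n", sum/1024}'
```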
```
PID   USER     PRI NI VIRT    RES    SHR   S CPU% MEM% TIME+    Command
2994  mysql    20  0  1463.8m 310.9m 9.6m  S 0.0  15.6 18:35.35 mysqld
20077 www-data 20  0  884.3m  147.2m 7.6m  S 0.0  7.4  25:03.92 mega-cmd-server
29252 www-data 20  0  479.2m  49.8m  37.8m S 0.0  2.5  0:00.08  php-fpm7.2
20826 www-data 20  0  565.5m  38.7m  6.5m  S 0.0  1.9  0:44.48  nginx
3467  root     20  0  115.2m  32.3m  4.9m  S 0.0  1.6  3:45.77  linode-longview
20810 root     20  0  474.1m  30.1m  24.2m S 0.0  1.5  0:03.14  php-fpm7.2
3431  memcache 20  0  342.8m  27.5m  1.8m  S 0.0  1.4  0:34.58  memcached
20827 www-data 20  0  286.2m  13.5m  2.6m  S 0.0  0.7  0:00.20  nginx
20824 root     20  0  286.2m  12.6m  2.0m  S 0.0  0.6  0:00.00  nginx
...
```
Remove the 147 MB used by MEGAcmd (I use my MEGA account as an extra layer of backups).
I started with EasyEngine
My tool of choice, and my control panel at the same time, is…
ansible = WMSD (weapon of mass server destruction)
Just joking; it's a great tool if used properly.
1) Allow more FPM children but lower the limit of requests per child. In general, PHP tends to leak memory, and restarting children frequently is almost always a good idea; it keeps the problem in check.
2) Profile the PHP code itself; you may well have a plugin that does something really stupid.
3) Increase the OS swap
4) Test performance with Apache ab to find your concurrency cutoff point, then tweak the pm.* settings to improve performance.
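For point 1, the relevant knobs live in the FPM pool file. A hedged sketch with illustrative numbers (the path and values are assumptions, not a recommendation; size pm.max_children against your own RAM budget):

```ini
; /etc/php-fpm.d/www.conf (illustrative values)
pm = dynamic
pm.max_children = 10     ; hard cap on workers, so total PHP RAM use is bounded
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 200    ; recycle each child after 200 requests to contain leaks
```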
Eventually, I found Virtualmin/Webmin overkill for running a single website, so I converted the server to plain vanilla CentOS 7, without a control panel.
> 1) Allow more FPM children but lower the limit of requests per child. In general, PHP tends to leak memory, and restarting children frequently is almost always a good idea; it keeps the problem in check.
Actually, MySQL (MariaDB) seems to be the one that crashes most; it's often OOM-killed.
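If mysqld is the usual OOM victim, the InnoDB buffer pool is normally its largest allocation, and trimming it (plus disabling performance_schema on very small boxes) is the first thing worth trying. An illustrative my.cnf fragment (path and sizes are assumptions; tune to your instance):

```ini
# /etc/my.cnf.d/server.cnf (illustrative values)
[mysqld]
innodb_buffer_pool_size = 128M   # keep this modest on a RAM-starved box
performance_schema = OFF         # saves a noticeable chunk of RAM on small servers
```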
> 2) Profile the PHP code itself; you may well have a plugin that does something really stupid.
There aren't any plugins for WordPress that do this anymore. There used to be one from GoDaddy that everyone raved about, but they stopped development and it no longer works.
> 3) Increase the OS swap
I thought I read some time ago that I shouldn't do that on Linodes?
> 4) Test performance with Apache ab to find your concurrency cutoff point, then tweak the pm.* settings to improve performance.
I use ApacheBench, but it doesn't recommend any changes; it just reports results for the config I have.
I would still blame PHP before anything else, because it's rather trivial to write scripts that, under multiple executions, hog all available memory. If this is compounded by locking or blocking IO (e.g. calling a very slow foreign API) and the webserver's FPM memory constraints not being configured properly, then exactly what you're describing would happen.
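One way to sanity-check those FPM constraints is back-of-envelope arithmetic: worst-case PHP memory is roughly the average per-child RSS times pm.max_children, and that product has to fit in the RAM you can spare. A sketch (the process name `php-fpm7.2` is taken from the top output above; max_children=10 is an assumed value):

```shell
# Average RSS per php-fpm child, projected to the configured worst case.
ps -o rss= -C php-fpm7.2 | awk -v max_children=10 \
  '{sum += $1; n++} END {if (n) printf "avg %.0f MB/child -> ~%.0f MB at max_children\n", sum/n/1024, max_children*sum/n/1024}'
```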
Yes, you should have spare swap memory, because your only other alternative is a very strict configuration that carefully constrains memory use. Simply put, without swap any spontaneous load spike is liable to crash your services, which in turn often creates a snowball effect as requests pile up and systemd restarts the services. There is no meaningful performance impact in having swap enabled; you can fine-tune its impact with the kernel swappiness setting (under vm.* in sysctl, IIRC).
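For completeness, adding a modest swap file and lowering swappiness looks roughly like this (run as root; the 1 GB size and swappiness value are illustrative, and it's worth checking your host's guidance first given the earlier comment about Linode advice):

```shell
# Create and enable a 1 GB swap file (size is illustrative).
fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Prefer RAM; only swap under real pressure. Persist this in /etc/sysctl.conf.
sysctl vm.swappiness=10
```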
ab will not recommend changes; it shows you performance at a given concurrency and request volume. You determine a cutoff point by starting with some values for -c and -n (for example -c 10 -n 50) and then increasing both slightly until you see a significant jump in response times. Do this twice: once for static files and once for a PHP script that adequately represents an average request to your PHP code.
Once you know where the cutoff point is, you can start experimenting with the webserver and FPM settings and see how they affect the cutoff.
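The ramp described above might look like this in practice (the URL and step sizes are placeholders; run it once against a static asset and once against a representative PHP page, and watch for the concurrency where mean response time jumps):

```shell
# Increase concurrency step by step and compare the mean response time.
for c in 5 10 20 40 80; do
  echo "concurrency=$c"
  ab -q -c "$c" -n $((c * 10)) http://example.com/index.php | grep 'Time per request'
done
```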