Should PHP run as www-data or as the user who owns the website?

I've always set up PHP to run as www-data, which is the same user as nginx.

Recently, I've been having some issues with some bulletin board software (uploads, etc.), and in order to get everything working I either need to:

  1. chmod -R 777 ., which seems insane

  2. change PHP for that virtual host to run as the user who owns the files

  3. try to work out piecemeal which directories should be writable or not.

I believe (3) is the correct choice, but it's somewhat infeasible (there always seems to be another directory that needs to be writable, especially with a sprawling system that adds files all over the place).

I'm trying (2) now, and it seems OK.

Are there any other options?

11 Replies

PHP should NEVER have write access to or ownership of the files it executes, or the directories they reside in. I cannot emphasize this strongly enough. It just takes one security vulnerability, and now you're phishing, hosting a reverse shell, spamming, participating in an outbound DoS, or otherwise compromised, because an attacker used PHP to write a malicious script into a place where it could be executed. #3 is the best solution, best implemented by changing the group of the upload directory to www-data and giving group write access. Also ensure that nginx will not ask PHP to execute scripts stored in that directory.
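Roughly, that could look like the following (the paths are examples, adjust to your own layout):

    # let PHP (running as www-data) write only to the upload directory
    chgrp -R www-data /var/www/example/uploads
    chmod -R g+w /var/www/example/uploads

    # nginx: never hand files in the upload directory to PHP
    # (place this above the generic "location ~ \.php$" block; the first matching regex location wins)
    location ~* ^/uploads/.*\.php$ {
        return 403;
    }

With that in place, even if an attacker manages to upload a .php file, nginx will refuse to pass it to PHP-FPM.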

My personal opinion is to never use a system-wide user such as www-data, apache, or nobody. I've found the most secure approach is to use separate user accounts per virtualhost.

I believe that each virtualhost should have its own home, its own tmp directory (for storing session files, etc.), mail storage within the home, and a completely isolated environment (like a jailed shell) with limited access to system resources.

While I don't like some aspects of cPanel, I do agree with their security model and I use something similar in my own custom systems, for example:

/home/virtualhost/ (home)

/home/virtualhost/tmp/ (tmp directory)

/home/virtualhost/public_html/ (web directory, with a www symbolic link)

/home/virtualhost/etc/ (passwd, aliases, quota, etc)

/home/virtualhost/mail (email storage for dovecot)
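A rough sketch of how such a home might be created (the user name and paths are just the placeholders from above):

    useradd --create-home --home-dir /home/virtualhost --shell /usr/sbin/nologin virtualhost
    mkdir -p /home/virtualhost/{tmp,public_html,etc,mail}
    ln -s public_html /home/virtualhost/www
    chown -R virtualhost:virtualhost /home/virtualhost
    chmod 0700 /home/virtualhost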

All Apache and PHP execution runs as the virtualhost user, with strict limits applied via SELinux and PHP's open_basedir. This allows me to set very strict file permissions like 0600/0700, and my web applications still work fine.
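As an illustration of the PHP side of that, a per-virtualhost PHP-FPM pool could look something like this (the pool name, socket path and open_basedir values are assumptions, not a drop-in config):

    ; /etc/php-fpm.d/virtualhost.conf (sketch)
    [virtualhost]
    user = virtualhost
    group = virtualhost
    listen = /run/php-fpm/virtualhost.sock
    listen.owner = virtualhost
    listen.group = virtualhost
    ; confine PHP to this virtualhost's home
    php_admin_value[open_basedir] = /home/virtualhost/public_html:/home/virtualhost/tmp
    php_admin_value[upload_tmp_dir] = /home/virtualhost/tmp
    php_admin_value[session.save_path] = /home/virtualhost/tmp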

This system also gives me several extra abilities. For example, I can move a /home/virtualhost/ directory to another server quite easily, or even have multiple home directories like /home1/ and /home2/ which may be stored on separate GlusterFS clusters. Overall, administration is much easier with the above structure than with storing emails under /opt/Maildir/, web files under /var/www/ and temporary files under /tmp/. In some cases, I even store MySQL/MariaDB databases within a /home/virtualhost/mysql/ directory, but that is not always easy to do and requires extra work.

That is just my personal view, others prefer to do things differently.

@dwfreed:

PHP should NEVER have write access to or ownership of the files it executes, or the directories they reside in.

That's what I'm thinking as well. But it seems almost unworkable for some software.

I've got feelers out on the forums for the software as well. It seems very unsafe, but… I understand why some programs are set up that way. If you assume all your users are SFTP gods or SSH gurus, you can make your installation instructions say "copy X, Y and Z to /vhosts/software/plugins/foo, and then run an installer".

If instead you assume your users need more handholding, and you'd like them to just be able to upload something in the software's Admin CP and let the software install it, well, then you're stuck…

@IfThenElse:

My personal opinion is to never use a system-wide user such as www-data, apache, or nobody. I've found the most secure approach is to use separate user accounts per virtualhost.

How does your view square with dwfreed's in this thread?

Sounds like you're advocating what I'm trying right now (PHP pool runs as the user that owns the files).

Doesn't this leave you somewhat open to someone grabbing complete control of your server by getting a PHP file executed from a bad spot?

@IfThenElse:

My personal opinion is to never use a system-wide user such as www-data, apache, or nobody. I've found the most secure approach is to use separate user accounts per virtualhost.

I believe that each virtualhost should have its own home, its own tmp directory (for storing session files, etc.), mail storage within the home, and a completely isolated environment (like a jailed shell) with limited access to system resources.

Running each virtualhost as its own user does improve isolation a bit: with proper filesystem permissions (or SELinux restrictions, but really, just use the filesystem permission capabilities that are already there), it becomes impossible for one virtualhost's PHP scripts to access another virtualhost's content. However, unless you're using SELinux to forbid PHP from writing to the scripts it executes, you're still vulnerable to an attacker using PHP to write a malicious script that could then be executed, or to inject malicious code into an existing script. I've seen your previous post about monitoring for file changes, and while that helps detect malicious changes after the fact, it does nothing to prevent them in the first place (and an ounce of prevention is worth a pound of cure).

@IfThenElse:

storing emails under /opt/Maildir/

Who uses /opt/Maildir? /opt is for vendor-provided self-contained software installations (e.g., Google Chrome installs here), or other vendor-provided software that wants its own filesystem hierarchy (which can be useful for things like not having to worry about distro filesystem layout specifics). When using the Maildir layout, $HOME/Maildir is typical, provided the mail user has an account on the system; otherwise somewhere in /var like /var/spool/mail is used (/var/spool/mail is the default when using a single file to store all mail).

I am not sure what you want me to answer, because each admin/devops engineer chooses their own setup based on their particular specifications and requirements. For my clients, and the applications that I design and code, this sort of setup is suitable and appropriate.

If there is a breach, then it's contained and remains isolated within a single virtualhost. The immediate action is to take a snapshot of the compromised /home/virtualhost account, then delete it entirely and restore from an offline backup, while at the same time the attack vector is identified and neutralised. That's quite easy to do when dealing with home directories; everything is contained within that directory tree.

For my large distributed installations over a cluster, my response is different: I just delete the entire Linode server and deploy a new one, again while identifying and neutralising the attack vector. I use Ansible for configuration, management and deployment.

If you have a different strategy and system implementation and it works for you then no problem, there is no single correct answer.

A per-vhost user works great for many reasons, except it doesn't work under mod_php in Apache. You can configure FPM to switch to the correct user per pool, but for mod_php every single "solution" is a hack that doesn't pay off in the long term.
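For reference, the FPM route is straightforward under Apache with mod_proxy_fcgi (Apache 2.4.10+); a sketch, with the socket path assumed:

    # inside the site's <VirtualHost>: hand .php files to that site's own FPM pool
    # (requires mod_proxy and mod_proxy_fcgi)
    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php-fpm/site1.sock|fcgi://localhost"
    </FilesMatch>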

Another option is containers, such as LXC or Docker.
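A minimal Docker sketch of that idea, with one PHP-FPM container per site (the image tags and paths are assumptions):

    # docker-compose.yml (sketch): each site gets its own isolated PHP-FPM container
    services:
      php-site1:
        image: php:8.3-fpm
        volumes:
          - ./site1/public_html:/var/www/html:ro   # code mounted read-only
          - ./site1/uploads:/var/www/html/uploads  # only uploads are writable
      nginx:
        image: nginx:stable
        ports:
          - "80:80"
        volumes:
          - ./site1/public_html:/var/www/html:ro
          - ./nginx.conf:/etc/nginx/nginx.conf:ro

nginx would then pass PHP requests to php-site1:9000 over the compose network (the php:fpm images listen on TCP port 9000 by default).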

I run nginx and php-fpm in the following fashion…

Site owner: Bob

User: bob

Group: bob

This is a regular user account which may have a shell (or not).

There's a php-fpm pool for Bob's website. It runs as:

User: www-bob

Group: bob

www-bob is an unprivileged user, no shell, no password.

Finally, nginx runs as user 'nginx'. This user is added as a member of group 'bob'.

Everything under Bob's website is owned by bob:bob.

Directories are chmod 750, files 640.

If a directory needs to be php-writable (e.g. 'uploads'), it's made group-writable (by root or bob):

chmod g+w uploads
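Putting that together, a sketch of the commands (the user names are the examples above; the docroot path is an assumption):

    # php-fpm pool user: no shell, no home, primary group bob
    useradd --system --no-create-home --shell /usr/sbin/nologin -g bob www-bob
    # let nginx read Bob's files via the group
    usermod -a -G bob nginx

    # ownership and baseline permissions
    chown -R bob:bob /home/bob/site
    find /home/bob/site -type d -exec chmod 750 {} +
    find /home/bob/site -type f -exec chmod 640 {} +

    # only the upload directory is group-writable for PHP
    chmod g+w /home/bob/site/uploads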

Another good practice for writable directories is to prohibit execution of code from them (in fact, such a directory should be a mounted location with the noexec flag set, so that nothing can be executed from it either).
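One way to do that is a bind mount remounted with noexec (the paths are placeholders; newer util-linux versions can also set these flags directly in fstab):

    # bind-mount the uploads directory onto itself, then strip exec/suid/dev from it
    mount --bind /home/bob/site/uploads /home/bob/site/uploads
    mount -o remount,bind,noexec,nosuid,nodev /home/bob/site/uploads

Note that noexec only blocks direct execution of uploaded binaries and scripts; you still want the web server configured not to pass files in that directory to PHP.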

This is an interesting topic. For some of my servers (email, Nextcloud, etc.), I am using Apache and PHP-FPM on Ubuntu with the following setup:

  • Apache user: www-data

  • User account home: /home/

  • User account permissions: bob:bob, then added to the www-data group with usermod -a -G www-data bob

  • Doc root: /home/bob/public_html

  • PHP-FPM: site-specific php-fpm pool where user = bob and group = bob, with the pool's PHP socket being /var/run/php5-fpm-domain.com.sock. In the FPM pool file, listen.owner = www-data and listen.group = www-data

Does this seem like a sensible setup or is there a security gotcha that I am missing? The user accounts have shell access.

@arkayv this IS in fact a sensible setup, and is probably the best setup you can go with on a box that has to host multiple unrelated projects. It is also a very sound base for high-performance configurations. The only thing that is slightly problematic in this approach is that an FPM pool per user/project means increased memory and process/file-handle consumption.

I would also recommend moving the log and the conf file, as well as the FPM socket, into /home/bob (and if you trust the users not to bungle it up, you could even allow them to restart/reload FPM via sudo).
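If you do hand out that sudo permission, keep it as narrow as possible; a sketch of a sudoers entry (the service name varies by distro and PHP version, e.g. php8.3-fpm on Debian/Ubuntu):

    # /etc/sudoers.d/bob (sketch): allow bob to reload only the PHP-FPM service
    bob ALL=(root) NOPASSWD: /usr/bin/systemctl reload php-fpm.service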
