S3 “folder-per-client” permissions strategy

Linode Staff

I was originally planning on an S3 bucket per client, with a bucket policy restricting access to each bucket to a single IP (the client's single-tenant server). However, most S3-compatible service providers limit accounts to around 100 buckets, which didn't make sense to me until I came across this guidance from another provider:

"Because bucket operations work against a centralized, global resource space, it is not appropriate to create or delete buckets on the high-availability code path of your application."

So I need to adjust our strategy to a single bucket with a folder per client. If a client server is ever compromised, I don't want someone holding that server's credentials to have access to every other client folder in the bucket, so I need a restriction that says "only give this IP access to this folder". Easy enough; there is a simple policy for that:

{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "S3PolicyId1",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "bucket location"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "ipv4.address/32",
            "ipv6.address/64"
          ]
        }
      }
    }
  ]
}
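
For reference, here is roughly how a policy like that could be pushed with boto3 against an S3-compatible endpoint. The bucket name, folder, IPs, and endpoint URL below are placeholders for illustration, not real values:

import json
import boto3

# All values below are placeholders for illustration only.
BUCKET = "example-bucket"
CLIENT_FOLDER = "client-a"
CLIENT_IPS = ["203.0.113.10/32", "2001:db8::/64"]

policy = {
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": f"DenyOthers-{CLIENT_FOLDER}",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}/{CLIENT_FOLDER}/*"],
            "Condition": {"NotIpAddress": {"aws:SourceIp": CLIENT_IPS}},
        }
    ],
}

# boto3 works against S3-compatible providers by overriding endpoint_url;
# the endpoint shown here is a placeholder.
s3 = boto3.client("s3", endpoint_url="https://example-region.example-object-storage.com")

# Bucket policies are applied as a single JSON document for the whole bucket,
# so every client folder's statement has to live in this one document.
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))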

Problem: some providers limit S3 bucket policies to 20KB in size, and most others are the same or similar. The example above is ~500 bytes, and the statement I would have to repeat to apply the same restriction to each additional client folder within the bucket is ~400 bytes of that.

At ~400 bytes per statement against a 20KB ceiling, that works out to roughly 50 clients per bucket. Further, a limit of 100 buckets caps me at about 5,000 clients, and even 1,000 buckets (if I were granted an increase) would still be a hard limit of 50,000 clients per account.
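
To make that arithmetic concrete, here is a small sketch (again with placeholder values) that builds one deny statement per client folder and finds where the serialized policy crosses a 20KB limit:

import json

POLICY_LIMIT_BYTES = 20 * 1024  # assumed 20KB bucket-policy limit

def statement_for(bucket, folder, ips):
    # One per-folder, per-IP deny statement, mirroring the policy above.
    return {
        "Sid": f"DenyOthers-{folder}",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}/{folder}/*"],
        "Condition": {"NotIpAddress": {"aws:SourceIp": ips}},
    }

def policy_size(bucket, n_clients):
    # Serialize a policy with one statement per client and measure its size.
    statements = [
        statement_for(bucket, f"client-{i}", [f"203.0.113.{i % 254 + 1}/32"])
        for i in range(n_clients)
    ]
    policy = {"Version": "2012-10-17", "Id": "S3PolicyId1", "Statement": statements}
    return len(json.dumps(policy).encode("utf-8"))

n = 1
while policy_size("example-bucket", n) <= POLICY_LIMIT_BYTES:
    n += 1
print(f"Policy exceeds 20KB at roughly {n} client folders")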

That would obviously not work in the long term for a growing business. So how do other businesses using S3 with a folder-per-client layout achieve separation of permissions?

2 Replies

Just to share some insight about our product specifically, Object Storage allows 1,000 buckets per region. Since we currently have availability in both Newark and Frankfurt, the total bucket limit for your account is 2,000 buckets. I understand that you'd like to plan for growth as well. You may be happy to know that this limit is adjustable. If you ever need more, you can just open a support ticket and share some details about your use case with us. After review, Support may grant an increase as applicable.

I also see that you're concerned about policy limitations. However, I don't believe this will be an issue on our platform. After reviewing our documentation, I'm not finding any information that indicates we apply this type of restriction. For clarity, here are the four limits that apply to each account:

  • Total Storage per region: 50TB
  • Maximum Objects per region: 50,000,000
  • Maximum Object Size: 5GB
  • Maximum Buckets per region: 1,000

Other than that, you should be good to go. I'm interested to see what others chime in with.

Finally, here are all of our guides on Object Storage. If you encounter any issues, please do let us know.

Thanks @dcortijo for bringing the question up here. My concern then is with the policy file size and the impact it might have on performance. I imagine that a larger policy would have a negative impact on the performance of bucket requests and drive up the overhead on your servers. For instance, if I had 100k clients in one bucket, my policy would be roughly 50MB based on the example I provided for the OP (~500 bytes × 100k clients).

Are there other methods, such as creating sub-users in the Linode dashboard and using ACLs to manage access to the individual folders?
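
Just to illustrate what I mean by the ACL route, here is a rough sketch against the generic S3 API; the endpoint, bucket, key, and canonical user ID are placeholders, and whether Linode sub-users map to an S3 canonical ID at all is exactly what I'm unsure about:

import boto3

# Placeholders only; endpoint, bucket, key, and canonical ID are illustrative.
s3 = boto3.client("s3", endpoint_url="https://example-region.example-object-storage.com")

# S3 ACLs are applied per object (or per bucket), not per folder prefix,
# so a grant like this would have to be set on every object under a client's prefix.
s3.put_object_acl(
    Bucket="example-bucket",
    Key="client-a/backup.tar.gz",
    GrantFullControl='id="example-canonical-user-id"',
)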
