Announcement: Linode Object Storage Early Adopter Program

Hello Linode Community -

We’re ready to open Linode Object Storage to early adopters for testing! As an early adopter, you'll have the opportunity to test this new feature before it is widely available to customers and give us feedback on your experience.

With Linode Object Storage you can:

  • Scale your cloud architecture with affordable backup storage
  • Easily transfer data from your existing S3 buckets, or use Object Storage alongside them
  • Store backups, host static sites, and share files or other media

If you want to participate, simply open a Support ticket titled “Object Storage Early Adopter Program” to request access and our team will add you to the list. Once you receive confirmation that you are enrolled in the EAP, Object Storage will be listed in the navigation bar of the Linode Cloud Manager. There you can easily create buckets, generate access keys, and use those keys with any S3-compatible tooling.
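Once you have a pair of access keys, any S3-compatible client should work. As a minimal sketch, here is what an s3cmd configuration might look like; the keys are placeholders, and the endpoint assumes the us-east-1 cluster mentioned elsewhere in this thread (substitute your own cluster):

```ini
; ~/.s3cfg - minimal s3cmd configuration (hypothetical values)
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = us-east-1.linodeobjects.com
host_bucket = %(bucket)s.us-east-1.linodeobjects.com
```

With this in place, `s3cmd mb s3://my-bucket` and `s3cmd put file.txt s3://my-bucket` address the Object Storage cluster rather than AWS.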

We have two guides for Object Storage:

We’re here to help at any time, so just open a Support ticket or comment on this post with questions or issues. This has been one of the most requested products from Linode in recent years, and we appreciate your support as we bring this new product to the cloud computing community.

Happy beta'ing!

The Linode Team

3 Replies

Been playing around a bit with object storage last night - with the storage being in Newark and my Linode in Frankfurt, it is… slow ;)

I've made an s3ql filesystem in one bucket, and I've mounted another one into my Nextcloud instance. Works nicely.

One caveat: the Cloud Manager seems delayed in showing the number of objects and the size of a bucket. Even though a bucket has clearly held a few MB for half an hour, it shows as empty. It does eventually update, though.

Hi,
I haven't signed up to try this service yet, as I don't have a specific use case for it at the moment. However, I'm a bit concerned that other customers may choose the name of a bucket that you wanted to use. If all customers share a single access point, rather than each customer being given their own access point to their buckets, I could see conflicts occurring in the use of the object storage service and bucket names. Perhaps a slight change to the sub-domain access would be helpful, such as some type of inclusion of a customer ID number. That way, each customer could have their own bucket names and access points, and wouldn't conflict with others, even if others had the same name you selected. Just a thought.

Blake

I agree with Blake's concerns - this is something I've always disliked about S3 and other S3-compatible services.

However, the easy workaround, which I'll use, is to prefix the bucket name with something unique to you, like your Linode username.

So your URL becomes: [linode-username]-[bucket name].us-east-1.linodeobjects.com

E.g. andysh-my-first-bucket.us-east-1.linodeobjects.com

I suspect this isn't something that's easily solvable by Linode, and I'm happy to live with it.

One slight improvement in Linode's implementation is that a bucket name only has to be unique within a cluster, so a customer using NJ and one using UK could have the same bucket name. With AWS, bucket names must be unique across all of AWS's regions.
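The username-prefix convention above is easy to automate. Here is a small sketch; the helper name and the 3–63 character bucket-name limit it enforces (the standard S3 constraint) are my own additions, not anything Linode provides:

```python
def prefixed_bucket(username: str, bucket: str, cluster: str = "us-east-1") -> str:
    """Build a collision-resistant bucket hostname by prefixing the
    Linode username, following the convention suggested above."""
    name = f"{username}-{bucket}"
    # S3 bucket names must be 3-63 characters long.
    if not 3 <= len(name) <= 63:
        raise ValueError(f"bucket name {name!r} must be 3-63 characters")
    return f"{name}.{cluster}.linodeobjects.com"

print(prefixed_bucket("andysh", "my-first-bucket"))
# → andysh-my-first-bucket.us-east-1.linodeobjects.com
```

This keeps your bucket names unique within a cluster even if another customer picks the same base name.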
