Announcement: Linode Object Storage Early Adopter Program

Linode Staff

Hello Linode Community -

We’re ready to open Linode Object Storage to early adopters for testing! As an early adopter, you'll have the opportunity to test this new feature before it is widely available to customers and give us feedback on your experience.

With Linode Object Storage you can:

  • Scale your cloud architecture with affordable object storage
  • Easily transfer data from, or use it alongside, your existing S3 buckets
  • Store backups, host static sites, and share files or other media

If you want to participate, simply open a Support ticket titled “Object Storage Early Adopter Program” to request access and our team will add you to the list. Once you receive confirmation that you are enrolled in the EAP, Object Storage will appear in the navigation bar of the Linode Cloud Manager. From there you can easily create buckets, generate access keys, and use those keys with any S3-compatible client to manage your Object Storage.
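Because the service speaks the S3 API, any S3-compatible client can be pointed at it. As a rough sketch (the endpoint below is the Newark cluster used in this thread's example URLs; the key values, bucket name, and file name are placeholders), here is how that might look with boto3:

```python
import boto3

# Placeholder credentials: generate a real access key pair in Cloud Manager.
# The endpoint is the Newark (us-east-1) cluster mentioned elsewhere in this thread.
s3 = boto3.client(
    "s3",
    endpoint_url="https://us-east-1.linodeobjects.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.create_bucket(Bucket="my-first-bucket")                  # create a bucket
s3.upload_file("backup.tar.gz", "my-first-bucket",          # upload a local file
               "backups/backup.tar.gz")

# List what is in the bucket.
for obj in s3.list_objects_v2(Bucket="my-first-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```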

We have two guides for Object Storage:

We’re here to help at any time, so just open a Support ticket or comment on this post with questions or issues. This has been one of the most requested products from Linode in recent years, and we appreciate your support as we bring this new product to the cloud computing community.

Happy beta'ing!

The Linode Team

6 Replies

I've been playing around a bit with Object Storage since last night - with the storage being in Newark and my Linode in Frankfurt, it is… slow ;)

I've created an s3ql filesystem in one bucket, and I've mounted another one into my Nextcloud instance. Works nicely.

One caveat: the Cloud Manager seems delayed in showing the number of objects and the size of a bucket. Even though I've clearly had a few MB in a bucket for half an hour, it still shows as empty. It does eventually update, though.

Hi,
I haven't signed up to try this service yet, as I don't have a specific use case for it at the moment. However, I'm a bit concerned that another customer might claim a bucket name you wanted to use. If all customers share a single access point, rather than each customer getting their own access point to their buckets, I could see conflicts over bucket names and use of the object storage service. Perhaps a slight change to the sub-domain scheme would help, such as including some form of customer ID. That way, each customer would have their own bucket names and access points, and wouldn't conflict with anyone else, even if another customer picked the same name. Just a thought.

Blake

I agree with Blake's concerns - this is something I've always disliked about S3 and other S3-compatible services.

However, the easy answer, which is what I'll do, is to prefix the bucket name with something unique to you, like your Linode username.

So your URL becomes: [linode-username]-[bucket name].us-east-1.linodeobjects.com

E.g. andysh-my-first-bucket.us-east-1.linodeobjects.com

I suspect this isn't something that's easily solvable by Linode, and I'm happy to live with it.

The slight improvement I can see with Linode's implementation is that a bucket name only has to be unique within the cluster, so a customer using NJ and one using UK could have the same bucket name. With AWS, it has to be unique across all of AWS's regions.
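To illustrate that prefixing convention, a minimal sketch (assuming boto3 and the Newark endpoint from the URLs above; the username and bucket name are just the examples from this post, and the keys are placeholders):

```python
import boto3

LINODE_USERNAME = "andysh"  # example username from this post

def prefixed(bucket: str) -> str:
    # Prefix every bucket with your Linode username so names are unlikely to collide.
    return f"{LINODE_USERNAME}-{bucket}"

s3 = boto3.client(
    "s3",
    endpoint_url="https://us-east-1.linodeobjects.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.create_bucket(Bucket=prefixed("my-first-bucket"))
# The bucket is then reachable at:
#   https://andysh-my-first-bucket.us-east-1.linodeobjects.com
```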

I have been using Object Storage for about 3 days now. It looks great and works as intended.

MY QUESTION

Is there any plan to implement a "metadata" attribute/method on the object storage, so that metadata information can be obtained with a command similar to:

linode-cli obj la

E.g.
1) to list all metadata of all objects in all buckets

linode-cli obj metadata

2) to list all metadata of all objects in a "named" bucket

linode-cli obj my-bucket metadata

The "metadata" attribute/method is implemented in s3fs python package.

It would be nice to have an interface to this through linode-cli.

See link to s3fs documentation below:

https://buildmedia.readthedocs.org/media/pdf/s3fs/latest/s3fs.pdf
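Until something like that exists in linode-cli, a hedged sketch of the s3fs route mentioned above (the endpoint and key values are placeholders; the method names follow the linked s3fs documentation):

```python
import s3fs

# Point s3fs at Linode Object Storage (Newark cluster shown); keys are placeholders.
fs = s3fs.S3FileSystem(
    key="YOUR_ACCESS_KEY",
    secret="YOUR_SECRET_KEY",
    client_kwargs={"endpoint_url": "https://us-east-1.linodeobjects.com"},
)

# Print the metadata of every object in a bucket.
for path in fs.ls("my-bucket"):
    print(path, fs.metadata(path))
```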

@marcgerges Thanks for taking the time to beta test! I’m going to reach out to our developers to see if we’ve encountered slow updates before and if so, what we are doing to correct that.

@tech10 @andysh Thank you both for the feedback. I’ve made sure our developers are aware of both of your concerns regarding the way bucket naming is handled.

@Object_user We do not have plans to implement “metadata” at launch but it is on our roadmap as a potential feature.

@dsmith - the slowness is easily explained in this case by my Linode and the S3 bucket being separated by half the world :) I am quite convinced Object Storage is blazingly fast otherwise.

In fact, I'm checking every few days whether Frankfurt is available yet - as soon as it is, this'll be so much better than my current environment with a certain other cloud provider.
