Storage Pooling?

Is it possible to have storage pooling across multiple linodes at the same data center? Can I steal storage allocation from one linode to make a larger disk on a second linode?

10 Replies

No. All storage is local to the Linode it belongs to.

  • Les

You could try a distributed filesystem like Ceph: http://ceph.com/ or weed-fs https://github.com/chrislusf/weed-fs

Any plan for offering this in the future?

I have 10-15 Linodes doing different things. Usually a webserver requires a lot of disk space but not more than 1-2GB of RAM, while, for example, a DB server requires little disk space but more vCPUs, and so on.

And if I want to have an rdiff-backup server, I need 200GB+ of disk space, which would be a total waste of RAM, CPU and money.

This would really be a killer feature, and it would make it possible to have all my VPS'es on Linode (I currently have a few servers outside Linode, because it would be too expensive for e.g. file/backup/storage servers).

I would expect this to be unlikely; Linodes don't use SAN storage. All the storage is local to the physical machine you are on (so make sure you have good backups; if that machine explodes, you can lose all the data for that Linode). This makes it problematic to pool data across machines, even in the same datacenter.

If you don't care too much about speed, then you can do a version of it yourself (e.g. NFS or DRBD, or maybe even iSCSI).
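For anyone curious what the NFS route looks like, here's a minimal sketch on Debian/Ubuntu, assuming two Linodes on the same private network (the IPs and paths are placeholders, not anything Linode-specific):

```shell
# On the "storage" Linode: export a directory over the private network.
# 192.168.140.0/24 stands in for your private-LAN range.
sudo apt-get install nfs-kernel-server
echo '/srv/shared 192.168.140.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On the "client" Linode: mount the export like a local disk.
sudo apt-get install nfs-common
sudo mkdir -p /mnt/shared
sudo mount -t nfs 192.168.140.10:/srv/shared /mnt/shared
```

Using the private IPs keeps the traffic inside the datacenter, which is also where the latency is tolerable.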

I'd expect that the data on the hosts is at least mirror-RAID'ed (or ZFS or something like that), so in case of a disk crash (which I guess is the most common hardware failure in servers these days), the service can continue without big downtime.

Sure, I can set up an NFS share between the servers and make it work, but that is kind of a workaround.

However, maybe an NFS cloud service would be a good idea for a new Linode product? (Like NodeBalancer.)

You can likely achieve your goal with GlusterFS within the datacenter. Latency might cause problems for multiple datacenters.
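A rough sketch of a two-node replicated GlusterFS setup, assuming glusterfs-server is installed on both Linodes and the hostnames/brick paths below are placeholders:

```shell
# On node1: peer with node2, then create a 2-way replicated volume
# from one brick (directory) on each node.
sudo gluster peer probe node2.example.com
sudo gluster volume create shared replica 2 \
    node1.example.com:/data/brick1 node2.example.com:/data/brick1
sudo gluster volume start shared

# Any Linode in the same datacenter can then mount the volume:
sudo mkdir -p /mnt/shared
sudo mount -t glusterfs node1.example.com:/shared /mnt/shared
```

With `replica 2`, every file lives on both nodes, so you trade half your raw capacity for surviving the loss of either Linode.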

@rohara:

Okay, I'll take a look at GlusterFS.

But this is very expensive: to get 500GB of storage, for example, I need a Linode 32GB (such a waste of RAM and vCPUs), which will cost $320/month.

Probably 3-6x the price of 500GB of storage at Amazon S3 (a bit difficult to calculate and compare, because it depends on the number of requests, in/out traffic, etc.).

I really love Linode. In general I find Linode competitive on their plans, especially because of the high quality service/stability/support. But on storage, not so much.

Their pricing for storage is very competitive as long as you're comparing with other providers who actually sell the same thing.

Trying to compare the cost of network-attached block storage to local storage is apples and oranges; the places that offer cheaper local storage tend to be dedicated servers where you're using lower-grade drives and managing your own RAID.

  • Les

You are right, I'm not comparing with others offering the same thing; I'm comparing based on what I need, which is a VPS with a large storage disk attached at an affordable price (doesn't have to be SSD).

I know that Linode doesn't offer this, so I'm just hoping this will change in the future.

Meanwhile, I think I'll look at integrating with Google Storage, so I can offshore some of my files there.
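For reference, offloading files to Google Cloud Storage can be as simple as the following `gsutil` sketch (the bucket name is a placeholder; `gsutil` ships with the Google Cloud SDK):

```shell
# One-time: create a bucket to hold the offloaded files.
gsutil mb gs://my-backup-bucket

# Mirror a local directory into the bucket; -m parallelizes uploads,
# -r recurses into subdirectories.
gsutil -m rsync -r /srv/backups gs://my-backup-bucket
```

Run the `rsync` from cron and only changed files get re-uploaded on each pass.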

@rasander:

I'd expect that the data on the hosts is at least mirror-RAID'ed (or ZFS or something like that), so in case of a disk crash (which I guess is the most common hardware failure in servers these days), the service can continue without big downtime.

AFAIK (and it always was, before the migration to SSDs) it's RAID 10: striping for speed, mirroring for redundancy. Even so, if the disk controller fails and writes garbage to all the disks, you're still screwed.
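For illustration, this is what assembling a RAID 10 array looks like at the Linux level with `mdadm` (device names are placeholders, and on Linode this would happen on the host, not inside your VM):

```shell
# Build a 4-disk RAID 10 array: data is striped across two mirrored pairs,
# so it survives the loss of one disk per pair.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Watch the initial sync progress.
cat /proc/mdstat
```

Note that RAID protects against disk failure, not against a misbehaving controller or an accidental `rm -rf`, which is why the backup advice above still applies.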
