How did Linode manage to increase disk space?

Hello,

I have 2 Linodes on different hosts, both with 250+ days of uptime; I've never restarted them since I got them. Disk space isn't really my thing, so I'm not going to restart just to take advantage of this. :P But it was a nice decision. Thanks, Linode! :)

Anyway, let's get to the topic.

How did Linode manage to give us these disk space increases? I'm not asking about pricing or profit.

1. How did they add disk space to the host nodes to give this increase? Linode 512 customers got an extra 4 GB (if I'm correct), and the hosts for 512 plans have about 20 clients each, so the increase needs roughly 80 GB of usable RAID 10 space per host.
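The back-of-the-envelope math here can be sketched in a few lines (the 20-clients-per-host and 4 GB figures are the guesses above, not official Linode numbers):

```python
# Rough math for the extra space each host would need.
# Both inputs are guesses from this thread, not official figures.
clients_per_host = 20        # assumed Linode 512s per host
increase_per_client_gb = 4   # assumed disk increase per plan

extra_usable_gb = clients_per_host * increase_per_client_gb
# RAID 10 mirrors every stripe, so raw capacity is double the usable space.
extra_raw_gb = extra_usable_gb * 2

print(extra_usable_gb)  # 80
print(extra_raw_gb)     # 160
```

So the 80 GB figure is usable space; in raw RAID 10 disk it would be twice that.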

I don't think Linode planned from the start to leave empty space to give away in a promotion, because that wouldn't make sense!

And if they added new disks to the RAID 10 arrays to make this promotion happen, I'd bet they would need to restart the host nodes to configure the new disks, and that didn't happen.

Since Linode doesn't use a SAN, I'm wondering how they managed to do this.

Maybe I'm not technically up to date on how servers can take new disks in a breeze these days, but any tips from you guys and the Linode staff are really appreciated! :)

16 Replies

RAID disks can be hot-swapped, so no restarts are needed (cool, huh?), so I'd guess they put in newer, bigger drives. Also, for Linode the increase is potentially more than 4 GB per 512, because of backups and standby servers.

It's a VPS, not a physical server; they can migrate from one host node to another without interruption.

Maybe the VPSes were migrated to host nodes with bigger HDDs? Also, each VPS can get additional HDD space (see the Extras tab), which means the physical nodes have spare space.

@obs:

RAID disks can be hot-swapped, so no restarts are needed (cool, huh?), so I'd guess they put in newer, bigger drives. Also, for Linode the increase is potentially more than 4 GB per 512, because of backups and standby servers.

Oops, I forgot about the hot-swapping trick! But dude, it takes a lot of money to put new disks in there, and if they did that, I bet they now have lots (thousands) of used disks sitting on shelves! :P

@OZ:

It's a VPS, not a physical server; they can migrate from one host node to another without interruption.

Maybe the VPSes were migrated to host nodes with bigger HDDs? Also, each VPS can get additional HDD space (see the Extras tab), which means the physical nodes have spare space.

Actually, Xen's xm save doesn't help much in this case; it requires a lot of manual work, and the chance of something failing is very high. Yet people haven't complained about their Linodes getting stuck, freezing, or losing data, and we should expect a fair number of failures given Linode's large customer base!

@ruchirablog:

Actually, Xen's xm save doesn't help much in this case; it requires a lot of manual work, and the chance of something failing is very high. Yet people haven't complained about their Linodes getting stuck, freezing, or losing data, and we should expect a fair number of failures given Linode's large customer base!

Actually, migration is a routine operation and doesn't cause any of that sticking, freezing, or data loss.

@OZ:

Actually, migration is a routine operation and doesn't cause any of that sticking, freezing, or data loss.

Linode uses local disks; your "routine operation" works best with a SAN! And it does produce errors! They can't just hit a button and have the migration complete. Every Linode is not the same; people modify things and run all sorts of software on their Linodes. There is no one-size-fits-all, zero-error migration method in the VPS world!

@ruchirablog:

And it does produce errors!

No, it doesn't; I've seen hundreds, even thousands, of migrations without any errors.

A VPS is not a physical server.

Anyway, arguing over guesses isn't the smartest approach. Let's wait for an answer from Linode support.

@OZ:

No, it doesn't; I've seen hundreds, even thousands, of migrations without any errors.

A VPS is not a physical server.

Yes, it can be done 99% error-free if the VPS is on SAN storage! With local disks it's quite different!

Anyway, yes, let's see what the Linode staff says!

@ruchirablog:

Yes, it can be done 99% error-free if the VPS is on SAN storage! With local disks it's quite different!

Anyway, yes, let's see what the Linode staff says!

Linode uses local storage.

@ruchirablog:

I don't think Linode planned from the start to leave empty space to give away in a promotion, because that wouldn't make sense!

BTW, in addition to the other posts, I wouldn't automatically discount this option.

While I assume that over time older hosts get upgrades (which, as others have pointed out, need not involve any downtime in the case of disks), it also wouldn't surprise me if at any given moment the standard host configuration had enough spare space to offer such upgrades.

After all, we've also seen memory upgrades, and I don't think you can hot-swap memory in the host hardware, so that was more likely memory that was already there but just not yet released to guests.

It may seem wasteful, but since drives and memory only come in particular sizes, you can purchase ahead of the curve while targeting the sizes the market expects (where being reasonably ahead of competitors is likely just as good as being enormously ahead in terms of attracting customers), and save the reserve for these sorts of upgrades, which help engender strong loyalty. If you pick a good balance, I think it could be better for the business than allocating everything you have up front.

– David

To answer ALL your questions:

It was magic.

That is all.

Linodes are probably not migrated between hosts. I see it as one of three options:

1. Linode keeps extra disk space around for the "Extras" tab and simply opens these up when they want to increase space.

2. Linode rotates out drives by having the arrays rebuilt. I can see this taking a long time if there is an external array associated with each host. I don't know what disk sizes Linode uses or how many guests per box, but enterprise disks are very expensive. I wouldn't be surprised if there were a bunch of smaller disks and an external 14-drive array associated with each host.

3. This one I think is very unlikely, for a few reasons. The first is that Linode doesn't do live migrations to different hosts. We know this because we don't have to change our LISH information when we get more disk space. If Linode doesn't do live migration, then the only chance to move the data is while the Linode is shut down during the resize. The resize process doesn't take long enough to copy an entire Linode to another host. The only remaining option is a new host that stores all of the Linode's data for the hours/days/weeks before the resize restart; when the restart happens, you resize the disk on the new host and copy over the changed files. Like I said, this is unlikely.

I would put my money on 2, maybe 1, never 3. I do wonder whether there is a significant reduction in disk performance during these hot swaps. And what if a drive fails during a rebuild? That would put Linode in a bad situation. Granted, they can just plug the old drive back in, but still.

So, which is it?

Another possibility:

Older hosts that once had 40 Linode 512s on them no longer get put in the pool for new Linodes, and they now have fewer on them as people remove Linodes (e.g. 30 instead of 40), which allows more disk space per Linode.

Newer hosts have bigger disks, which means they can hold 40 Linodes and still offer more disk space anyway.

I'm not sure what the point of speculating about this is, but I can't resist jumping in.

In one of the past disk space upgrades, some older hosts did not have enough room. The upgrades were doled out on a first-come, first-served basis; when a host ran out, users who wanted the space had to ask for a migration to a different host.

@tonymallin:

Another possibility:

Older hosts that once had 40 Linode 512s on them no longer get put in the pool for new Linodes, and they now have fewer on them as people remove Linodes (e.g. 30 instead of 40), which allows more disk space per Linode.

Yes, they do do that to rotate out old servers, though not necessarily (just) for this reason.

The general consensus seems to be that Linode hosts have four 15K RPM SAS drives in RAID 10, so there aren't any 14-drive arrays going on here.

Such an array, based on the largest disk size available today (600 GB), means a host can have up to 1200 GB of usable space. But consider the following scenario with 450 GB disks (the second-largest size on the market):

450 GB × 4 = 900 GB usable

40 Linode 512s on one host

20 GB per 512, 800 GB consumed in total

100 GB remaining for the host, extras, etc.
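As a quick sketch of that scenario (the drive size and per-host count are this thread's guesses, not confirmed figures):

```python
# RAID 10 usable capacity is half the raw capacity (mirrored pairs).
drives = 4
drive_size_gb = 450          # assumed second-largest drive on the market

usable_gb = drives * drive_size_gb // 2   # usable space after mirroring
consumed_gb = 40 * 20                     # 40 Linode 512s at 20 GB each
headroom_gb = usable_gb - consumed_gb     # left for the host, extras, etc.

print(usable_gb, consumed_gb, headroom_gb)  # 900 800 100
```

So even after the upgrade, a four-drive 450 GB array would leave about 100 GB of slack per host.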

@obs:

Linode uses local storage.

Yes, I know! :) That's why I asked this question! If they were using a SAN, the question would be pretty dumb. :P
