Even bigger performance gain than Xen 2.0...

I'm wondering if something like HyperDrive III would boost performance enough to double the number of linodes/xenodes per server.

Apparently, it plugs into IDE and provides 8 DIMM slots for 16GB RAM that can be used as a RAM-based hard drive.

Before you guys freak out about lost data, consider this:

  • Retains data when the PC is restarted or shut down, thanks to an independent power supply connected to the main PC power lead through a PCI slot blanking plate.

  • Integral 160-minute 7.2v battery backup to cover electricity board power outages (1250 milliamp-hours; the on-board trickle-charge unit takes 48 hours to fully charge).

  • Integral secondary IDE socket for a backup HDD, with auto-backup/restore firmware that kicks in during any power outage to back up the HyperDrive III to the HDD.

Sounds really promising even if this is used only for swap, /tmp and /var/log…
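As a rough sketch of what that could look like (assuming the device simply shows up as an ordinary IDE disk; the /dev/hdc name and partition layout below are just assumptions for illustration):

```
# Hypothetical layout: hdc1 as swap, hdc2 for /tmp and log files.
mkswap /dev/hdc1
swapon /dev/hdc1

mkfs.ext3 /dev/hdc2
mkdir -p /mnt/ramdrive
mount /dev/hdc2 /mnt/ramdrive

# /etc/fstab entries to make it stick across reboots:
# /dev/hdc1   none            swap   sw        0 0
# /dev/hdc2   /mnt/ramdrive   ext3   defaults  0 2
```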

I've never heard of such devices before, so if anyone here has used one, let's hear about it.

5 Replies

I found what looks like a more enterprise-ready product here:

http://www.superssd.com/products/ramsan-120/

The numbers are much more impressive. Plus, Texas Memory Systems (TMS) presents itself as a USA-based company with a 20-year history.

So it looks like RAM drives with a built-in battery, a built-in backup HD, and automated backups during power loss are more common than I thought.

With ECC RAM and no moving parts, this might be more reliable than HD…

I really can not see how this can help anything.

It does not remove the memory limitations of the system.

It does not remove the hard drive limitations on the system either.

Both of these are now becoming the limiting factors on the hosts.

There is also CPU contention to worry about: the more nodes on a box, the less CPU is available to each node.

All that the products pointed out seem to supply is an 8/16 gig RAM disk, with the ability to back up to a hard drive where necessary. This product would not be of much use; it would appear as an additional hard drive, but with only 8/16 gig available it is of no use on a Linode host, except possibly as an 8/16 gig cache for the hard drives.

It may be useful for other applications: keeping an entire database on it, or a complete web site, or log files or temp files.

Adam

Ultimately, it comes down to this question:

Will caker be able to make more money by using SSD to fit more Linodes per host, and at the same time provide the same or better performance to each customer?

OR would Linode customers be willing to pay extra $/month to have their 256MB swap stored on SSD/RAM rather than HD? WIN/WIN/WIN: caker wins more $, the customer wins more performance, and the other linodes on the same box win reduced I/O contention with this customer.
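Just to illustrate the capacity side (assuming a 16GB device and 256MB of swap per Linode; not a claim about what the hosts actually do):

```
# 16 GB device / 256 MB of swap per Linode:
echo $(( (16 * 1024) / 256 ))   # => 64 swap partitions fit on one device
```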

I really can not see how this can help anything.

Are you sure?

I was under the impression that a lot of programming effort was put into disk I/O limiting because that was the biggest bottleneck on Linodes. The primary reason SSDs exist is to solve the disk I/O bottleneck problem.

It does not remove the memory limitations of the system.

I think most people get more RAM because they want to reduce swapping memory to disk, for performance. That is, if swapping memory to an external SSD/RAM drive (instead of HD) were fast enough, would people still bother upgrading system RAM? NOTE: 'fast enough' != 'fast as possible'

For example, a Linode-128 with all of its 256MB swap on SSD could outperform a Linode-256 with 256MB swap on HD if they were both doing things that required 300MB of RAM. If CPU were underutilized on both hosts, then caker could fit 2x as many Linode-128s with SSD swap and still offer better performance than customers would get from a Linode-128 with HD swap.
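For what it's worth, here is one way such a swap setup could be expressed, assuming the fast device is /dev/hdc and the regular disk is /dev/hda (both names made up for illustration):

```
# Give swap on the RAM/SSD device a higher priority than swap on the
# ordinary disk, so the kernel prefers the fast device first.
swapon -p 10 /dev/hdc1    # fast RAM-backed swap
swapon -p 1  /dev/hda2    # regular hard-disk swap, used as overflow

# or in /etc/fstab:
# /dev/hdc1  none  swap  sw,pri=10  0 0
# /dev/hda2  none  swap  sw,pri=1   0 0
```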

Although even the unsophisticated approach of putting all the swap partitions onto a solid-state (SSD) disk would remove a lot of the bottleneck, the best use would probably be as a cache for existing storage. The cache approach also avoids the marketing dilemma of explaining to customers why a Linode-128 is faster than a competitor's 256MB VPS.

It does not remove the hard drive limitations on the system either.

Not sure how SSD companies could have any customers if that were true…

Just from one company, their SSD products cover this range of specs:

  • Speed: 70,000 - 2 million random I/Os per second

  • Capacity: 16GB - terabytes

  • Bandwidth: 400MB/sec - 12GB/sec sustained

  • Features: many, e.g. a gigantic cache for existing storage devices - see http://www.superssd.com/products/ramsan-330/

There is also CPU contention to worry about: the more nodes on a box, the less CPU is available to each node.

I totally agree.

The hard drive limitations were in terms of space, which is now one of the limiting factors on each machine, according to a recent discussion with chris.

Memory is also a limiting factor: if you only have 6 gig of RAM, you can only put in users for that amount of RAM.

If I understand Xen correctly, you cannot oversell physical RAM allocations, and therefore an SSD or anything else like that would have no benefit in terms of allowing more users on the device.

Adam

Well, given that disk IO contention is so important here, that thing looks promising to me. Any increase in effective HD speed means swapping would be faster, etc.

It is just a question of cost, really. I wonder how much those suckers cost, and whether it would not be cheaper to just get more RAM, which would reduce I/O demand, perhaps on a CPU with >32-bit addressing.
