RAMdisk - who's using one?

Silencing hard drives, optical drives and other storage devices

frostedflakes
Posts: 1608
Joined: Tue Jan 04, 2005 4:02 pm
Location: United States

RAMdisk - who's using one?

Post by frostedflakes » Sun Jan 13, 2008 9:54 am

It's been talked about before, but I'm curious whether more people are using them now that memory prices have plummeted (you can get 8GB of value DDR2 for ~$150). If you're running one, what are your experiences/impressions? :)

Spare Tire
Posts: 286
Joined: Sat Dec 09, 2006 9:45 pm
Location: Montréal, Canada

Post by Spare Tire » Sun Jan 13, 2008 10:28 am

If you mean a software RAM disk, I use one for swap and as a temporary download folder. If you mean hardware, they still haven't come out with a DDR2 RAM disk, and DDR1 prices are going up.

frostedflakes
Posts: 1608
Joined: Tue Jan 04, 2005 4:02 pm
Location: United States

Post by frostedflakes » Sun Jan 13, 2008 2:06 pm

Discussion of either is fine. I remember hearing about a DDR2 iRAM, but nothing lately, so I'm guessing it's either been canceled or Gigabyte is being very slow to develop it. Now would be the perfect time for such a product, since DDR2 has become so affordable.

MoJo
Posts: 773
Joined: Mon Jan 13, 2003 9:20 am
Location: UK

Post by MoJo » Sun Jan 13, 2008 3:20 pm

I too have been waiting for a DDR2 RAM disk. An 8GB RAM disk would be fantastic.

You can get ones with backup to HDD. Now that flash memory is so cheap (especially relatively low-speed CompactFlash), backup to flash would be nice; you could run a system off it then.

I always find it odd that people selling the iRAM on eBay show you Windows booting. Since it has no permanent backup, installing Windows on it is fairly pointless.

For use as a temporary drive it would be brilliant.

Spare Tire
Posts: 286
Joined: Sat Dec 09, 2006 9:45 pm
Location: Montréal, Canada

Post by Spare Tire » Mon Jan 14, 2008 8:58 pm

I thought the iRAM had a battery to keep the data alive at powerdown?

scdr
Posts: 336
Joined: Thu Oct 21, 2004 4:49 pm
Location: Upper left hand corner, USA

Post by scdr » Mon Jan 14, 2008 11:34 pm

Seems like it might be hard to make something with a long enough useful lifetime to justify the design effort.
(I would generally rather have a motherboard that could take an additional 8GB of RAM than have that memory dedicated to a RAM drive.)

While you wait, though, here's a DIY RAM drive idea.

1. Build a second computer using inexpensive parts (e.g. the cheap machine listed in this thread: viewtopic.php?t=45522 [Edit: something like the system in the second post, though you could probably optimize it further for this application]).

2. Run a small OS on it, dedicating most of the RAM to a software RAM drive.

3. Network it to your system using FireWire (or fast network cards, though that might be a bit slow).

Voilà: a DDR2 RAM drive. (A toy sketch of the server side is below.)

(And you can customize it with backing store to flash drives, hard disk, whatever.)

Not the most space- or power-efficient solution, but not much more expensive than an iRAM, and a lot more flexible. (It could double as a print server, a file server, whatever.)
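To make steps 2 and 3 concrete, here's a toy sketch of the server side in Python. Everything in it (the port, block size, and the little read/write protocol) is made up for illustration; a real setup would use a proper RAM-backed export (tmpfs behind a network block device or similar), but the principle is the same: the "disk" is just a big buffer in RAM, served over the network.

Code: Select all

import socket
import struct

# Toy network RAM drive server: the whole "disk" is one bytearray in
# memory, served block-by-block over TCP. All numbers are demo-sized;
# on the real box you would dedicate most of the RAM to `disk`.
BLOCK_SIZE = 4096
NUM_BLOCKS = 4096              # 16 MiB for the demo; scale up for real use
PORT = 9876                    # arbitrary

disk = bytearray(BLOCK_SIZE * NUM_BLOCKS)

def recv_exact(conn, n):
    # Keep reading until exactly n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("client disconnected")
        buf += chunk
    return buf

def serve():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        # Each request: 1-byte opcode ('R' or 'W') + 8-byte block number.
        op, blk = struct.unpack("!cQ", recv_exact(conn, 9))
        off = blk * BLOCK_SIZE
        if op == b"R":
            conn.sendall(disk[off:off + BLOCK_SIZE])
        elif op == b"W":
            disk[off:off + BLOCK_SIZE] = recv_exact(conn, BLOCK_SIZE)
            conn.sendall(b"K")  # simple ack

if __name__ == "__main__":
    serve()

A client would format requests the same way (opcode plus block number); on the main machine you'd put a filesystem image or block-device layer in front of it.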

MoJo
Posts: 773
Joined: Mon Jan 13, 2003 9:20 am
Location: UK

Post by MoJo » Tue Jan 15, 2008 8:21 am

Spare Tire: The backup battery is really only for short-term power-off. It's quoted as giving 8 hours of battery backup, but of course batteries deteriorate over time.

HyperOS Systems (the Hyperdrive) make a RAM drive with HDD/solid state backup, but it's very expensive.

scdr: Your idea is a good one. I read some benchmarks somewhere from someone who tried it on fairly low-end hardware running BSD (a Pentium III, IIRC), and they could saturate a gigabit network connection.

Most new mobos can take up to 8GB. Presumably if you had an 8GB RAM machine with lots of HDD storage, you could still get excellent performance thanks to a massive disk cache.
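That cache effect is easy to demonstrate. A rough sketch in Python, assuming a test file ("bigfile.bin" here, the name is arbitrary) bigger than a few hundred MB but small enough to fit in free RAM:

Code: Select all

import time

# Rough page-cache demo: read the same large file twice and time both
# passes. 'bigfile.bin' is a hypothetical test file, bigger than a few
# hundred MB but small enough to fit in free RAM.
PATH = "bigfile.bin"

def timed_read(path):
    start = time.time()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):  # read in 1 MiB chunks
            pass
    return time.time() - start

cold = timed_read(PATH)  # first pass: mostly from the disk
warm = timed_read(PATH)  # second pass: mostly from the OS cache
print("cold: %.2fs  warm: %.2fs" % (cold, warm))

The second pass comes almost entirely from the OS page cache, so the gap between the two numbers is essentially HDD vs RAM. (Caveat: the first pass is only truly "cold" if the file isn't already cached from earlier use.)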

I remember reading about a project that turned a Linux machine into a Firewire HDD (i.e. another computer connected via FW would just see the Linux box as an external HDD). If something similar were available for SATA that would give the highest throughput.

I think one of the best options at the moment might be to get a RAID card with on-board RAM. You can get 256MB Dell PCI-e SAS cards for peanuts on eBay, and having that amount of cache makes a big difference. It's a shame they can't take more because it would be easy to stick a gig or two on there.

scdr
Posts: 336
Joined: Thu Oct 21, 2004 4:49 pm
Location: Upper left hand corner, USA

Post by scdr » Tue Jan 15, 2008 9:31 pm

MoJo wrote: scdr: Your idea is a good one. I read some benchmarks somewhere from someone who tried it on fairly low-end hardware running BSD (a Pentium III, IIRC), and they could saturate a gigabit network connection.
I figured the CPU on the parts list I mentioned was overkill; it just happened to be a listing of a cheap computer that could take 8GB of DDR2 RAM.
MoJo wrote: Most new mobos can take up to 8GB. Presumably if you had an 8GB RAM machine with lots of HDD storage, you could still get excellent performance due to having a massive disk cache.

I remember reading about a project that turned a Linux machine into a Firewire HDD (i.e. another computer connected via FW would just see the Linux box as an external HDD). If something similar were available for SATA that would give the highest throughput.
I misremembered the bandwidths involved. Until somebody comes out with an inexpensive eSATA client adapter (or external PCI-X network adapter), gigabit ethernet would definitely be the way to go. If somebody wanted to work out the load balancing, two or three gigabit ethernet links might give you about the throughput of a SATA-300 link (gigabit ethernet is roughly 125MB/s raw, so three links lands in the neighborhood of SATA-300's 300MB/s).

Fayd
Posts: 379
Joined: Thu May 10, 2007 2:19 pm
Location: San Diego

Post by Fayd » Wed Jan 16, 2008 4:00 am

Personally, I'd need a 16GB RAM drive to be possible before I could consider such a venture.

Reason being, the game I play is ~10GB, and I want 4GB of swap.

Two 8GB RAM drives in RAID 0 would be perfect :P

EDIT

But scdr, isn't the whole point of this venture to cut down on latency? Wouldn't going over a network just defeat the point?

MoJo
Posts: 773
Joined: Mon Jan 13, 2003 9:20 am
Location: UK

Post by MoJo » Wed Jan 16, 2008 6:36 am

scdr wrote: I figured the CPU on the parts list I mentioned was overkill; it just happened to be a listing of a cheap computer that could take 8GB of DDR2 RAM.
Your best bet might be to look for old servers on eBay; often companies simply throw them away. By today's standards they might be slow, but they often come with a lot of RAM, and the mobos can take 16GB or more.

Often it's parity-enabled RAM, too.
scdr wrote: I misremembered the bandwidths involved. Until somebody comes out with an inexpensive eSATA client adapter (or external PCI-X network adapter), gigabit ethernet would definitely be the way to go. If somebody wanted to work out the load balancing, two or three gigabit ethernet links might give you about the throughput of a SATA-300 link.
Well, there's always 10-gigabit ethernet :)

On the SATA front, some RAID cards can take up to a gig of RAM, and multiple RAID cards can often be used to span arrays over multiple PCI-e slots, so you could in theory have several 1GB-cache RAID cards running a massive RAID 0 array.

The Areca cards are good for that. They will run at PCI-e bus speed (740MB/sec) with almost zero latency until the cache is full.

AZBrandon
Friend of SPCR
Posts: 867
Joined: Sun Mar 21, 2004 5:47 pm
Location: Phoenix, AZ

Post by AZBrandon » Wed Jan 16, 2008 11:04 am

MoJo wrote:The Areca cards are good for that. They will run at PCI-e bus speed (740MB/sec) with almost zero latency until the cache is full.
Reminds me of the 8-SSD drive RAID-0 test I saw a while back:

http://www.dvnation.com/benchmark-24.html

860MB/sec sustained, 1240MB/sec burst. Ha! That's a lot of throughput, especially for just one card.

scdr
Posts: 336
Joined: Thu Oct 21, 2004 4:49 pm
Location: Upper left hand corner, USA

Post by scdr » Wed Jan 16, 2008 11:17 pm

Fayd wrote: But scdr, isn't the whole point of this venture to cut down on latency? Wouldn't going over a network just defeat the point?
I don't quite follow.
Is latency now the big bottleneck for hard disks?
(i.e. is hard disk bandwidth fast enough to be largely irrelevant?)

Are you saying that the latency of an external computer would be of the same order as the latency of a typical hard disk?
A quick web search suggests that a typical ethernet request/response might take 0.35ms, whereas the average access time of hard disks seems to be in the 8-14ms range.
So basic disk latency is roughly 20-40x basic ethernet latency.
Do the rest of the networking/disk-sharing/RAM-drive layers add enough overhead to negate that difference?
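If anyone wants to check that first number on their own LAN, here's a quick round-trip timer; a rough sketch with made-up host/port values. Run echo_server() on the second box and client() on the main one:

Code: Select all

import socket
import time

# Quick TCP round-trip timer. Run echo_server() on the second box,
# then client() on the main machine. HOST/PORT are placeholders.
HOST, PORT = "192.168.1.2", 9999   # hypothetical address of the RAM box
ROUNDS = 1000

def echo_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        data = conn.recv(64)
        if not data:
            break
        conn.sendall(data)

def client():
    s = socket.create_connection((HOST, PORT))
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
    start = time.time()
    for _ in range(ROUNDS):
        s.sendall(b"x")
        s.recv(64)                 # wait for the echo
    avg_ms = (time.time() - start) / ROUNDS * 1000
    print("average round trip: %.3f ms" % avg_ms)

if __name__ == "__main__":
    client()  # or echo_server() on the remote box

With TCP_NODELAY set, a quiet gigabit LAN should come in well under a millisecond per round trip, which is the figure to compare against 8-14ms disk seeks.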

I just picked gigabit ethernet for the example because it is readily available, fairly high-bandwidth, and relatively cheap. Is there an affordable, available, lower-latency interface that would be better to use? Or is it a question of the other layers slowing things down, so that specialized rather than general-purpose software would be needed for the RAM drive?
Or are there other issues taking up so much time that it can't be done with general-purpose hardware?

My main thought was that it seems hard to justify making a special-purpose RAM drive circuit when the demand is fairly small (so the price will be high) and the useful minimum capacity of a disk is growing exponentially (so the product's lifetime is fairly short). Rather than a specialized device, a general-purpose setup one could build today might be adequate.
And, as a corollary, perhaps an affordable, faster networking interface (whether a SATA client adapter, PCI-e to PCI-e, 10-gigabit ethernet, or whatever) would be an equally good or better thing to wish for.

MoJo
Posts: 773
Joined: Mon Jan 13, 2003 9:20 am
Location: UK

Post by MoJo » Thu Jan 17, 2008 5:21 am

Found this SCSI RAM drive based on an Atmel AVR:

http://micha.freeshell.org/ramdisk/index.php

A DDR2/SAS update is definitely needed!
