but scdr, isnt the whole point of this venture to cut down on latency? and wouldnt that just defeat the point?
I don't quite follow.
Is latency now the big bottleneck for hard disks?
(i.e. hard disk bandwidth is fast enough to be largely irrelevant?)
Are you saying that the latency of an external computer would be of the same order as the latency of a typical hard disk?
A quick web search suggests that a typical ethernet request/response might take 0.35ms (milliseconds), whereas the average access time of hard disks seems to be in the 8-14ms range.
So the basic disk latency is maybe 20-40x the basic ethernet latency.
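As a sanity check on that ratio, here is the back-of-envelope arithmetic (the 0.35ms and 8-14ms figures are just the rough numbers quoted above, not measurements):

```python
# Rough comparison of round-trip latencies, using the ballpark
# figures from the discussion above.
ethernet_rtt_ms = 0.35          # typical gigabit ethernet request/response
disk_access_ms = (8.0, 14.0)    # typical hard disk average access time range

low = disk_access_ms[0] / ethernet_rtt_ms
high = disk_access_ms[1] / ethernet_rtt_ms
print(f"disk latency is roughly {low:.0f}x to {high:.0f}x the ethernet latency")
# → disk latency is roughly 23x to 40x the ethernet latency
```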
Do the rest of the networking/disk sharing/ram drive layers add enough overhead to negate that speed difference?
I just picked gigabit ethernet for the example because it is readily available, fairly high bandwidth, and relatively cheap. Is there an affordable, available, lower-latency interface that would be better to use? Or is it a question of the other layers slowing things down, so that one would need specialized, rather than general-purpose, software for the RAM drive?
Or are there other issues taking up so much time that it can't be done with general purpose hardware?
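One way to get a feel for how much overhead the software layers alone add is to time a small request/response over a loopback TCP socket. This is only a sketch (the 512-byte payload, port number, and iteration count are arbitrary choices), and loopback excludes the actual wire time, so it measures a floor for the protocol/OS overhead rather than real network latency:

```python
# Sketch: measure request/response latency over a loopback TCP socket.
# Loopback exercises only the software stack (no wire), so this gives
# a lower bound on the per-request overhead of the networking layers.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007  # arbitrary test address (assumption)
MSG = b"x" * 512                 # small fixed-size request (assumption)

def echo_server(ready):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(len(MSG))
        while data:
            conn.sendall(data)          # echo each request straight back
            data = conn.recv(len(MSG))

ready = threading.Event()
threading.Thread(target=echo_server, args=(ready,), daemon=True).start()
ready.wait()

cli = socket.socket()
cli.connect((HOST, PORT))
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # skip Nagle batching

n = 1000
start = time.perf_counter()
for _ in range(n):
    cli.sendall(MSG)        # request
    cli.recv(len(MSG))      # wait for the echo (response)
elapsed = time.perf_counter() - start
cli.close()
print(f"avg loopback round trip: {elapsed / n * 1000:.3f} ms")
```

Comparing that number against a measured round trip between two real machines would show how much of the 0.35ms figure is software overhead versus the wire itself.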
My main thought was that it seems hard to justify building a special-purpose RAM drive circuit when the demand is fairly small (so the price will be high) and the useful minimum capacity for a disk is growing exponentially (so the product's useful lifetime is fairly short). So, rather than a specialized device, a general-purpose product that one could build today might be adequate.
And, as a corollary, perhaps an affordable, faster networking interface (whether a SATA client adapter, PCIe-to-PCIe, 10Gb ethernet, or whatever) would be as good a thing or better to wish for.