I don't claim to be an expert, but SAN is different from NAS. As you mentioned, SAN traditionally uses proprietary interfaces while NAS uses TCP/IP.
It's not a requirement for a SAN to use proprietary technologies. It has certainly often been implemented that way, but the term just means 'storage area network', i.e. a network dedicated to storage traffic. Which network technology you happen to use is really secondary: everything from Fibre Channel to ATM to Ethernet could be used, and the encapsulation isn't set in stone either. You could use your own frame protocol, TCP/IP, plain IP, or even the hardware layer directly.
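To make the "hardware layer directly" option concrete, here is a minimal sketch of how an ATA over Ethernet request is framed: the AoE header (per the published AoE spec, ethertype 0x88A2) sits directly inside an Ethernet frame, with no IP or TCP layer in between. The MAC addresses and tag value below are made up for illustration.

```python
import struct

AOE_ETHERTYPE = 0x88A2  # registered ethertype for ATA over Ethernet

def build_aoe_frame(dst_mac, src_mac, major, minor, command, tag):
    # Ethernet header: destination MAC, source MAC, ethertype
    eth = struct.pack("!6s6sH", dst_mac, src_mac, AOE_ETHERTYPE)
    # AoE header: version/flags, error, shelf (major), slot (minor),
    # command code, and a caller-chosen tag echoed back in the response
    ver_flags = 0x10  # AoE version 1, no flags set
    aoe = struct.pack("!BBHBBI", ver_flags, 0, major, minor, command, tag)
    return eth + aoe

frame = build_aoe_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01",
                        major=0, minor=1, command=0, tag=0xDEADBEEF)

# Only 24 bytes of header before the ATA payload would begin --
# compare that with Ethernet + IP + TCP + iSCSI framing.
print(len(frame))  # 24
```

The flip side of this simplicity is that such frames are not routable: they never leave the local Ethernet broadcast domain, which is part of why a dedicated storage network fits AoE so naturally.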
You mentioned that your company has been transitioning from SAN to NAS to improve reliability. For that to hold true, one would have to know what your company's current SAN looks like. The techniques used to achieve reliability are pretty much the same for both SAN and NAS. So what I'm really thinking is that you are transitioning to a new generation of devices for improved flexibility, which might allow for greater reliability because it enables some new configurations. Of course there is the added benefit of less vendor lock-in.
But many times NAS networks are not using existing LANs (at least in serious high-performance installations); they are using private IP networks over Gigabit Ethernet, Fibre Channel, etc.
Actually, Fibre Channel doesn't use IP (IIRC). This is the point I was trying to make: when you configure your devices like this, what you really get is a SAN, not a NAS anymore.
In fact even Coraid uses the term SAN for their AoE solution.
Coraid homepage wrote:
EtherDrive® Storage is simply Ethernet connected hard disk drives. A Storage Area Network (SAN) using EtherDrive storage can be assembled for less than $1.35 per GigaByte.
I'm assuming they use the term SAN here because their solution doesn't make use of TCP/IP, and you really want that dedicated network for storage anyway, at least if you are planning to write any significant amounts of data during working hours.
I would agree that SAN and NAS are not polar opposites, but the point is that many major companies are using NAS for high-end applications where performance is important. So maybe ATA over Ethernet (or at least SATA) is not as far-fetched as it sounds. Dell is selling a SATA-based NAS solution.
This is what really threw me about your original comment. Are you trying to imply that NAS is something too expensive and 'enterprisey' for a normal consumer? Like I said, the simplest NAS is a single hard drive in a networked enclosure. Do you think this shouldn't be called NAS because the enterprise market has no use for such a simplistic NAS implementation?
As far as ATA over Ethernet goes, I don't think it will fly in the enterprise market. You can just as well use iSCSI to connect to your drive rack, even if there are actually SATA drives in the rack; in fact such devices exist already. We are also starting to see the emergence of SAS (Serial Attached SCSI), which is SCSI using the same cabling as SATA. The interesting side effect is that apparently some upcoming SAS RAID controllers can drive SATA drives as well, so you could start with a modest setup using cheaper SATA drives and upgrade to SAS as needed.
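The key advantage iSCSI gets from riding on TCP/IP can be shown with a toy sketch: block read/write commands carried over an ordinary TCP socket, so the target can sit anywhere the IP network reaches. To be clear, this is NOT the real iSCSI protocol; the one-byte opcodes and fixed 512-byte blocks are invented for the sketch.

```python
import socket
import struct
import threading

BLOCK_SIZE = 512

def recv_exact(sock, n):
    # TCP is a byte stream, so loop until exactly n bytes have arrived
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            break
        buf += chunk
    return buf

def serve(listener, disk):
    conn, _ = listener.accept()
    with conn:
        while True:
            header = recv_exact(conn, 9)
            if len(header) < 9:
                break
            op, lba = struct.unpack("!BQ", header)  # opcode + block address
            if op == 1:                             # READ: send block back
                conn.sendall(disk[lba])
            elif op == 2:                           # WRITE: store payload
                disk[lba] = recv_exact(conn, BLOCK_SIZE)

# In-memory "disk" of 8 blocks standing in for the SATA drive rack
disk = [bytes(BLOCK_SIZE) for _ in range(8)]

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=serve, args=(listener, disk), daemon=True).start()

client = socket.create_connection(listener.getsockname())
client.sendall(struct.pack("!BQ", 2, 3) + b"x" * BLOCK_SIZE)  # write block 3
client.sendall(struct.pack("!BQ", 1, 3))                      # read it back
echoed = recv_exact(client, BLOCK_SIZE)
client.close()
print(echoed[:4])  # b'xxxx'
```

Because everything below the block commands is plain TCP/IP, the same scheme works across routers and WAN links, whereas a raw-Ethernet transport is confined to one broadcast domain.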
I see ATA over Ethernet being interesting mostly in the consumer space. Normal consumers won't need all the advanced SCSI features and aren't interested in managing their devices. In such an environment, using the simplest possible transport to connect to your remote storage is an acceptable compromise to reduce cost. I wouldn't be surprised if this technology gets picked up by home entertainment system builders: while drive sizes have been going up, you really don't want to put more than one hard drive in your HTPC. So if you really want a _lot_ of storage space for your entire movie and music collection, you want to place the storage array away from your living room.