Newisys NA-1400 NAS Appliance

Want to talk about one of the articles in SPCR? Here's the forum for you.
MikeC
Site Admin
Posts: 12285
Joined: Sun Aug 11, 2002 3:26 pm
Location: Vancouver, BC, Canada
Contact:

Newisys NA-1400 NAS Appliance

Post by MikeC » Sun May 07, 2006 7:43 pm

Last edited by MikeC on Sun May 07, 2006 10:12 pm, edited 1 time in total.

Marvin
Posts: 64
Joined: Mon Nov 18, 2002 11:54 pm
Location: Tampere, Finland

Re: Newisys NA-1400 NAS Appliance

Post by Marvin » Sun May 07, 2006 10:05 pm

MikeC wrote:Much thanks to Newisys for the opportunity to examine this power supply.
Not to be picky, but it's not a power supply :)

Too bad that it didn't work the way you wanted. It would have been too good...

I'm also planning a computer where this kind of solution would be nice, but if I still need to include one disk in the box, then I think I'll just put an old PC somewhere as a server.

zds
Posts: 261
Joined: Tue Aug 03, 2004 1:26 pm
Location: Helsinki, Finland
Contact:

Re: Newisys NA-1400 NAS Appliance

Post by zds » Sun May 07, 2006 11:33 pm

Interesting review indeed. It would have been nice to see performance numbers with gigabit ethernet, though, as anybody seriously considering this kind of setup will certainly have one, and most motherboards nowadays ship with one.

Too bad the noise level was so high; it's not a big problem to mod, but since the main point of this is to be a no-fuss, just-plug solution, needing to mod it is a serious setback.

Anyway, what might be interesting to test would be to have the OS installed on one of those DRAM-based SATA storage devices and then have all the real data reside in the remote box.

Of course, using Linux, or some way of running Windows from a Unix box, would remove the whole remote boot issue altogether. A/V professionals are likely to use Macs anyway, and you would think their Unix roots would let them boot from the network.

Bluefront
*Lifetime Patron*
Posts: 5316
Joined: Sat Jan 18, 2003 2:19 pm
Location: St Louis (county) Missouri USA

Post by Bluefront » Mon May 08, 2006 3:04 am

I've been using NAS for a few years now......wired and wireless. This is the way to go for a large amount of storage, most of which is rarely accessed.

Maybe I missed it, but how hot are these drives running? Does the device have a "go to sleep" mode? Does it use a proprietary file format rather than FAT32/NTFS?

From looking at the pictures, I'd guess this box is simply too small to cool four drives easily with low noise. Then there is that nasty dust build-up problem.....apparently not even addressed. I'll pass..... :cry:

qviri
Posts: 2465
Joined: Tue May 24, 2005 8:22 pm
Location: Berlin
Contact:

Post by qviri » Mon May 08, 2006 3:48 am

Page 4: "On its own, the NA-1400 was far noisier than the four drives would have been on their own; the cooing fan on the bottom was downright loud and could be heard from more than a room away."

alock
Posts: 22
Joined: Wed Mar 17, 2004 9:37 am
Location: UK
Contact:

Post by alock » Mon May 08, 2006 3:56 am

For those wanting a quiet NAS, I would recommend one of these:
http://www.synology.com/enu/products/DS ... /index.php

I have one containing a 2.5" drive mounted in foam. Very quiet (no internal fan) and faster than my 54Mb wireless network.

lm
Friend of SPCR
Posts: 1251
Joined: Wed Dec 17, 2003 6:14 am
Location: Finland

Post by lm » Mon May 08, 2006 5:06 am

"we've never seen it done before"
Truly diskless workstations have been possible with Linux for years.

For example, back in 2001 I had my main box running without any mass storage devices whatsoever, with a file server in another room. And there was nothing new about it even then.

NeilBlanchard
Moderator
Posts: 7681
Joined: Mon Dec 09, 2002 7:11 pm
Location: Maynard, MA, Eaarth
Contact:

Post by NeilBlanchard » Mon May 08, 2006 8:26 am

Greetings,

I've used the Buffalo Technologies 0.6TB NAS device, which has four 160GB Western Digital (2MB cache, IDE) hard drives, an internal power supply (mATX?), and a 92mm Adda exhaust fan. The fan runs at ~550RPM according to their software, and the whole unit is fairly quiet: my seat-of-the-pants guesstimate is 28-30dBA. It is almost all fan/air noise, and the HDs are nearly completely below the noise floor.


They make a whole range of units, some using IDE, some using SATA HDs. The SATA units are the Pro models, and they have hot-swapping drawers similar to the Newisys unit. If they are all similar to the smallest one, which I have used, they should be a lot quieter!

It can be set up as a print server for one USB printer, and the serial connection is for a UPS/battery backup. It works with Windows, Linux, and Macs. I like the default RAID 5 (which yielded 480GB total), but you can change it to RAID 1, JBOD, or use it as one single ~600GB drive. It can also act as an anonymous FTP server.
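
For anyone wondering where the 480GB and ~600GB figures come from, here is the rough arithmetic (just a back-of-envelope sketch; real usable space comes out a bit lower after formatting):

Code:
# Back-of-envelope capacity math for four nominal 160 GB drives (GB = 10^9 bytes).
drives, size_gb = 4, 160

raid5_gb = (drives - 1) * size_gb    # one drive's worth of space holds parity -> 480 GB
span_gb = drives * size_gb           # all four concatenated -> 640 GB
span_gib = span_gb * 1e9 / 2**30     # what the OS reports, in binary units

print(f"RAID 5 usable: {raid5_gb} GB")
print(f"Single volume: {span_gb} GB, which is about {span_gib:.0f} GiB (the '~600GB' figure)")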

valnar
Posts: 175
Joined: Wed Apr 19, 2006 4:19 am
Location: OH, USA

Infrant!

Post by valnar » Mon May 08, 2006 10:15 am

The best NAS box in this price range is the Infrant ReadyNAS. It's the one to get.

Robert

Yahoolian
Posts: 9
Joined: Mon Jul 19, 2004 9:41 am

Post by Yahoolian » Mon May 08, 2006 4:15 pm

A follow-up article using Linux or perhaps some other OS capable of easily remote booting would be interesting.

defaultluser
Posts: 82
Joined: Mon Feb 20, 2006 9:39 pm

Post by defaultluser » Mon May 08, 2006 8:28 pm

Why don't you just use a USB enclosure?

You can run USB up to about 15 feet without a hub, and extend it further with hubs. 15 feet is enough to put the drive in a closet. Even if that isn't an option, with the right enclosure, drives are almost inaudible.

Best of all, performance is MUCH better than any of these NAS devices, and you don't have to jump through any hoops: Windows XP will detect the drive on installation.

Devonavar
SPCR Reviewer
Posts: 1850
Joined: Sun Sep 21, 2003 11:23 am
Location: Vancouver, BC, Canada

Post by Devonavar » Mon May 08, 2006 8:43 pm

I suppose USB storage is an option, but I don't think that it's entirely interchangeable with an NAS box.

Here are a few potential advantages of NAS over USB:

- centrally available to a whole network
- USB 2 = 480 Mbps; GigE = 1000 Mbps (though latency seems worse; see the rough numbers below)
- USB performance degrades with long cables. I very much doubt you'd get a full 480 Mbps out of a 15' cable.
- Relaying a signal through hubs will degrade latency, and each hub requires power from an outlet
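
As a rough sketch of what those raw link rates mean (the 4 GB file size is a made-up example, and neither bus gets anywhere near its theoretical rate in practice):

Code:
# Best-case time to move a hypothetical 4 GB file at the raw link rates,
# ignoring protocol overhead entirely.
file_gb = 4
for name, mbps in [("USB 2.0", 480), ("Gigabit ethernet", 1000)]:
    seconds = file_gb * 8000 / mbps   # decimal GB -> megabits -> seconds at line rate
    print(f"{name}: about {seconds:.0f} s")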

Advantages of NAS over an enclosure (or, disadvantages of enclosures)

- the quiet enclosures tend to have thermal issues
- I am sceptical that four 500 GB drives are "almost inaudible", even if they are all enclosed. Big drives = loud drives.

That said, I do think a USB enclosure could be a solution for many people. I just don't think it can always replace an NAS box.

Devonavar
SPCR Reviewer
Posts: 1850
Joined: Sun Sep 21, 2003 11:23 am
Location: Vancouver, BC, Canada

Post by Devonavar » Mon May 08, 2006 8:44 pm

Yahoolian wrote:A follow-up article using Linux or perhaps some other OS capable of easily remote booting would be interesting.
Unfortunately, the box has been sent back to Newisys. Plus, I don't have anywhere near enough experience with Linux for it to be something I could knock off in a day or two. That would be quite an undertaking, and I think my efforts are best spent elsewhere.

Ackelind
Posts: 467
Joined: Sun Mar 20, 2005 1:18 pm
Location: Umea, Sweden.

Re: Infrant!

Post by Ackelind » Mon May 08, 2006 10:25 pm

valnar wrote:The best NAS box in this price range is the Infrant ReadyNAS. It's the one to get.

Robert
That one looked nice in theory... but when I noticed the retail price, it gave me hiccups.

Let's just say that I could build myself a fully functioning and almost high-performing computer for that price. And that price doesn't even include the drives.

valnar
Posts: 175
Joined: Wed Apr 19, 2006 4:19 am
Location: OH, USA

Re: Infrant!

Post by valnar » Tue May 09, 2006 2:09 am

Ackelind wrote: That one looked nice in theory... but when I noticed the retail price, it gave me hiccups.

Let's just say that I could build myself a fully functioning and almost high-performing computer for that price. And that price doesn't even include the drives.
Well, if you went with software RAID. With any hardware RAID card that has the same online expansion functionality, not so. The price of the bare box at $650 is well worth it for the functionality it gives, and it takes less than 60 watts to power with four drives. I suppose it depends on your priorities. If you want something small, quiet, low-powered, easy to set up, and that "just works", it's the best in its class (price range).

I thought about building a Linux box to do the same, too, but once you add a 3Ware or Areca RAID card, that knocks the price out of the ballpark. Now, if you already have spare parts, that's another story. :)

Robert

Ackelind
Posts: 467
Joined: Sun Mar 20, 2005 1:18 pm
Location: Umea, Sweden.

Post by Ackelind » Tue May 09, 2006 3:00 am

It seems like the cheap ones are all crap, and the expensive ones are good but of course too expensive for home use.

Where is that cheap, not-too-loud, 4-drive, gigabit-LAN NAS that I want?

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Post by andyb » Tue May 09, 2006 4:05 am

Interesting review.

However, as many people (the review included) have already said, this box is a compromise.

I decided some time ago that getting myself a personal file server/NAS box was just not worth it, so I compromised and have my "Server", "Games Machine", and "General use PC" all in one box.

I am happy with my current result: I have 900GB of storage (not in reality, as most of it is duplicated; ~500GB is more realistic). Only one HDD spins when it's being used as a "General use PC", and two HDDs spin when I am using my "Games Machine" (the GPU fan speeds up a lot, but my speakers are on so I don't notice). If anyone else on the network wants anything from my "Server", I am running at gigabit speeds and reach 40+ MB/s for large files.

The big question is: which SPCR readers are seriously interested in this product? Why? Where will you put it? What will you use it for? How many PCs will be using it?

BTW, Netgear has a very, very cheap DIY unit. It is VERY compromised: it only takes 2 HDDs, and they are PATA; it does do RAID 0/1, but it's limited to 100Mbit ethernet. However, its saving grace is its price, ~£60 + HDDs. It's passively cooled - I don't know whether the cooling is good enough! I am also unsure whether it has an internal PSU+fan or a power brick. This might be ideal for someone to put one or two 2.5" drives into and not worry about noise or heat. Performance is OK, but not fantastic.

http://www.netgear.com/products/details/SC101.php


Andy

zds
Posts: 261
Joined: Tue Aug 03, 2004 1:26 pm
Location: Helsinki, Finland
Contact:

Post by zds » Tue May 09, 2006 8:00 am

defaultluser wrote:Best of all, performance is MUCH better than any of these NAS devices
I seriously doubt this unless you can give me some hard numbers. USB was designed as an asynchronous bus that is cheap to implement, which means that in reality it's hard to get transfer speeds anywhere near the theoretical maximum.

FireWire does a lot better, and naturally eSATA does even better. But in terms of reliability and bandwidth, gigabit ethernet is also a tough one to beat: there is dedicated cabling between any two points (remember, USB is a bus) and it's built to work reliably over distances a few orders of magnitude greater than USB (5m vs. 1500m).

Olaf van der Spek
Posts: 434
Joined: Tue Oct 04, 2005 6:10 am

Post by Olaf van der Spek » Tue May 09, 2006 9:35 am

Devonavar wrote:USB 2 = 480 Mbps; GigE = 1000 Mbps (though latency seems worse)
Why does/would ethernet have higher latency?

Olaf van der Spek
Posts: 434
Joined: Tue Oct 04, 2005 6:10 am

Re: Infrant!

Post by Olaf van der Spek » Tue May 09, 2006 9:36 am

valnar wrote:Well, if you went with software RAID.
Why would you not go with Linux software RAID?

Olaf van der Spek
Posts: 434
Joined: Tue Oct 04, 2005 6:10 am

Post by Olaf van der Spek » Tue May 09, 2006 9:39 am

zds wrote:there is dedicated cabling between any two points (remember, USB is a bus)
Not completely true, it assumes the network is idle.
zds wrote:and it's built to work reliably over distances a few orders of magnitude greater than USB (5m vs. 1500m).
1500m?
That's optical and most home networks use copper (I guess).

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Post by andyb » Tue May 09, 2006 10:15 am

Using Cat5e or ideally Cat6 network cables (copper), the maximum cable length is 100m (about 328ft). USB is 5m (about 16ft); beyond that, data corruption may occur.

Gigabit over fiber (fibre :)) is 1500m (a lot of feet).

FireWire - not sure; 5m at a guess.

eSATA is 2m (6ft).

Networking is ultra reliable and is designed for constant use and, essentially, streaming, because it sends the data as lots of small packets. FireWire is also designed for streaming video, etc.

USB has very large overheads, which is why it's quoted at 480Mbit/s vs FireWire's 400 but is ALWAYS slower in practice. However, it's dirt cheap to implement and everyone's got USB (unlike FireWire).

Networking is superb and gigabit is fantastic. The only way to make gigabit better is to use HUGE packets (jumbo frames), which reduces the CPU usage at both ends; however, EVERY device on the network needs to support them and have them turned on! Bummer.
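
To give a feel for why the huge packets help, here is a rough sketch of how many frames per second each end has to process at gigabit line rate (IP/TCP headers ignored; the 38 bytes is just the Ethernet framing overhead):

Code:
# Frames per second at gigabit line rate, standard vs jumbo frames.
LINE_RATE = 1_000_000_000   # bits per second
OVERHEAD = 38               # bytes per frame: header + FCS + preamble + inter-frame gap

def frames_per_second(mtu_bytes):
    return LINE_RATE / ((mtu_bytes + OVERHEAD) * 8)

std = frames_per_second(1500)
jumbo = frames_per_second(9000)
print(f"1500-byte frames: ~{std:,.0f} per second")
print(f"9000-byte frames: ~{jumbo:,.0f} per second ({std / jumbo:.1f}x fewer to process)")

Far fewer frames per second means far fewer interrupts and protocol-stack passes for the CPUs at both ends to service.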

NAS is ideal for the future, but it's not quite ready for the mass market yet; give it 12 months and it will be.


Andy

IsaacKuo
Posts: 1705
Joined: Fri Jan 23, 2004 7:50 am
Location: Baton Rouge, Louisiana

Re: Infrant!

Post by IsaacKuo » Tue May 09, 2006 10:58 am

Olaf van der Spek wrote:
valnar wrote:Well, if you went with software RAID.
Why would you not go with Linux software RAID?
Especially considering this product almost certainly uses Linux software RAID anyway...

valnar
Posts: 175
Joined: Wed Apr 19, 2006 4:19 am
Location: OH, USA

Re: Infrant!

Post by valnar » Tue May 09, 2006 11:24 am

IsaacKuo wrote:
Olaf van der Spek wrote:
valnar wrote:Well, if you went with software RAID.
Why would you not go with Linux software RAID?
Especially considering this product almost certainly uses Linux software RAID anyway...
I believe their CPU has hardware RAID, although I'm not sure.

Robert

zds
Posts: 261
Joined: Tue Aug 03, 2004 1:26 pm
Location: Helsinki, Finland
Contact:

Post by zds » Tue May 09, 2006 1:49 pm

Olaf van der Spek wrote:
zds wrote:there is dedicated cabling between any two points (remember, USB is a bus)
Not completely true, it assumes the network is idle.
I said the *cabling* is dedicated. This means that in each piece of copper only two chips are talking to each other, even though they might be carrying data for several pieces of software. USB, on the other hand, is a bus, so there might be multiple devices talking over the same copper. Even if there is just one device at each end, the protocol is still built to support multiple devices and is thus not optimal for high-speed point-to-point communication.

Devonavar
SPCR Reviewer
Posts: 1850
Joined: Sun Sep 21, 2003 11:23 am
Location: Vancouver, BC, Canada

Post by Devonavar » Tue May 09, 2006 2:08 pm

Olaf van der Spek wrote:
Devonavar wrote:USB 2 = 480 Mbps; GigE = 1000 Mbps (though latency seems worse)
Why does/would ethernet have higher latency?
Hmm, maybe I'm wrong about this; I don't have any hard data to support it. I was basing it on my subjective impressions. I find that USB drives don't have the lag that networked drives do. I notice that it generally takes a second or two to open a networked directory, but that second or two is not there with a USB drive.

zds
Posts: 261
Joined: Tue Aug 03, 2004 1:26 pm
Location: Helsinki, Finland
Contact:

Post by zds » Tue May 09, 2006 2:31 pm

Devonavar wrote:I notice that it generally takes a second or two to open a networked directory, but that second or two is not there with a USB drive.
I guess the reason is the NetBIOS protocol used by Windows shared directories. It's frankly quite abysmal performance-wise.

IsaacKuo
Posts: 1705
Joined: Fri Jan 23, 2004 7:50 am
Location: Baton Rouge, Louisiana

Post by IsaacKuo » Tue May 09, 2006 2:51 pm

Devonavar wrote:I was basing it on my subjective impressions. I find that USB drives don't have the lag that networked drives do. I notice that it generally takes a second or two to open a networked directory, but that second or two is not there with a USB drive.
That's weird... I don't experience that delay on most shares, whether the file server is Windows or Linux. At work, there are a few file servers with big delays when browsing into a directory, though. I don't know what causes it, but it sure is annoying and sluggish.

Devonavar
SPCR Reviewer
Posts: 1850
Joined: Sun Sep 21, 2003 11:23 am
Location: Vancouver, BC, Canada

Post by Devonavar » Tue May 09, 2006 3:02 pm

zds wrote:I guess the reason is the NetBIOS protocol used by Windows shared directories. It's frankly quite abysmal performance-wise.
Ah, I must be blaming the wrong thing then. Are there any alternatives to NetBIOS?

drag
Posts: 6
Joined: Wed May 10, 2006 6:15 pm

Post by drag » Wed May 10, 2006 7:10 pm

valnar wrote:
IsaacKuo wrote:
Olaf van der Spek wrote: Why would you not go with Linux software RAID?
Especially considering this product almost certainly uses Linux software RAID anyway...
I believe their CPU has hardware RAID, although I'm not sure.

Robert
On a little box using an XScale processor, hardware RAID would be faster. But, believe it or not, on a PC with at least a semi-modern processor, Linux software RAID is generally faster. It can actually be very much faster.

Keep in mind what you're dealing with here.
Real hardware RAID has a dedicated processor to perform the calculations needed to stripe data across multiple devices. These aren't specialized parts; they are generally just small embedded-style processors running at a few hundred MHz at most. With a PC you can get a 2.5-3GHz processor dirt cheap, like in a low-end Dell server for 400 bucks. That will easily outperform any processor you'd find on affordable real hardware RAID (the kind of card you can get for around 400 dollars).
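
If you want a feel for what those calculations actually are, here is a toy sketch of RAID 5 parity (purely illustrative; it has nothing to do with how md really lays data out on disk):

Code:
# Toy RAID 5 parity: XOR the data blocks of a stripe to get the parity block,
# then use the parity plus the survivors to rebuild a failed drive's block.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"      # one stripe of a 4-drive array (3 data + 1 parity)
parity = xor_blocks([d0, d1, d2])
assert xor_blocks([d0, d2, parity]) == d1   # "drive 1" died; XOR brings its block back

The point is just that it's plain XOR over every byte written, and a modern desktop CPU chews through that far faster than a couple-hundred-MHz embedded chip.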

On top of that, the way data is managed and the algorithms used in Linux MD RAID are very smart and efficient. And since the CPU is stuck in I/O wait most of the time, waiting for the hard drives to find the information programs need, it costs you essentially nothing to spend those spare CPU cycles on software RAID.

Keep in mind also that 90% of what people think is 'hardware RAID' isn't. If you have a 'RAID' chip on your motherboard, for instance, or on a low-end card (say costing 200 dollars or less), the vast majority of these are best described as 'fake RAID'. They work in a similar fashion to winmodems, where all the work is done in software by the drivers. Usually this is pretty low-class stuff; Linux MD (software) RAID is usually better than what you'd get from the vendor drivers, although the Linux DM layer is supporting more and more of these products, since some do have a rudimentary ability to offload certain calculations.

The limitations you run into with software RAID on Linux have mostly to do with PC-class hardware. The PCI bus is limited by its bandwidth: after about 4 or 5 hard drives on it, you're going to run into bandwidth limitations. Hardware RAID setups don't typically push everything over the PCI bus and thus can scale higher. Linux MD doesn't support hot-swapping yet, although SATA should technically be hot-swappable out of the box, so you need hardware RAID if you want to replace a failed hard drive without shutting down the box. Good hardware RAID provides CPU offload, which can help performance on heavily loaded servers. Hardware RAID will also often provide advanced data protection features that you're not going to find in Linux software RAID. For instance, it's not impossible that when a hard drive is going bad on a Linux system, corrupted data gets striped across the RAID array before the OS notices that the drive is failing and remounts everything read-only or crashes.

So hardware RAID is very nice, but if you're looking for the best performance without breaking the bank, something like a low-end dedicated file server running a Pentium D (or AMD dual core) and Linux software RAID will be fastest. Especially if you can get all the I/O onto the PCI Express bus, to get around the nasty PCI bandwidth limitations of older PCs.

Devonavar wrote:
zds wrote:I guess the reason is the NetBIOS protocol used by Windows shared directories. It's frankly quite abysmal performance-wise.
Ah, I must be blaming the wrong thing then. Are there any alternatives to NetBIOS?
Not for Windows; SMB is your best choice there.

For Windows Server there is DFS, but that's mostly enterprise stuff. I don't know much about it.

It's not really NetBIOS anymore. The protocol, in its latest incarnation, is called SMB, for "Server Message Block". Or, if you want to get really fancy, it's called CIFS, for "Common Internet File System" (which it isn't! It's not safe enough for internet use; only use it on protected networks).

CIFS stems from IBM and friends trying to make a standardized protocol. They tried to clean it up and such.

This thing isn't that bad, though; it's pretty fast. On my system I tried out Samba (Windows file and print services for Linux and basically anything non-Windows), and it would take about 2 minutes and 15 seconds to copy a 700+ MB file off the server.

The reason it takes that long is just that SMB has a lot of overhead. It's got a lot of cruft built up over the years, and it wasn't that good to begin with.

In Linux/Unix land people tend to use NFS (Network File System), which is an older protocol, but it's fast. The downside is that security is pretty much non-existent: anybody who can get physical access to your network can spoof your client's IP address or DNS name (depending on how you have it set up) and download all the files on your server very easily (which is why I keep an encrypted directory on it for the important stuff). But it's faster than SMB.

The same file that took over two minutes on SMB took slightly less than a minute on NFS. With just a bit of overhead from TCP/IP, the bottleneck was the hard drives.
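
In other words (assuming the file was right around 700 MB), the effective throughput works out roughly like this:

Code:
# Effective throughput from the copy times above, assuming a ~700 MB file.
file_mb = 700
for name, seconds in [("SMB", 135), ("NFS", 60)]:
    mb_s = file_mb / seconds
    print(f"{name}: {mb_s:.1f} MB/s ({mb_s * 8:.0f} Mbit/s)")

Roughly double the throughput from the same hardware, just by changing the protocol.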

NFS and SMB are pretty much what people use; SMB mostly because it's what Windows uses and everybody needs to stay compatible with Windows.

The other network file system I use is a smaller one called SSHFS, which uses Linux FUSE to export a file system over SSH (secure shell). Everything is encrypted and authentication is strong, so it's safe to use over the internet, and I use it on my laptop when I'm out and about. It's pretty fast (still faster than SMB), but the encryption has a lot of CPU overhead.

The nice thing about NFS is that it supports all the special Linux file types, such as named pipes and sockets. So you can run a remote root with it and easily boot your entire operating system off a server. The Samba guys are aiming for this level of compatibility with Linux, but it's not going to happen any time soon. I don't know about Windows.

Other, more exotic network file systems include OCFS2, Lustre, GFS, OpenAFS, and so on, but for home users SMB or NFS will work best.

Probably what you can do for Windows is just have a system with a lot of RAM and install only the operating system on it. Then you can install all your games and other software on the server. That way you can hopefully tweak your system not to hit the swap file on the local disk too much, and keep your drive in sleep mode as much as possible.

Otherwise, you can have pretty long SATA cables. External SATA enclosures aren't too expensive, and you can get ones that hold several drives, so you can end up with pretty massive storage in its own box away from you, or in a closet, or something like that.

Like the ones shown here:
http://www.macgurus.com/productpages/sata/satakits.php

With eSATA you can have cables up to two meters long. So there are lots of possibilities there. I expect that it would be just as fast as local storage, since that's pretty much what it would be.
