
NAS build: part list feedback?

Posted: Mon Jan 27, 2014 7:11 pm
by TomasC
I am looking to build or buy a NAS. It would be my first NAS, so I have been researching this for quite a while now. If you just want to see the part list, scroll down. If you want to read what I'd like it to be and do, continue on! :)

My priorities for this system are (in order):
  1. maximize data safety (protect against hardware failure (RAID support), power failure (UPS or encrypted cloud storage or ...), accidental deletion (versioning), ...)
  2. minimize noise (its place will be approximately 1 meter (3 feet) from my ears so a fanless build would probably be best)
  3. minimize total cost of ownership (price of parts, replacement drives, power usage, ...)
  4. DLNA
  5. USB 3
  6. processing power to run a SMALL web server
Only the first three are really important. The rest is only nice to have.

To give an idea of scale and usage:
  • I would like to keep this system unaltered for six to eight years (only possibly adding or replacing hard disks). I think this is realistic; I managed to keep my last desktop system for seven years with minor upgrades and the current is approaching four.
  • I would currently need about 2 TiB of actual storage. Given the years it needs to survive and the moderate rate my storage needs grow, I would like to provision 6 TiB at the very least.
  • The system will probably be idle more than 95% of the time. It will only be used for storing vacation photos and videos, some office documents and some source code. I will manually copy over some files and let it synchronize automatically and very regularly for day-to-day changes (usually documents). (I'm considering something like git combined with rsync; see the rough sketch just below this list.)
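To give a rough idea of what I mean (only a sketch; the hostname, paths and git remote below are placeholders and I haven't settled on the exact tooling):

    # nightly sync of day-to-day documents to the NAS (placeholder hostname/paths)
    rsync -a --delete ~/Documents/ nas:/volumes/data/backup/documents/

    # versioned copy of the source code, assuming a bare repository on the NAS
    # reachable through a git remote named "nas"
    git -C ~/code add -A
    git -C ~/code commit -m "daily snapshot"
    git -C ~/code push nas master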
After reading plenty of reviews, the Western Digital Red series seems great. They are low power, low noise and the latest generation supports TLER (which I understand is very useful when deploying RAID). (Source: http://www.pcper.com/reviews/Storage/We ... lusion-Pri and others)

Regarding hard drives, the 2.5" format saves on power and produces less heat but the cost per TiB is so much higher than the 3.5" format that 3.5" is the better choice for a home NAS.

Regarding memory, 1.35 volt memory is the better long-run choice over 1.5 volt. The price is slightly higher while the performance is the same, but it runs cooler and the power savings outweigh the price difference over time. (Source: http://www.servethehome.com/testing-pow ... el-avoton/)

I won't modify hardware. I can build a system well enough but I'm too clumsy and inexperienced to start sawing, drilling or hammering anywhere near electronics or anything expensive. I broke a power supply, a V.90 modem and a motherboard before I finally learned this lesson. (Don't ask. :D)

I have experience installing, configuring and running several Linux distributions so installing Debian or NAS4Free and configuring the RAID, UPS, synchronization and versioning myself is not a problem. I considered using Linux From Scratch (because I've always wanted to build a bare minimal server) but decided against it because I'm not willing to spend the time required to constantly check for and apply (security) updates manually.

I don't care if it is a pre-built system (like from Synology or QNAP) or something I build myself. However, the only pre-built fanless NAS I've found is the HFX PowerNAS and it is expensive (1295 EUR excluding drives and shipping). They offer the case they use separately, though, which I've used as a basis for my first attempt at composing a NAS.

This is my part list so far:
  • Case: HFX PowerNAS enclosure (5 internal 3.5" bays, 1 2.5" bay, heatsink-like exterior, mini-ITX compatible, passively cools up to 65 Watt TDP, 249 EUR excluding shipping) (http://www.hfx.at/store/index.php?page= ... &Itemid=54)
  • Cooling: HFX BorgFX micro (CPU heatsink and heat pipes connecting to the case walls, 59 EUR excluding shipping) (http://www.hfx.at/store/index.php?page= ... &Itemid=54)
  • Motherboard: Super Micro X9SBAA-F (4x SATA3, 2x USB3, 2x Gbe LAN, IPMI, 217 EUR excluding shipping) (http://www.supermicro.com/products/moth ... SBAA-F.cfm)
  • Memory: 2x Kingston KVR13LSE9/2 (DDR3 SO-DIMM 1333 MHz, 1.35 V, 2 GB, 33 EUR per stick excluding shipping)
  • Power supply: Mini-box picoPSU 80 + 60W Adapter (52 EUR excluding shipping)
  • Hard drives: 4x Western Digital RED WD30EFRX (3 TB, 3.1 Watt idle, 0.4 Watt sleeping, EUR 107 per drive excluding shipping)
Total cost: 1071 EUR
Total capacity: 12 TB raw (6 TB in RAID 10 or 6; 9 TB in RAID 5)

Questions:
  • What do you think about the part list in general?
  • I like the motherboard because it consumes, excluding disk drives, only 12 Watt idle and features USB 3. Are there much lower power or cheaper parts available that can provide what I want? I've considered ARM but can't seem to find boards with enough SATA ports or USB 3 support that are available.
  • Would you place the system OS on the RAID array or on a separate drive? Why?
  • I am planning to run software RAID since there are motherboards with enough SATA ports and software RAID is probably cheaper (in terms of hardware and power usage). Would I be better off with a RAID card? Which would you recommend?
  • The case is by far the most expensive part. Since this will be a very low power setup (I'm hoping for less than 20 Watts with the drives installed and idle), would I be able to use a cheaper case?

Thanks for reading! Any and all comments are appreciated. :wink:

Re: NAS build: part list feedback?

Posted: Mon Jan 27, 2014 11:16 pm
by Abula
TomasC wrote: What do you think about the part list in general?
It seems you have done good research and chosen very good parts; I like the case. That said, I'm not sure about going fully fanless. I used to have an Acer AH342 (WHS v1) with an Intel Atom D510 (dual-core, 1.66 GHz); it idled close to 50°C and loaded at around 65-70°C depending on ambient temperature, and that pre-built server had a 120mm fan in it. So fully fanless might get quite hot, then again Atoms are built to withstand a lot of heat and on a lot of motherboards they are fanless. I'll leave you a pic of the temps on my setup:

Image

Make sure you will actually be able to mount the heat pipe setup; in my Atom setup the CPU was soldered and there was no way of mounting another heatsink on it.

Personally I don't fear fans. While they're one of the biggest sources of noise, if you choose them correctly you can in most cases lower them to inaudible levels, where they make a big difference, and even a little airflow helps a lot in keeping heat from being trapped. If you were to consider fans and pre-builts, I like Synology a lot; their OS is pretty nice, with lots of extras and apps. For example the Synology DS412+ DiskStation (Diskless) 4-bay desktop NAS enclosure: they can run RAID and usually use very low-powered CPUs. In the past they used small fans that were noisy, but they have moved to 92mm fans; I have no experience with these newer units, so I can't say for sure they are quiet.

If you still want to pursue fanless, and if you could get by with a 2-HDD setup, there is one recently introduced to the market: the QNAP HS-210 Silent/Fanless Stylish Set-Top Network Attached Storage.

Either way, if you go ahead with your well-planned build, be sure to share how it went, with some pics and especially temps; I'm really interested in what the HDD and CPU temperatures will be fully fanless.
TomasC wrote: I like the motherboard because it consumes, excluding disk drives, only 12 Watt idle and features USB 3. Are there much lower power or cheaper parts available that can provide what I want? I've considered ARM but can't seem to find boards with enough SATA ports or USB 3 support that are available.
Atoms are kinda tricky. I have played with two setups and consumption was not as low as I expected; my Atom-based Acer server idled above 25W and, if I'm not mistaken, was around 40W under load. But that was a long time ago and the Atom you are choosing is newer, so it might be different nowadays.
TomasC wrote: Would you place the system OS on the RAID array or on a separate drive? Why?
It really comes down to what OS you will use. In most servers there is not much difference, especially if the server is always up; SSDs help a lot with booting and powering off, but once everything is loaded the gains are not that dramatic, and it also depends on what the server will do. Depending on the OS there might be benefits: I have read that some OSes use separate HDDs/SSDs as caches, and in others the real-time parity is calculated on the fly, so people lose a lot of time moving files while the OS does both at once; a cache drive helps that process because you don't really move files to the array but to a separate disk, and the server moves the data onto the array later. I did use an SSD in my server, but it was a leftover SATA II drive; I had a spare SATA port and didn't want to include the OS drive in the pool, so I went with it. There is not much of a gain for me beyond starting and shutting down, and even then the HBAs take so long to boot that it doesn't feel like there's an SSD in it.
TomasC wrote: I am planning to run software RAID since there are motherboards with enough SATA ports and software RAID is probably cheaper (in terms of hardware and power usage). Would I be better off with a RAID card? Which would you recommend?
This is a tricky question. We come from a culture where software RAID sucked and everyone swore hardware RAID was the thing, but lately there have been so many advancements in filesystems and software RAID that many have moved away from hardware RAID. I'm sure you can google the pros and cons of each, so I suggest you do that.

RAID has to be seen as just uptime for your server or information, not as a real backup. RAID on its own is very dangerous: depending on how much parity you have, if more drives than your parity fail, or if another HDD fails during a rebuild, you will lose all your data. For this reason a lot of people have moved away from standard software and hardware RAID to setups that are more resilient and dependable. The approach Microsoft took was very inefficient but still has its pros: they basically went with duplication, keeping the same information on multiple drives. Their setup is meant to let HDDs fail and still have accessible data, since there is no real array but a pooling of drives, and if your data is important you should back it up externally anyway. But Windows is known for other issues on servers, for example being quite prone to silent data corruption.

There are others like unRAID that have a weird RAID 4-like setup. It's not a real array like RAID 5/6 but parity-based, with all drives independent, so if more HDDs than the parity fail you still have access to your information; you only lose what was on the failed HDDs. The downside is that it is very slow at writing, though as fast as your HDDs at reading (there is no speed increase like in RAID 5/6). But it's a very simple storage OS that has grown a lot, now has a lot of supported add-ons, and the community is great and helps a lot with hardware and even with setting up the server. Then there are more complex setups like ZFS, which has multiple software RAID layouts with a lot of failsafes to avoid corruption and to be very reliable, but it is not as simple; you have to read a lot, and to me it wasn't worth it since I don't care much about the info I have on my server.

Even pre-builts like Synology support multiple RAID setups, including what they call hybrid RAID, where you can grow the server as you add more drives. There are lots of other options, like FreeNAS/NAS4Free, Amahi, SnapRAID, etc. None is better than the others; they are simply different, all with their pros and cons, but it's worth checking them all and seeing what fits best with what you are building.

Just one last thing about the RAID card: it will consume 5-15W, so if you are pursuing a low-power setup this would work against you. Also, a lot of RAID cards get extremely hot, and in a fanless environment you will probably cook them fast, so I would go with software... which one is up to you; it really comes down to what you will use it for.
TomasC wrote: The case is by far the most expensive part. Since this will be a very low power setup (I'm hoping for less than 20 Watts with the drives installed and idle), would I be able to use a cheaper case?
I really don't know. Fanless, I doubt it, and it is expensive at 250; there are lots of compact cases that cost less than half that, but with fans. Personally I don't want to discourage you from building it, and I really like the looks of the case, but I don't know how it will go fully fanless; I'm interested in your results, though.

Good luck on the build.

Re: NAS build: part list feedback?

Posted: Fri Jan 31, 2014 2:26 am
by HFat
I'm a bit late but...
Linux software RAID works well. For most applications, it's less trouble than hardware RAID. And you'll have little use for an SSD unless you want very fast (re)boots. You can have several RAID partitions organized differently on the same drives, by the way. RAID1 is the way to go for a small system partition.
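For instance, a minimal sketch of that kind of layout with Linux mdadm (the device names, partition scheme and the RAID6 choice for the data array are only illustrative; adjust to your actual drives):

    # small RAID1 system array across all four drives (sdX1 = small system partition)
    mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[bcde]1
    # large data array on the remaining space (sdX2 = bulk data partition)
    mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[bcde]2
    mkfs.ext4 /dev/md1
    # record the arrays so they assemble automatically at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf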
I'd use a quiet fan considering you're running HDs. I don't know about this CPU, but the older Atoms with a higher TDP rating (for what that's worth) didn't need pipes to be cooled, especially with a little airflow around (no need for a fan blowing directly on the heatsink). My experience running these without any fans and with stock heatsinks over the years is that they're reliable even when they run hot. If you're planning to spend a lot of money on a case you can use pipes with, my opinion is that it's a waste. And risky. The CPU isn't the only source of heat.
I wouldn't use a pico for a NAS, especially if you want it to be reliable but maybe that's just me.

Have you considered an old Microserver (not the current version)?

Re: NAS build: part list feedback?

Posted: Wed Mar 05, 2014 5:02 am
by TomasC
Thank you, Abula and HFat, for your replies!
Abula wrote:That said, I'm not sure about going fully fanless. I used to have an Acer AH342 (WHS v1) with an Intel Atom D510 (dual-core, 1.66 GHz); it idled close to 50°C and loaded at around 65-70°C depending on ambient temperature, and that pre-built server had a 120mm fan in it. So fully fanless might get quite hot, then again Atoms are built to withstand a lot of heat and on a lot of motherboards they are fanless.[snip]
Make sure you will actually be able to mount the heat pipe setup; in my Atom setup the CPU was soldered and there was no way of mounting another heatsink on it.
I contacted HFX. They informed me that their heat pipe solution is incompatible with most Atom setups. Thanks for pointing this out.

I think the HFX PowerNAS enclosure really relies on the heat pipes to get the heat to the heatsink-styled outer casing. Since I won't be able to mount the heat pipes, I'm discarding the entire case as an option. I'm now looking for something with a lot of 'natural air flow' (read: holes :-)) in the hope I still won't have to provide any active cooling.
Abula wrote:Atoms are kinda tricky. I have played with two setups and consumption was not as low as I expected; my Atom-based Acer server idled above 25W and, if I'm not mistaken, was around 40W under load. But that was a long time ago and the Atom you are choosing is newer, so it might be different nowadays.
I based my power consumption on some tests by ServeTheHome.com (nicely summarized here: Intel Atom C2550 Power Consumption and Comparison). It shows 12 Watts idle for the Super Micro X9SBAA-F with the Atom S1260 (including RAM and an SSD). It also shows 24 Watts idle for the older Atom D525, confirming your observations :-)
Abula wrote:RAID has to be seen as just uptime for your server or information, not as a real backup. RAID on its own is very dangerous: depending on how much parity you have, if more drives than your parity fail, or if another HDD fails during a rebuild, you will lose all your data. For this reason a lot of people have moved away from standard software and hardware RAID to setups that are more resilient and dependable.
As I understand it, RAID will protect your data from (some) hardware failure but not from user failure (e.g. accidentally deleting a file) or software failure (e.g. software corrupting a file). That's why I plan on combining RAID with Git (versioning) to cover most bases.
Abula wrote:There are others like unRAID that have a weird RAID 4-like setup. It's not a real array like RAID 5/6 but parity-based, with all drives independent, so if more HDDs than the parity fail you still have access to your information; you only lose what was on the failed HDDs. The downside is that it is very slow at writing, though as fast as your HDDs at reading (there is no speed increase like in RAID 5/6). But it's a very simple storage OS that has grown a lot, now has a lot of supported add-ons, and the community is great and helps a lot with hardware and even with setting up the server. Then there are more complex setups like ZFS, which has multiple software RAID layouts with a lot of failsafes to avoid corruption and to be very reliable, but it is not as simple; you have to read a lot, and to me it wasn't worth it since I don't care much about the info I have on my server.
I hadn't looked into Unraid yet; thanks for the suggestion.
Abula wrote:Just one last thing about the RAID card: it will consume 5-15W, so if you are pursuing a low-power setup this would work against you. Also, a lot of RAID cards get extremely hot, and in a fanless environment you will probably cook them fast, so I would go with software... which one is up to you; it really comes down to what you will use it for.
Again, thanks for the information; I hadn't taken that into account yet. Now I'll definitely go for software RAID :-)
HFat wrote:I wouldn't use a pico for a NAS, especially if you want it to be reliable but maybe that's just me.
I figured if it's capable of supplying enough power and the external adapter is made for continuous use, it'd be OK (a lot of the pico articles mention adapters made for signage, like roadside signs). Do picos have a bad reputation for reliability?
HFat wrote:Have you considered an old Microserver (not the current version)?
No, I haven't. Aren't these quite noisy (for instance using really small really high RPM fans)?

Re: NAS build: part list feedback?

Posted: Wed Mar 05, 2014 8:40 pm
by Abula
TomasC wrote:
Abula wrote:RAID has to be seen as just uptime for your server or information, not as a real backup. RAID on its own is very dangerous: depending on how much parity you have, if more drives than your parity fail, or if another HDD fails during a rebuild, you will lose all your data. For this reason a lot of people have moved away from standard software and hardware RAID to setups that are more resilient and dependable.
As I understand it, RAID will protect your data from (some) hardware failure but not from user failure (e.g. accidentally deleting a file) or software failure (e.g. software corrupting a file). That's why I plan on combining RAID with Git (versioning) to cover most bases.
Again, don't see RAID as a true backup, because it is not; it's just uptime. No matter how many parity drives you have, it can still fail. What RAID is meant for is sustaining HDD failures while continuing to give you access to your data, but one or more drives can fail, they can even fail during a rebuild and then all your information is lost. RAID should never be considered a backup; true backups should be external to your server. It's like taking chances: the more parity drives you have, the less likely it is to happen, but it still can happen, and will happen, it's just a matter of time and events. Now, there are external backup services like Crashplan that can help you keep your really important information outside your PCs, and they have multiple setups that are far more likely to guarantee the uptime and backup of your data. But it all comes down to how important what you are storing is to you; it's your money, and you should weigh all the variables before committing to anything.

Since you were looking at the STH C2550 article: if your budget allows, also check the new Intel Atom 8-core C2750. One motherboard that recently caught my attention is the ASRock C2750D4I Mini ITX Server Motherboard (FCBGA1283, DDR3 1600/1333). While very expensive, it has some nice features: fanless, the new 8-core Atom CPU, mini-ITX, 12 SATA ports. So it might be of interest to you; there is a Supermicro version as well. I'll leave you some links in case you want to read more:

ASRock C2750D4I Review – Intel Atom C2750 Storage Platform
Intel Atom C2750 – 8 Core Avoton / Rangeley Benchmarks – Fast and Low Power

Good luck with the choices. If you do build the ASRock, let me know; it's a motherboard that has been on my mind for a couple of months now. I just wish there were mini-ITX enclosures with more HDD slots, since with a 16-port HBA I could net 28 HDDs, but there is nothing on the market at the moment that could fulfil this. Who knows, maybe in the future.

Re: NAS build: part list feedback?

Posted: Wed Mar 05, 2014 11:24 pm
by boost
I think Bay Trail is the way to go; the Supermicro A1SRi-2558F (4-core Bay Trail) will come out in ~2 weeks. It has 6 SATA ports, enough for your NAS.
The HFX solution is incompatible and total overkill for an Atom board. If you're worried about CPU temps, a chipset cooler with a heatpipe like the Xigmatek Porter is the biggest that fits, but with normal CPU load it isn't really needed.
For a server that's up 24/7 i would suggest ECC RAM.
The HFX PowerNAS is great for cooling a normal CPU, but the hard drives have no dampening. If the server is that close to you, you should decouple the hard drives. That requires a different case. The Lian Li PC-Q35A has 5 5.25" bays you could use with hard drive decouplers (SPCR review).
The hard drives need quite some power for spinup, usually around 1Amp@12V and 0.5Amp@5V.
TomasC wrote:
HFat wrote:Have you considered an old Microserver (not the current version)?
No, I haven't. Aren't these quite noisy (for instance using really small really high RPM fans)?
The Microserver would be ideal for you and you can get one for less than 200€.
The standard fan isn't quiet, but it's a 12cm model that can be replaced.

Re: NAS build: part list feedback?

Posted: Thu Mar 06, 2014 12:11 pm
by matt_garman
Just for comparison's sake, you might want to check out a company called U-NAS, which makes small DIY NAS enclosures. I have two systems built with the NSC-800, which holds eight 3.5" drives. They also make an NSC-400, which holds four drives. The stock fans in the NSC-800 are 10k RPM 120mm units, and I would call them very quiet. They wouldn't be considered "silent" by this site's demanding standards, but anyone else would call them silent. :)

Here is a writeup (with pics) of the main server for my house over on STH forums.

That ASRock C2750D4I motherboard that Abula mentioned looks wonderful. In fact, see also this STH thread where I debated using that board (Avoton) versus a Haswell Xeon. Ultimately, I got impatient and went with the Xeon. This turned out to be a good move because the ASRock board uses the Marvell SE9230 and SE9172 SATA chipsets, which, at the time, had serious problems in Linux (which is what I'm running). They may have been fixed since then, so, caveat emptor. If the Marvell issues get fixed, that board, while pricey, does look really nice. It has all those SATA ports, dual Intel NICs, a dedicated LAN port for IPMI, 4x full-size DDR3 RAM slots (with ECC support), low power consumption, and significantly better performance than previous-gen Atoms.

Also from STH, here's a writeup by someone who did a build with that ASRock board. He's using FreeNAS, which is built on FreeBSD, and presumably (hopefully?) doesn't suffer the same Marvell SATA issues.

I use my NAS as more than just a NAS; I also run some random stuff, including MythTV and use it for some hobby programming. In hindsight, I think I'd be better off using that ASRock board and FreeNAS (assuming FreeBSD doesn't suffer the Marvell SATA issues) for a pure NAS device. Then I'd build another machine as my actual server; on this I'd run some kind of hypervisor, and connect storage to it using something like iSCSI from the NAS. Then I could have all the individual services of the server neatly compartmentalized into their own VMs. I could also integrate my pfSense box into this setup for further hardware consolidation. But that's an expensive and time-consuming change; I'll save that for a few years down the road.

Re: NAS build: part list feedback?

Posted: Fri Mar 07, 2014 10:06 am
by TomasC
Thanks, Abula, boost and matt_garman, for the feedback!
Abula wrote:Again, don't see RAID as a true backup, because it is not; it's just uptime. No matter how many parity drives you have, it can still fail. What RAID is meant for is sustaining HDD failures while continuing to give you access to your data, but one or more drives can fail, they can even fail during a rebuild and then all your information is lost. RAID should never be considered a backup; true backups should be external to your server.
I think I have a good understanding of what RAID is and isn't. Thanks for the concern, though :-). This NAS build would function as the external backup you mention (backing up my desktop and laptop PC). I am considering adding a UPS and/or a second level of backup (for example using the Crashplan service you mention) after this project is done.

Abula wrote:Since you were looking into SVT C2550 article, if your budget allows, check also the new Intel atom 8 core C2750
boost wrote:I think Bay Trail is the way to go, the Supermicro A1SRi-2558F4-core Bay Trail will come out in ~2 weeks.
matt_garman wrote:That ASRock C2750D4I motherboard that Abula mentioned looks wonderful.
The new Atom range (C2xxx) was actually my first choice for this NAS project. They are more power-efficient and more powerful than the S1260 I eventually selected. What brought me to the S1260 is that the new C2xxx range has a slightly higher idle power usage compared to the S1260 (4 Watts higher, according to ServeTheHome) and I suspect my system will be idle more than 95% of the time. They also cost around €100 more, as you mention. These are still acceptable prices and power consumption figures, but I simply won't need the extra processing power, since in its role as a NAS I suspect the S1260 will be fast enough doing RAID parity calculations to saturate a single gigabit network connection.

boost wrote:The HFX solution is incompatible
You are right. I discarded it as an option when I learned the heat pipes would not fit the Atom board. I am going to use a different case (see new part list below).

boost wrote:For a server that's up 24/7 i would suggest ECC RAM.
I agree. The motherboard and RAM I had selected are ECC :-)

boost wrote:If the server is so close to you you should decouple the hard drives. That requires a different case. The Lian Li PC-Q35A has 5 5.25" bays you could use with hard drive decouplers (SPCR review).
That's a great suggestion. I hadn't even thought about that. I'm not sure if the drives (or the vibrations they create in the case) will be too loud. I have a WD20EARS mounted in a plastic tray in my desktop (same distance from me as the NAS would be) and I never hear it. On the other hand, that desktop is switched off when it's quiet in my apartment. Something I'll have to consider some more... :-)

boost wrote:The Microserver would be ideal for you
The Microserver is surprisingly close to what I want; I didn't know HP made such small servers. (I only have work-related experience with their DL380 G3-G7.) It does have a high idle power usage (due to the more powerful components, no doubt) and the fan unfortunately is not easy enough for me to replace.

matt_garman wrote:you might want to check out a company called U-NAS, who make small DIY NAS enclosures. I have two systems built with the NSC-800, which holds 8 3.5" drives. They also make an NSC-400, which holds four drives.
The NSC-400 would definitely be an option for me, if it is quiet or can easily be made quiet. Does anyone know of a review of or has experience with the noise production of the NSC-400?


My updated part list:
  • Motherboard: Super Micro X9SBAA-F (4x SATA3, 2x USB3, 2x Gbe LAN, IPMI, 217 EUR excluding shipping) (http://www.supermicro.com/products/moth ... SBAA-F.cfm)
  • Memory: Kingston KVR13LSE9/2 (DDR3 ECC SO-DIMM 1333 MHz, 1.35 V, 2 GB, 33 EUR excluding shipping)
  • Power supply: Mini-box picoPSU 80 + 60W Adapter (52 EUR excluding shipping)
  • Hard drives: 4x Western Digital RED WD40EFRX (148 EUR per drive excluding shipping)
  • Case: three options at the moment:
    • Fractal Node 304 (66 EUR)
    • Lian-Li PC-Q18 (120 EUR) (perhaps even with the fans off, since the SPCR review indicates it's still cool enough then)
    • Lian-Li PC-Q35 with the HDDs suspended in the 5.25" bays (128 EUR for the case and 4x ? EUR for the suspensions)
Price excluding drives: 368 EUR
Price including drives (16 TB): 960 EUR
Estimated power consumption: 14 Watts idle (disks parked); 42 Watts loaded (CPU + HDDs).
Given my price of 0.22 EUR per kWh, assuming 95% idle and an optimistic six years of service, the TCO over six years would be 1137 EUR. (That's excluding any replacement drives when one of them fails, though. I suspect I'll need to replace at least one drive at some point.)
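For anyone wanting to check my arithmetic, this is roughly the calculation behind that figure (same assumptions: 95% of the time at 14 Watts, 5% at 42 Watts, 0.22 EUR per kWh, 960 EUR of hardware; it lands at about 1138 EUR, matching the number above bar rounding):

    awk 'BEGIN {
        avg_w = 0.95 * 14 + 0.05 * 42;          # average draw in watts
        kwh   = avg_w * 24 * 365 * 6 / 1000;    # energy over six years in kWh
        printf "electricity:  %.0f EUR\n", kwh * 0.22;
        printf "six-year TCO: %.0f EUR\n", 960 + kwh * 0.22;
    }'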

Re: NAS build: part list feedback?

Posted: Sat Mar 08, 2014 2:41 am
by HFat
TomasC wrote:I am considering adding a UPS
Unless the reliability of your electricity supply is remarkable for a home environment and considering that you're spending 1000 euros on a pico-powered server... yes, it would be prudent to consider it!
It would also be prudent to pick a very good brick that can deliver a good bit more than 60W. Do you know for a fact your drives consume so little power when spinning up? Your idle/parked estimate is wrong* (see SPCR's review). Even if they do, you might want to hook up peripherals (including USB-powered stuff) or switch to different drives down the road...

*assuming you're talking about parking the heads, not spinning down the drives (which is called standby or sleep and which, depending on your usage, would make the server so unresponsive that putting the whole server in standby might make more sense)

Re: NAS build: part list feedback?

Posted: Sun Mar 09, 2014 2:51 pm
by TomasC
Thank you for your replies, HFat. You prevented me from making at least one crucial mistake :-)

HFat wrote:
TomasC wrote:I am considering adding a UPS
Unless the reliability of your electricity supply is remarkable for a home environment and considering that you're spending 1000 euros on a pico-powered server... yes, it would be prudent to consider it!
I agree :-) The apartment hasn't had a single power outage in the last two years, but that doesn't mean it can't happen or that there are no other power problems (spikes, ...). Any recommendations for a UPS to power this NAS and a regular cable router/modem just long enough for the NAS to shut down would be greatly appreciated.

HFat wrote:Do you know for a fact your drives consume so little power when spinning up?
I had actually only counted the operational power and forgot about the extra power requirement at startup :?. I have found a review on StorageReview.com measuring the power usage at startup of the 3 TB variant of the WD Red. I'm not sure how to interpret the data, though. Their graph indicates 13.71W, but I don't know if that is cumulative (12V + 5V) or only the 12V part (in which case the cumulative would be 15.49W). According to the data sheet from WD, the startup 12V power draw is 1.75A +/- 10%. That's 23.1W worst case and 18.9W best case, both numbers higher than what StorageReview.com measured.

Depending on which number to believe, the maximum power usage of the parts I have listed now would be 107W or 68W (quite the difference).

HFat wrote:I wouldn't use a pico for a NAS, especially if you want it to be reliable but maybe that's just me.
Is a PicoPSU unreliable for 24/7 applications? Or is it the external power brick that is the problem? Or is it fine as long as the power brick is a good one? If the PicoPSU is not suited for this build, what would be?

HFat wrote:It would also be prudent to pick a very good brick
Regarding power bricks, I have been looking for some reviews but haven't found much except that an efficiency rating of "V" is best. I think I read something about another rating (using letters up to "K" instead of Roman numerals), but I can't seem to find it again. The SPCR review of the PicoPSU is very positive about the EDac 120W adapter. Its fan shouldn't be a problem for me since it only starts at 90W power draw, which I'll almost never reach. Are there better ones on the market since that review from 2006?

HFat wrote:Your idle/parked estimate is wrong* (see SPCR's review). Even if they do, you might want to hook up peripherals (including USB-powered stuff) or switch to different drives down the road...

*assuming you're talking about parking the heads, not spinning down the drives (which is called standby or sleep and which, depending on your usage, would make the server so unresponsive that putting the whole server in standby might make more sense)
I used the standby numbers. The NAS would be active for only one hour per day (if that), synchronizing with other devices for backup purposes or retrieving some old files. I don't mind it if the NAS takes a long time to become available (i.e. coming out of standby and/or spinning up the drives). Placing the drives or even the entire NAS in standby for the rest of the time would be great, if I can wake it back up remotely on the rare occasion I'd need to access a file remotely. I've looked into wake-on-LAN but I read that it doesn't always work well (more specifically that Supermicro boards ignore wake-on-LAN packets if IPMI is active).


Thanks in any case for reading and all the comments so far!

Re: NAS build: part list feedback?

Posted: Sun Mar 09, 2014 4:03 pm
by HFat
TomasC wrote:Any recommendations for a UPS to power this NAS and a regular cable router/modem just long enough for the NAS to shut down would be greatly appreciated.
Any decent UPS I know about will have batteries which are totally overkill to shut down such an efficient NAS.
Ideally you want a UPS which doesn't simply wait until you have an outage to kick in.
The main problem, I think, is that you want to save power... and how much power will a good UPS burn? I'm not knowledgeable about that since any time I needed the reliability, very high overall power efficiency wasn't a requirement.
At home I have a much cheaper server than the one you're buying and don't use a UPS with it. It's not so much the cost: I could of course buy a decent one on the used market. But I'm concerned about the power draw.
TomasC wrote:measuring the power usage at startup of the 3 TB variant of the WD Red
Not all measurements are done properly and accurately. And more platters means more power is needed when spinning up.
TomasC wrote:Is a PicoPSU unreliable for 24/7 applications? Or is it the external power brick that is the problem? Or is it fine as long as the power brick is a good one? If the PicoPSU is not suited for this build, what would be?
24/7 only makes it more likely your data will be vulnerable whenever something unforeseen happens.
I'm an ignorant coward, so for this kind of application I like to have either a reputable server power supply or a largish consumer power supply that packs the hardware necessary for stability and has been tested to behave very well under stress. Something like those Kingwin/Super Flower units, which can be very efficient at low loads, since you're concerned about that. I assume these characteristics will translate into good behaviour in case the building's electricity (or the UPS) acts up or something. But assuming is not recommended... maybe there is someone here who actually has a clue about what really is safe and unsafe.
TomasC wrote:Regarding power bricks, I have been looking for some reviews but haven't found much except that an efficiency rating of "V" is best. I think I read something about another rating (using letters up to "K" instead of Roman numerals), but I can't seem to find it again. The SPCR review of the PicoPSU is very positive about the EDac 120W adapter. Its fan shouldn't be a problem for me since it only starts at 90W power draw, which I'll almost never reach. Are there better ones on the market since that review from 2006?
I assume there are better ones, but I cannot give you advice about specific bricks. Maybe you should ask serious vendors such as Logicsupply.
Don't forget there's not only efficiency to consider but what the brick would output if something happened with the electricity supply like a short drop or some kind of surge.
TomasC wrote:Placing the drives or even the entire NAS in standby for the rest of the time would be great, if I can wake it back up remotely on the rare occasion I'd need to access a file remotely. I've looked into wake-on-LAN but I read that it doesn't always work well (more specifically that Supermicro boards ignore wake-on-LAN packets if IPMI is active).
I don't care for IPMI and stuff and WoL has always worked for me. I occasionally had to tweak stuff so that it would work but in the end it was reliable.
Putting drives in standby also works well provided your software doesn't constantly access the drives (your implementation of RAID might create problems however) but is messier to set up. On Linux, you can tweak stuff to prevent needless drive access (this is what I've done on my home server). Or you can set up the partitions which are constantly accessed on a flash or RAM drive.
The advantage of system standby is a much lower power consumption (at least if your PSU isn't terrible) and the disadvantage is that you need to wake up the server explicitly with a timer or WoL, while drives are spun up automatically (you'll simply experience delays). This makes system standby preferable for saving power when the users sleep or are away from home (you can configure standby so that it only takes place at certain times of the day/week) and disk standby preferable if you don't care about a few watts or if you access the server with dumb devices which cannot be configured to wake it up.
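To illustrate the Linux side of that (the drive, interface and MAC address below are placeholders; hdparm's -S values 241-251 encode multiples of 30 minutes):

    # put a data drive into standby after 60 minutes of inactivity
    hdparm -S 242 /dev/sdb

    # enable wake-on-LAN on the server's NIC (persist this via your distro's network config)
    ethtool -s eth0 wol g

    # from another machine: wake the sleeping server with a magic packet (placeholder MAC)
    etherwake 00:25:90:aa:bb:cc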

Re: NAS build: part list feedback?

Posted: Mon Mar 10, 2014 6:21 am
by matt_garman
TomasC wrote:Any recommendations for a UPS to power this NAS and a regular cable router/modem just long enough for the NAS to shut down would be greatly appreciated.
For something that's this low-powered, any consumer-grade UPS should work. Do a little research to make sure it can tell your system that the power is out (and in turn trigger a graceful shutdown). I use "apcupsd" on Linux.
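If it helps, a minimal /etc/apcupsd/apcupsd.conf for a USB-connected APC unit looks roughly like this (the shutdown thresholds are just examples; tune them to your runtime):

    # /etc/apcupsd/apcupsd.conf (excerpt) -- USB-connected APC UPS
    UPSCABLE usb
    UPSTYPE usb
    # DEVICE is left empty so the USB UPS is auto-detected
    DEVICE
    # shut the machine down at 10% battery or 5 minutes of estimated runtime left
    BATTERYLEVEL 10
    MINUTES 5
    TIMEOUT 0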

TomasC wrote:I had actually only counted the operational power and forgot about the extra power requirement at startup :?. I have found a review on StorageReview.com measuring the power usage at startup of the 3 TB variant of the WD Red. I'm not sure how to interpret the data, though. Their graph indicates 13.71W, but I don't know if that is cumulative (12V + 5V) or only the 12V part (in which case the cumulative would be 15.49W). According to the data sheet from WD, the startup 12V power draw is 1.75A +/- 10%. That's 23.1W worst case and 18.9W best case, both numbers higher than what StorageReview.com measured.

Depending on which number to believe, the maximum power usage of the parts I have listed now would be 107W or 68W (quite the difference).
When in doubt, err on the side of the worst-case scenario. Having a bigger PSU than you need does mean you'll lose some efficiency for a server like yours that uses so little power. On the other hand, if you regularly power it down or hibernate it, then efficiency matters less. It's like shopping for a fuel-efficient car: the less you drive, the less fuel efficiency matters.

Also, you want to build in a bit of margin anyway: inevitably, you'll be replacing drives over the years, and the replacements may not have the exact same power characteristics. E.g., you may go for more dense drives (more platters), which would only increase the spin-up power demands.

Another strategy is to look for a HBA that supports staggered spin-up for your drives. This way your drives power up sequentially, rather than in parallel, and the max power draw is equal to that of your single highest-power-draw drive.



TomasC wrote:Is a PicoPSU unreliable for 24/7 applications? Or is it the external power brick that is the problem? Or is it fine as long as the power brick is a good one? If the PicoPSU is not suited for this build, what would be?

Regarding power bricks, I have been looking for some reviews but haven't found much except that an efficiency rating of "V" is best. I think I read something about another rating (using letters up to "K" instead of Roman numerals), but I can't seem to find it again. The SPCR review of the PicoPSU is very positive about the EDac 120W adapter. Its fan shouldn't be a problem for me since it only starts at 90W power draw, which I'll almost never reach. Are there better ones on the market since that review from 2006?
In the U-NAS NSC-800 I have, I use the Seasonic SS-300M1U. Plenty of headroom for parallel drive spin-up, and seems to be reasonably efficient at lower power. And at less than 50% load, the fan doesn't spin.

I use a PicoPSU for 24/7 operation for my pfSense box... it's an Atom-based system, with a single SSD, so fairly static loading and lower power than a NAS... but so far so good, no issues. But one person's experience doesn't make for useful reliability statistics. :)

As for power bricks, if you simply go with something branded, you'll probably be OK. Meanwell is a brand generally known for making quality power supplies (but buy from a reputable supplier, as there are a lot of fake Meanwell PSUs, particularly on ebay). Avoid the ultra-cheap, no-name stuff. If you want the absolute best, look at medical grade PSUs... but be prepared to abuse your wallet. :)
TomasC wrote:Placing the drives or even the entire NAS in standby for the rest of the time would be great, if I can wake it back up remotely on the rare occasion I'd need to access a file remotely. I've looked into wake-on-LAN but I read that it doesn't always work well (more specifically that Supermicro boards ignore wake-on-LAN packets if IPMI is active).
If you have a working IPMI system, wake-on-lan is redundant. You can wake the system manually with IPMI from a web GUI, or even from the commandline using "ipmitool" (on Linux, although I'm sure other OSes have something similar). Something like "ipmitool power on" or similar. With that, you can use a separate system to schedule system wakeups if your BIOS doesn't natively support such timers.
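For example, something along these lines over the network (the BMC address and credentials are placeholders, obviously):

    # power the board on/off remotely through its IPMI BMC
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis power on
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis power status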

Re: NAS build: part list feedback?

Posted: Mon Mar 10, 2014 7:06 am
by xan_user
FWIW, I've been running picos 24/7 for years... they were mostly designed to run industrial computers, POS terminals, advertising kiosks, etc., so I don't see why they would not do the job in a NAS.

Also, have any of you seen this? http://www.logicsupply.com/components/p ... /openups2/ Sure, it's a little pricey... but it's self-contained and fits in a 2.5" bay! Batteries are about $7 each. (It's made by the same company as the pico...)

Re: NAS build: part list feedback?

Posted: Wed Mar 12, 2014 8:31 am
by fwki
I use these Winmates in my PC-based touchscreen jukeboxes:
http://www.ebay.ca/itm/130W-mini-ITX-AT ... 3544wt_945

Unlike most picoPSUs, they do not just pass through the brick's DC supply; these are real PSUs that generate the regulated supply voltages for the PC. They are often used for medical applications. SPCR review:
http://www.silentpcreview.com/Winmate_DD-24AX.

I have had five Winmate jukebox systems in use for 4-5 years now, with one system using its Winmate to also power a 12" touchscreen monitor, and no failures. I use Liteon 19V bricks with them.

Re: NAS build: part list feedback?

Posted: Fri Mar 14, 2014 11:50 am
by qualdoth
You mentioned in your first post that you're looking to build OR buy a NAS. However, your post is largely focused on building one. While I like tinkering and building machines as much as the next guy, have you actually looked into off-the-shelf NAS units? There are quite a large number that would meet your technical specifications, and then some, at a price point similar to or lower than the total cost of your components. Are commercial options inadequate for you with respect to their noise profile?

I know for me personally, the last time around I needed a NAS, I simply went out and bought a pro-sumer level appliance (QNAP TS419-P).

Re: NAS build: part list feedback?

Posted: Mon Apr 21, 2014 4:05 am
by TomasC
I have ordered the following parts:
  • PicoPSU-120 + adapter @ €69.95
  • Fractal Design Node 304 Black @ €64.99
  • Super Micro MBD-X9SBAA-F-O @ €219.72
  • Kingston 4GB ECC SODIMM DDR3L-1333 (KVR13LSE9S8/4) @ €39.78
  • Hard drives:
    • 2x Hitachi Deskstar NAS 4000GB (H3IKNAS40003272SE) @ €147.91
    • 1x Seagate NAS HDD 4TB (ST4000VN000) @ €155
    • 1x WD Red 4TB (WD40EFRX) @ €152.90
I split the hard drives over multiple brands to minimize 'bad batch' problems.

This gives a total of €394.44 for the NAS itself (excluding hard drives). That's comparable to some four-bay NAS boxes from Synology and QNAP, but it will of course require more work from me (assembly, software, ...).

We'll see how far I'll get with these parts :-)

I want to thank everyone who gave me feedback in this topic, preventing me from buying the wrong case, wrong PSU, ... Thanks, people!

I'm not planning to create a build log, but if anyone is interested (pictures, benchmarks, measurements (power, temperature, ...)), let me know!