[*]What do you think about the part list in general?
Seems you have done good research and chosen very good parts; I like the case. That said, on going fully fanless I'm not so sure. I used to have an Acer AH342 (WHSv1) with an Intel Atom D510 (dual core, 1.66 GHz), and it idled close to 50 °C and under load hit around 65-70 °C depending on ambient temperature, and that prebuilt server had a 120 mm fan on it, so fully fanless might reach some high temps. Then again, Atoms are built to withstand a lot of heat; on a lot of motherboards they run fanless. I'll leave you a pic of the temps on my setup.
Try to find out for sure whether you will be able to mount the heat-pipe setup; in my Atom setup the CPU was soldered, with no way of mounting another heatsink on it.
Personally I don't fear fans. While they are one of the biggest sources of noise, if you choose them correctly you can in most cases lower them to inaudible levels, and it makes a big difference: even some airflow helps a lot to keep heat from getting trapped. If you were to consider fans and prebuilts, I like Synology a lot; their OS is pretty nice, with lots of extras and apps. For example, a 4-bay NAS: Synology DS412+ DiskStation (Diskless) 4 Bay Desktop NAS Enclosure. They can run RAID and usually have very low-powered CPUs. In the past they used small fans that were noisy, but they have moved to 92 mm fans; I have no experience with these new setups, so I can't say for sure they are quiet.
If you still want to pursue fanless, and if you were to go with a 2-HDD setup, there is one recently introduced to the market: QNAP HS-210 Silent/Fanless Stylish Set-Top Network Attached Storage.
Either way, if you go with your well-planned build, be sure to share how it went, some pics and especially temps; I'm really interested in what the HDD and CPU temps will be fully fanless.
[*]I like the motherboard because it consumes, excluding disk drives, only 12 Watt idle and features USB 3. Are there much lower power or cheaper parts available that can provide what I want? I've considered ARM but can't seem to find boards with enough SATA ports or USB 3 support that are available.
Atoms are kinda tricky; at least in the two setups I've played with, consumption was not as low as I expected. My Acer server, Atom based, idled above 25 W, and if I'm not mistaken it was around 40 W under load. But that was a long time ago, and the Atom you are choosing is newer, so it might be different nowadays.
[*]Would you place the system OS on the RAID array or on a separate drive? Why?
It really comes down to what OS you will use. In most servers there is not much difference, especially if the server is always up: SSDs help a lot with booting and powering off, but once everything is loaded the gains are not that dramatic, and it also depends on what the server will do. Depending on the OS there might be benefits, though. I have read that certain OSes use separate HDDs/SSDs as caches; in others, real-time parity is done on the fly, and people lose a lot of time moving files because the OS computes parity at the same time. Cache drives help this process since you don't really move files onto the array but onto a separate disk, and the server migrates the data over later. I did use an SSD on my server, but it was a leftover SATA II drive; I had the spare SATA port and I didn't want to include the OS drive in the pool, so I went with it. There is not much of a gain for me besides starting up and shutting down, and even then the HBAs take so long to boot that it doesn't feel like it has an SSD in it.
[*]I am planning to run software RAID since there are motherboards with enough SATA ports and software RAID is probably cheaper (in terms of hardware and power usage). Would I be better off with a RAID card? Which would you recommend?
This is a tricky question. We come from a culture where software RAID sucked and everyone swore hardware RAID was the thing, but lately there have been so many advancements in filesystems and software RAID that many have moved away from hardware RAID. I'm sure you can Google the pros and cons of each, so I suggest you do that. RAID has to be seen as uptime for your server or information, not as a real backup. RAID as-is is very dangerous: depending on the parity you have, if more drives fail than your parity covers, or if another HDD fails during a rebuild, you lose all your data. For these reasons a lot of people have moved away from standard software and hardware RAID to setups that are more resilient and dependable.

The approach Microsoft took was very inefficient but still has its pros: they basically went with duplication, keeping the same info on multiple drives. Their setup is meant to let HDDs fail while the data stays accessible, as there is no real array but a pool of drives; and if your data is important you should back it up externally anyway. But Windows is known for other issues in servers, for example being very prone to silent data corruption.

There are others like unRAID that have a weird RAID 4-like setup. It's not a real array like RAID 5/6 but parity based, with all drives independent, so if more HDDs fail than the parity covers you still have access to your information; you only lose what was on the failed drives. The downside is that it's very slow at writing, though as fast as your HDDs at reading; there is no speed increase like in RAID 5/6. But it's a very simple OS for storage and has grown a lot: it now has many supported addons, and the community is great and helps a lot with hardware and even setting up the server.
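To make the parity idea concrete, here is a minimal sketch of how single-parity schemes (the idea behind RAID 4/5 and unRAID-style parity) rebuild a lost drive. The drive contents are made-up byte strings, purely for illustration:

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Parity is the XOR of all data drives.
data_drives = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data_drives)

# One drive fails: XOR of the survivors plus parity rebuilds it exactly.
rebuilt = xor_blocks([data_drives[0], data_drives[2], parity])
assert rebuilt == data_drives[1]

# If two drives fail, the single parity block is not enough information
# to recover either one -- which is why losing more drives than your
# parity count means losing data.
```

That last point is the whole reason RAID is uptime, not backup: the math only covers as many failures as you have parity drives.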
There are more complex setups, like ZFS, that offer multiple software-based RAID layouts with a lot of failsafes to avoid corruption and stay very reliable. But it's not as simple: you have to read a lot, and to me it wasn't worth it, as I don't care much about the info I keep on the server.
Even prebuilts like Synology support multiple RAID setups, including what they call Hybrid RAID, where you can grow the array as you add more drives. There are lots of other options, like FreeNAS/NAS4Free, Amahi, SnapRAID, etc. None is better than another; they are simply different, and all have their pros and cons, but it's worth checking them all to see what fits best with what you are building.
Just one last thing about the RAID card: it will consume 5-15 W, so if you are pursuing a low-power setup this works against you. Also, a lot of RAID cards get extremely hot, and in a fanless environment you would probably cook them fast, so I would go with software... which one is up to you; it really comes down to what you will use it for.
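To put the wattage point in rough numbers against your sub-20 W target: a quick back-of-the-envelope sketch. The drive and card figures here are assumptions for illustration (typical 3.5" idle draw, midpoint of the 5-15 W card range above), not measurements:

```python
BOARD_IDLE_W = 12.0   # the ~12 W idle figure from your motherboard choice
HDD_IDLE_W = 4.0      # assumption: a typical 3.5" drive idles around 3-5 W
RAID_CARD_W = 10.0    # midpoint of the 5-15 W a hardware RAID card can draw

def idle_budget(num_drives, with_raid_card=False):
    """Estimated total idle draw in watts for the whole box."""
    total = BOARD_IDLE_W + num_drives * HDD_IDLE_W
    if with_raid_card:
        total += RAID_CARD_W
    return total

print(idle_budget(2))                       # 20.0 -- right at the target
print(idle_budget(2, with_raid_card=True))  # 30.0 -- the card alone blows it
```

So even with optimistic drive numbers, a hardware RAID card eats half your power budget by itself; software RAID keeps those watts for spindles.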
[*]The case is by far the most expensive part. Since this will be a very low power setup (I'm hoping for less than 20 Watts with the drives installed and idle), would I be able to use a cheaper case?
I really don't know. Fanless, I doubt it, and it is expensive at 250; there are lots of compact cases that cost less than half that, but with fans. Personally I don't want to discourage you from building it, I really like the looks of the case, but I don't know how it will go fully fanless; I'm interested in your results, though.
Good luck on the build.