Parts list critique for long time geek, first time builder


Plekto
Posts: 398
Joined: Tue Feb 19, 2008 2:08 pm
Location: Los Angeles

Re: Parts list critique for long time geek, first time build

Post by Plekto » Thu Mar 10, 2011 9:57 pm

SSDs wear out from writes, so if the system files are only being read, and your main applications are as well, then the drive will last nearly forever and you can pretty much ignore the traditional hard drive issues of crashes and bad sectors.

But you cannot (normally) put them in RAID, and you need a separate drive or ramdrive to handle all of the cache files and the swap file (virtual memory) - or else the SSD will corrupt itself in short order, slow to a crawl, and start throwing errors after a few months.

quest_for_silence
Posts: 5275
Joined: Wed Jun 13, 2007 10:12 am
Location: ITALY

Re: Parts list critique for long time geek, first time build

Post by quest_for_silence » Thu Mar 10, 2011 10:32 pm

Thanks for further clarifying, Plekto: unfortunately, I still don't see any link between your explanation and the alleged incompatibility of SSDs with the ASUS P67 BIOS/UEFI reported by cyreb7.

ces
Posts: 3395
Joined: Thu Feb 04, 2010 6:06 pm
Location: US

Re: Parts list critique for long time geek, first time build

Post by ces » Fri Mar 11, 2011 5:11 am

Plekto wrote:SSDs wear out from writes, so if the system files are only being read, and your main applications are as well, then the drive will last nearly forever and you can pretty much ignore the traditional hard drive issues of crashes and bad sectors.
1. Are you sure about that?

2. Doesn't the system always need to write the time and date it last accessed a file somewhere?

ces
Posts: 3395
Joined: Thu Feb 04, 2010 6:06 pm
Location: US

Re: Parts list critique for long time geek, first time build

Post by ces » Fri Mar 11, 2011 5:12 am

Plekto wrote:But you cannot (normally) put them in RAID, and you need a separate drive or ramdrive to handle all of the cache files and the swap file (virtual memory) - or else the SSD will corrupt itself in short order, slow to a crawl, and start throwing errors after a few months.
I am not certain I understand what you are saying. Could you explain it in more detail?

Plekto
Posts: 398
Joined: Tue Feb 19, 2008 2:08 pm
Location: Los Angeles

Re: Parts list critique for long time geek, first time build

Post by Plekto » Fri Mar 11, 2011 12:14 pm

SSDs based on MLC (multi-level cell) flash are known to suffer from fragmentation at a much faster rate than a typical hard drive. Enterprise-level SSDs that support TRIM and use single-level cells (SLC) exist, but they are incredibly pricey. And you absolutely have to be running Windows 7 - Vista and XP don't support TRIM. If the drive or the OS doesn't support TRIM, it's just as bad. This comes from a mismatch between what the motherboard and OS expect of a conventional hard drive and the SSD's internal page and block sizes, and how the drive has to cope with that difference - it makes a typical SSD very, very unhappy. Tests have shown that without TRIM support, even normal SSDs used as a boot drive (not in a RAID array) will also suffer and fragment badly.
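
For anyone who wants to check the OS side of that on their own machine: Windows 7 ships an fsutil query that reports whether TRIM ("delete notifications") is enabled. This is just a minimal sketch assuming a Windows box with Python on it - you can also run the fsutil command by itself in a command prompt.

Code: Select all
# Minimal sketch: ask Windows whether TRIM ("delete notify") is enabled.
# "fsutil behavior query DisableDeleteNotify" prints "DisableDeleteNotify = 0"
# when the OS is sending TRIM commands to the drive.
import subprocess

out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
).stdout
print(out.strip())
print("TRIM enabled" if "= 0" in out else "TRIM disabled (or not reported)")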

The question is: why does this happen? It's because of the OS, the swap file (virtual memory), and the various cache files. These quickly wear down memory blocks; the SSD is pretty good at dealing with that through wear-leveling, but it causes slow-downs nonetheless. A typical MLC drive can handle 2,000-10,000 writes per cell before it goes bad. With an OS drive pushing upwards of 15-20 million writes a month, the wear-leveling and error-correction routines basically go nuts after a while, and serious fragmentation that cannot be fixed sets in.
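
To illustrate what wear-leveling is doing under the hood, here's a toy sketch - not any vendor's actual firmware, and the block count and endurance figure are just the assumptions from the paragraph above, scaled down so it runs in a couple of seconds:

Code: Select all
# Toy wear-leveller: every rewrite of the "hot" data (swap, cache) goes to the
# physical block with the lowest erase count, so no single block takes all the wear.
BLOCKS = 200               # physical erase blocks in this toy drive
ENDURANCE = 2000           # assumed erase cycles per MLC block (low end)
erase_counts = [0] * BLOCKS

def rewrite_hot_data():
    victim = min(range(BLOCKS), key=lambda b: erase_counts[b])
    erase_counts[victim] += 1   # erase + reprogram that block

for _ in range(100_000):        # 100,000 rewrites of the same logical data
    rewrite_hot_data()

print("worst block erase count:", max(erase_counts))        # ~500, not 100,000
print("endurance used:", max(erase_counts) / ENDURANCE)     # ~25% of 2,000 cycles

The point being: wear-leveling spreads the damage around, but it doesn't make it go away - every rewrite still burns an erase cycle somewhere.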

http://www.computerworld.com/s/article/ ... me=Storage
The most interesting part of this is where Intel itself assumes that its MLC-based SSDs are good for about 15TB of random writes under optimal circumstances, with an MTBF of only 5 years. As we all know, MTBF figures tend to be inflated by a very large amount, and they assume the drive won't be used as an OS drive. Most cheap SSDs these days are closer to 2,000 cycles per cell than 10,000. That's still gobs of headroom for a data or storage drive - but not for a boot drive.
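
To put that 15TB figure in perspective, here's a back-of-the-envelope calculation. The GB-per-day values are my own assumptions for a Windows boot/swap drive, not anything from the article:

Code: Select all
# How long 15 TB of rated random writes lasts at a few assumed daily write volumes.
RATED_WRITES_TB = 15

for gb_per_day in (5, 10, 20):            # assumed host writes per day
    days = RATED_WRITES_TB * 1024 / gb_per_day
    print(f"{gb_per_day:>2} GB/day -> {days:4.0f} days (~{days / 365:.1f} years)")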

If the drive is having TRIM issues, it's going to cause itself 5-10x the wear internally trying to re-allocate and fix all of the blocks. You don't know what you're getting, though, as most of the cheaper ~$100 SSDs (like the one the OP wants to put in his system) are likely to be cheap junk rather than enterprise-level parts backed by a good warranty. It's unfortunately common to hear about people with cheap SSDs having serious issues in as little as 6 months to a year. The flash comes from who knows where in China, and they're certainly not putting their best product out at such low prices.

MLC is meant for large files and long-term storage only, where it's great and lasts nearly forever. SLC drives like the Intel X25-E are very pricey. The cheapskate alternative is to go hybrid and move the heavy-wear, non-critical cache files onto a small laptop drive that you stick in a corner of the case and forget about.

http://forums.anandtech.com/showthread.php?t=2125189
Here is the exact drive he's considering buying, on Windows 7 with TRIM enabled. It is still suffering badly because something is wrong with the drive not handling TRIM properly (I don't know why - it's just having issues complying) - and as the fragmentation worsens, the drive will eat itself faster and faster trying to deal with it. That slow-down took 12 days, with everything supposedly working correctly to prevent it.

In three days my PC has written to the hard drive roughly 1.5 million times. That's a fairly low number, as I've not been gaming much. Figure 15 million writes a month, conservatively. That's individual write events, not data volume - a bare minimum count of the occasions on which an SSD would be required to write data.

MLC flash typically has 512KB or 1024KB erase blocks, so that 64GB model (cheaper drives almost always use 1024KB blocks) probably has a total of 65,536 blocks. That means roughly 288 full writes of the drive in a month, against an endurance of ~2,000 cycles per block - about 7 months. It might last longer than that, but by that point it's starting to lose entire sets of blocks (the M in MLC rears its head in a bad way when cells start actually dying). You get what you pay for, and for $100, it's not a lot.
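
For anyone who wants to poke at those numbers, here is the same estimate as a sketch. The write-amplification factor is my own assumption (it's roughly what's needed to land near the ~288 full writes a month above); the other figures are the ones from this post:

Code: Select all
# Rough endurance estimate for a cheap 64GB MLC drive used as an OS drive.
DRIVE_GB = 64
BLOCK_KB = 1024                  # cheaper MLC drives: ~1MB erase blocks
CYCLES_PER_BLOCK = 2000          # low-end MLC endurance
WRITES_PER_MONTH = 15_000_000    # ~1.5M write events in 3 days, extrapolated
WRITE_AMPLIFICATION = 1.25       # assumed firmware overhead (GC, TRIM problems)

blocks = DRIVE_GB * 1024 * 1024 // BLOCK_KB                      # 65,536 blocks
cycles_per_block_month = WRITES_PER_MONTH * WRITE_AMPLIFICATION / blocks
print(f"{blocks} blocks, ~{cycles_per_block_month:.0f} erase cycles per block per month")
print(f"~{CYCLES_PER_BLOCK / cycles_per_block_month:.1f} months to the {CYCLES_PER_BLOCK}-cycle mark")

Obviously that treats every write event as costing an erase cycle somewhere, which is the pessimistic reading; coalescing in the drive's cache would stretch the number out somewhat.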

SLC, yes, I'd run it without worries. But a 64GB SLC drive is $600 last I checked.
