Good affordable SSDs?

Silencing hard drives, optical drives and other storage devices

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

m^2
Posts: 146
Joined: Mon Jan 29, 2007 2:12 am
Location: Poland
Contact:

Post by m^2 » Tue Feb 10, 2009 9:44 am

No, wear leveling starts from day zero.
As the name suggests, it's meant to keep all cells equally worn out.

Think of using one cell at a time. It wears out and gets replaced, then another one, and so on. Your drive dies after all the replacement cells plus one are gone, which is maybe 5-10% of the possible lifetime you'd get from wearing all cells equally.
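
To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The cell count, cycle rating and spare-area size are made-up illustration values, not the specs of any real drive.

Code:
# Toy comparison of drive lifetime with and without wear leveling.
# All numbers are illustrative assumptions, not real drive specs.
CELLS = 1_000_000          # total flash cells on a hypothetical drive
CYCLES_PER_CELL = 10_000   # erase/write cycles each cell survives
SPARE_CELLS = 50_000       # spare cells reserved for remapping (5% of the drive)

# Without wear leveling: one "hot" cell absorbs all the writes until it dies
# and a spare takes over, so only (spares + 1) cells ever get used up.
no_leveling_writes = (SPARE_CELLS + 1) * CYCLES_PER_CELL

# With perfect wear leveling: every cell shares the load equally.
perfect_leveling_writes = CELLS * CYCLES_PER_CELL

print(f"writes survived without leveling: {no_leveling_writes:,}")
print(f"writes survived with leveling   : {perfect_leveling_writes:,}")
print(f"ratio: {no_leveling_writes / perfect_leveling_writes:.1%}")

With 5% spare area you land at roughly 5% of the fully-leveled lifetime, which is where the 5-10% figure comes from.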

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Tue Feb 10, 2009 11:43 am

Plekto wrote:Intel's option is very expensive. The 1 SATA port version of this can be had for a paltry $250 by comparison. 16 GB of memory is about $150 these days as well, bringing it to about $400. Not entirely unreasonable, IMO, considering the benefits.
So $400 for 16GB of storage and you're saying Intel's SSD is expensive?

You can get the 80GB Intel X25-M for $370 nowadays. 30 bucks less and 64GB more storage, plus I never have to worry about batteries.

Current price per GB is under $5. You're going to be hard pressed to get a ramdrive that cheap, not to mention with that much capacity, and even if you do get close you're still anchored down by that pesky battery requirement.

Plekto
Posts: 398
Joined: Tue Feb 19, 2008 2:08 pm
Location: Los Angeles

Post by Plekto » Tue Feb 10, 2009 1:32 pm

16GB might not be as much as 80GB, but for a fast boot device, most people don't load it up with tons of data as it is, i.e. I have 5-10GB of apps and a couple of gigs worth of OS... Technically the device will hold up to 64GB - I just chose 16GB as that's often more than enough for a boot drive. 32GB is about $120-140 more (the price of the RAM). The battery comes with the ramdrive, though, so that's not a factor, either. I thought that the replacement battery was something you had to buy/was optional.

The real issue is life span and speed. NAND flash isn't as robust as physical memory. Also, write speeds are much slower than the competition. I mentioned it only because it is the closest real competitor in price that seems to be any good. But it's still plagued with problems. Not for lack of Intel trying, either. It's just that SSDs based on flash memory have issues.

The Intel X25-E Extreme, which is closer in performance, is ~$450 shipped for 32GB, though. That brings the X25-E and the ANS-9010 with 32GB ($250 plus memory) to a much closer price difference: ~$450 vs ~$500 - I call it a wash, since RAM prices keep dropping and used DDR2 RAM is everywhere for even less. The X25-M is pretty gutless by comparison.

In fact, the difference between a memory-based ramdisk and the ANS-9010 is virtually nonexistent if you use a real SATA2 card instead of the on-board controller(s). This therefore might be a cheap way to get around the 8-16GB barrier with 64-bit OSes (all motherboards come with only 4 memory slots...). These could function as physical RAM. Given enough money, you could stuff a small cabinet full of these in a giant RAID array for upwards of 100GB of physical RAM (well, technically 32-64GB on the motherboard and the rest as a giant swap file at DDR2 speeds).

Is it cheap? No. But it's certainly a world less expensive than a RAMSAN. And for people like myself, cheap enough to splurge on as well. I'll choose real RAM over flash-type drives any day.

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Tue Feb 10, 2009 2:49 pm

Plekto wrote: The real issue is life span.
I hear this issue tossed around in forums, but have yet to see any proof of it. Link something about a modern SSD like the one from Intel dying in less than 5 years' worth of reads/writes. I'm sure there is some way you can calculate the amount of reads/writes a typical computer makes in 5 years, and then accelerate that to predict actual lifespan.

It's like when people go to the fan forum and say sleeve-bearing fans are going to wear out sooner than ball-bearing fans. Sure, "technically" they will, but if the difference is 20 years or 10 years it's a non-issue either way.

Not only are ramdrives not cheap, they're also, by nature, volatile. Which is the biggest hiccup of all. It may be fine for your gaming rig, but when data integrity is critical, like it is for most folks, this isn't even an option.

Plekto
Posts: 398
Joined: Tue Feb 19, 2008 2:08 pm
Location: Los Angeles

Post by Plekto » Tue Feb 10, 2009 6:15 pm

http://www.storagesearch.com/ssdmyths-endurance.html
The problem is that the X25 series uses less stable flash rated at a puny 10,000 cycles. Intel is claiming that their wear-leveling software gives it a wear factor of 1.1 versus the industry average of 20... Somehow I think it's unlikely that Intel is beating the standard by that much.

***from that link***
To get that very high speed the process will have to write big blocks (which also simplifies the calculation).

We assume perfect wear leveling which means we need to fill the disk 2 million times to get to the write endurance limit.

2 million (write endurance) x 64G (capacity) divided by 80M bytes / sec gives the endurance limited life in seconds.

That's a meaningless number - which needs to be divided by seconds in an hour, hours in a day etc etc to give...

The end result is 51 years!
****
But that's for newer flash. NAND-based flash like Intel's is looking at 10,000 x 36G, or 360,000 versus 128 million. 51 years becomes 0.14 years without fancy wear-leveling being utilized. I suspect that there will be reports of dead drives in the 2-3 year time frame. By comparison, I have used systems dating from the 70s with RAM that still passes every check imaginable.
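
If anyone wants to redo that arithmetic with their own numbers, here it is as a few lines of Python. The figures are just the ones quoted above (64GB / 2,000,000 cycles / 80MB/s from the article, 36GB / 10,000 cycles for the cheaper flash), and the write-amplification parameter lets you plug in Intel's claimed 1.1 or the assumed industry figure of 20.

Code:
# Reproduce the storagesearch.com endurance estimate and the pessimistic
# variant above. Parameters are the figures quoted in this thread; adjust freely.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def endurance_years(capacity_gb, cycles, write_mb_per_s, write_amplification=1.0):
    """Years of continuous writing before the cycle rating is exhausted,
    assuming perfect wear leveling."""
    total_host_bytes = capacity_gb * 1e9 * cycles / write_amplification
    seconds = total_host_bytes / (write_mb_per_s * 1e6)
    return seconds / SECONDS_PER_YEAR

# storagesearch example: 64 GB drive, 2,000,000-cycle flash, 80 MB/s of writes
print(f"{endurance_years(64, 2_000_000, 80):.0f} years")    # ~51 years

# Same workload, 36 GB of 10,000-cycle flash
print(f"{endurance_years(36, 10_000, 80):.2f} years")       # ~0.14 years

# Intel's claimed write amplification of 1.1 vs. the assumed industry figure of 20
print(f"{endurance_years(36, 10_000, 80, 1.1):.2f} years")
print(f"{endurance_years(36, 10_000, 80, 20):.3f} years")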

P.S. - The automatic CF backup feature is why this ramdrive is a viable option, IMO.


EDIT:
http://www.infostor.com/article_display ... t/s-1.html
Found it. A real industry article. The X25 uses multi-layer NAND.

Quote:
"But why only transient data? The answer requires a more discriminating look at NAND technology. NAND uses high voltage (20V) to perform program/write operations and occasionally this high voltage shorts out a NAND die, resulting in a catastrophic write short. One NAND die is about 8Gb, or 1GB, of data and losing this much data is similar to losing a disk drive.

One drive vendor determined that some manufacturers’ NAND chips had a catastrophic write short rate of 4% defects per million (DPM) dies per year. At that rate, a NAND die has an MTBF of approximately 219,000 hours. However, each 128GB SSD flash drive typically has 200 8Gb NAND dies and thus is expected to experience a die failure approximately every 45 days. Even at a 2% DPM failure rate, SSD drives using these NAND chips are expected to experience a chip failure every 90 days..."
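
The arithmetic in that quote is easy to sanity-check. Reading the 4% figure as a 4% annual per-die failure rate (my assumption, but it's the only reading that reproduces their 219,000-hour MTBF), you get:

Code:
# Sanity-check the die-failure arithmetic quoted above.
HOURS_PER_YEAR = 8760

annual_die_failure_rate = 0.04   # 4% of dies fail per year (as read from the article)
dies_per_drive = 200             # 200 x 8Gb dies in the 128GB drive described

die_mtbf_hours = HOURS_PER_YEAR / annual_die_failure_rate
drive_mtbf_hours = die_mtbf_hours / dies_per_drive

print(f"per-die MTBF  : {die_mtbf_hours:,.0f} hours")    # ~219,000 hours
print(f"per-drive MTBF: {drive_mtbf_hours:,.0f} hours "
      f"(~{drive_mtbf_hours / 24:.0f} days between die failures)")  # ~45 days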

Obviously Intel is lying and relying on most customers not keeping the things for more than a year or two.

DragonMaster
Posts: 13
Joined: Sun Jan 25, 2009 4:14 pm
Location: Canada

Post by DragonMaster » Tue Feb 10, 2009 6:42 pm

(all motherboards come with only 4 memory slots...)
Server boards with 8 and 16 slots are easy to come by. You can get 128GB RAM with some of them.

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Tue Feb 10, 2009 7:01 pm

Plekto wrote: Obviously Intel is lying and relying on most customers not keeping the things for more than a year or two.
I don't see how it's "obvious", since if it were we would be hearing much more from customers and their prematurely failed drives all over the net. So far the biggest complaint I have heard is how people's south bridges aren't fast enough to keep up with their new drives' speed.
Plekto wrote: and relying on most customers not keeping the things for more than a year or two.
And that's why it's got a 3-year warranty on it? If they expected their products to start failing after a year or two, they wouldn't warranty them up to 3 years, or else they'd lose a crapload of money.

The warranty is the most important number when trying to figure out reliability, because this is the number of years the company is willing to put their money where their mouth is.
Last edited by Aris on Wed Feb 11, 2009 8:44 am, edited 1 time in total.

QuietOC
Posts: 1407
Joined: Tue Dec 13, 2005 1:08 pm
Location: Michigan
Contact:

Post by QuietOC » Wed Feb 11, 2009 7:32 am

frostedflakes wrote:In Windows at least, I don't think there's much difference between full and quick format. Both delete the master file table, and then the full format also checks for bad sectors (this is why it takes so much longer). AFAIK, neither overwrites the drive with zeros, which is probably what would be really damaging to an SSD.
Microsoft changed the full format in Vista (and presumably 7) to write zeros.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Wed Feb 11, 2009 12:06 pm

QuietOC wrote:
frostedflakes wrote:In Windows at least, I don't think there's much difference between full and quick format. Both delete the master file table, and then the full format also checks for bad sectors (this is why it takes so much longer). AFAIK, neither overwrites the drive with zeros, which is probably what would be really damaging to an SSD.
Microsoft changed the full format in Vista (and presumably 7) to write zeros.
Solid. I did not know that.

APPLIES TO

* Windows Vista Ultimate
* Windows Vista Home Premium
* Windows Vista Ultimate 64-bit Edition
* Windows Vista Home Premium 64-bit Edition
* Windows Server 2008 Standard
* Windows Server 2008 Enterprise
* Windows Server 2008 for Itanium-Based Systems

Apparently it doesn't apply to Vista Starter, Vista Home Basic, Vista Business?

Even if it does apply, I use Server 2008 on some of my servers already, so I'll have to change my habits.

Before this I'd use the full format in XP/2003 and below to have it warn me about bad sectors. Now I'll have to do a quick format then test for bad sectors with a utility after the space is allocated.

For the paranoid among us, that will slow down a server/PC rollout, as testing for bad sectors separately from the OS install/format process will take an extra boot and some extra steps.

Plekto
Posts: 398
Joined: Tue Feb 19, 2008 2:08 pm
Location: Los Angeles

Post by Plekto » Wed Feb 11, 2009 5:05 pm

Aris wrote:
Plekto wrote: Obviously Intel is lying and relying on most customers not keeping the things for more than a year or two.
I dont see how its "obvious" since if it were we would be hearing much more from customers and their prematurely failed drives all over the net. So far the biggest complaint i have heard is how peoples south bridges arent fast enough to keep up with their new drives' speed.
You'll also note that not a single server room that I know of uses anything other than commercial-grade/enterprise SSDs. If you really want to do a fair comparison, enterprise-level SSDs are what we should be comparing. After all, I *can* pay $10 more for the same drive that is used in a server farm and has a long warranty. And memory lasts for decades. Intel itself doesn't recommend using their consumer-grade SSDs for business and server use. I wonder why...

What does the least expensive enterprise-grade SSD go for? A quick search online gives me about $400 for 16GB, and about $550 for a 32GB one. I'd not trust an SSD to be as reliable as a hard drive unless it was rated to be used like one for critical data. But then again, used RAM is everywhere. I can get 4GB DDR2 modules all over the place for about $15-$20 each. 16GB would be closer to $75-80 this way, or about $400 for a functional drive (figuring in $50 or so for a good commercial-grade CF card plus shipping/fees).

So pretty close to parity in price.

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Wed Feb 11, 2009 9:26 pm

Why would consumers need server-grade hardware? Your logic is out of left field.

So if we were to then apply this logic, we should all toss out our Western Digital Green Power drives because they're not used in servers, and thus will almost certainly die a premature death.

Intel doesn't recommend using their CONSUMER GRADE SSDs for SERVER USE because they are NOT enterprise-grade hardware. Almost no one actually uses any sort of enterprise-grade storage medium in their home PCs, SSD or otherwise.

You do realize that they don't recommend consumer-grade HDDs for server use either, right? Servers don't use consumer-grade HDDs, they use 10k/15k SCSI/SAS drives.

frostedflakes
Posts: 1608
Joined: Tue Jan 04, 2005 4:02 pm
Location: United States

Post by frostedflakes » Wed Feb 11, 2009 11:52 pm

QuietOC wrote:
frostedflakes wrote:In Windows at least, I don't think there's much difference between full and quick format. Both delete the master file table, and then the full format also checks for bad sectors (this is why it takes so much longer). AFAIK, neither overwrites the drive with zeros, which is probably what would be really damaging to an SSD.
Microsoft changed the full format in Vista (and presumably 7) to write zeros.
Thanks for the correction, I didn't know about that.

Plekto
Posts: 398
Joined: Tue Feb 19, 2008 2:08 pm
Location: Los Angeles

Post by Plekto » Thu Feb 12, 2009 12:01 am

Aris wrote:why would consumers need server grade hardware? your logic is out of left field.
The reason is that only the server-grade SSDs are close to the reliability of a standard consumer hard drive. The others are well below that. Now, for most people, that's not a big deal. But for me, that's huge, since I haven't had a hard drive last more than 3 years in a long time, other than the couple of more recent server-grade ones, which are holding up better.

http://www.tgdaily.com/content/view/36575/145/
I found this from last year. The manufacturer in question is Dell and the technology is NAND, just like the Intel ones. It's fast but it's not good enough, because despite the press/comments by Dell, the actual techs doing the RMAs are reporting 10-20% failure rates.

Ouch? It looks like NAND/the cheaper stuff is just not good, so that leaves other slightly more expensive technologies if you care about your data...

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Thu Feb 12, 2009 6:16 am

Plekto wrote:
Aris wrote:why would consumers need server grade hardware? your logic is out of left field.
The reason is that only the server grade SSDs are close to the reliability of a standard consumer hard drive. The others are well below that.
A 3-year warranty is pretty much standard for most HDDs, and that's what this SSD has. Seems like reliability is about the same to me until someone has actual data suggesting otherwise, data you have yet to show. Your opinions about what you think are and are not below standard do not constitute facts.
Plekto wrote:since I haven't had a hard drive last more than 3 years in a long time other than the couple of more recent server grade ones which are running better.
So if I'm hearing you correctly, even normal consumer-grade HDDs do not meet your expectations of reliability. So why then are you so critical of consumer-grade SSDs and not of consumer-grade HDDs?

If you want to discuss the reliability of enterprise-level storage technology, I think that you should create another thread, because that's not what this thread is about.

m^2
Posts: 146
Joined: Mon Jan 29, 2007 2:12 am
Location: Poland
Contact:

Post by m^2 » Thu Feb 12, 2009 1:29 pm

Plekto wrote:Found it. A real industry article. The X25 uses multi-layer NAND.
X25-M uses MLC, X25-E uses SLC.
Aris wrote:
Plekto wrote:
Aris wrote:why would consumers need server grade hardware? your logic is out of left field.
The reason is that only the server grade SSDs are close to the reliability of a standard consumer hard drive. The others are well below that.
A 3-year warranty is pretty much standard for most HDDs, and that's what this SSD has. Seems like reliability is about the same to me until someone has actual data suggesting otherwise, data you have yet to show. Your opinions about what you think are and are not below standard do not constitute facts.
3? For JMicron you have 2, IIRC some time ago it was 1. For the Vertex-like Super Talent you get 1. Ridata: 1-2.
AFAIK no cheap drive gives you 3 years and just a while ago 1 year was standard. For enterprise you usually get 3-5 years.

About the ACARD: It's very cool, but good only for specific purposes. When write performance is significant, it just blows flash away. The same with multitasking - did you see the iPEAK results?
Price/GB is very high and price/MBps isn't great unless you go with really low capacity.
Sidenote: 32 GB != 32 GB. With the ACARD, you want to use the internal ECC unless you buy ECC memory, which costs more. But this consumes 1/9 of the capacity and leaves you with ~30,541,989,664 B out of the 34,359,738,368 B that you really have.
OTOH Intel, just like the majority of storage manufacturers, claims that 1 GB == 10^9 B and gives you ~32,000,000,000 B.
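
For anyone wondering where those byte counts come from, it's just GiB-vs-GB bookkeeping plus the 1-in-9 ECC overhead mentioned above (small rounding differences aside):

Code:
# Usable capacity of a 32 GiB RAM drive with 1/9 spent on internal ECC,
# compared against a "32 GB" flash drive counted in decimal gigabytes.
GiB = 2**30

acard_raw = 32 * GiB                # 34,359,738,368 bytes of actual DRAM
acard_usable = acard_raw * 8 // 9   # ~30.5e9 bytes left after 1-in-9 ECC overhead

intel_marketing = 32 * 10**9        # "32 GB" as most storage vendors count it

print(f"ACARD raw      : {acard_raw:,} B")
print(f"ACARD with ECC : {acard_usable:,} B")
print(f"'32 GB' flash  : {intel_marketing:,} B")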

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Thu Feb 12, 2009 2:30 pm

m^2 wrote:3? For JMicron you have 2, IIRC some time ago it was 1. For the Vertex-like Super Talent you get 1. Ridata: 1-2.
AFAIK no cheap drive gives you 3 years and just a while ago 1 year was standard. For enterprise you usually get 3-5 years.
For the Intel X25-M it is a 3-year warranty.

Source: http://www.intel.com/support/flash/pssd ... 029645.htm

"Intel’s Three Year Limited Warranty
Intel warrants to the purchaser of the Product (defined herein as the Intel® X25-E, X25-M, and X18-M SATA Solid-State Drives)"

All 3 of their SSDs, from SLC to MLC, are under the same 3-year warranty.

Tobias
Posts: 530
Joined: Sun Aug 24, 2003 9:52 am

Post by Tobias » Thu Feb 12, 2009 3:28 pm

m^2 wrote: Sidenote: 32 GB != 32 GB. With ACARD, you want to use the internal ECC unless you buy ECC memory, which costs more. But this consumes 1/9 capacity and leaves you with ~30541989664B out of 34359738368 that you really have.
OTOH Intel, just like majority of storage manufacturers claims that 1 GB==10^9B and gives you ~32000000000B.
Funnily enough, though, OCZ for some reason doesn't. Their 30GB is actually 30GiB of usable space, and in a post in their forum they said that there is 32+4GiB available in the Apex disks for wear levelling.

Plekto
Posts: 398
Joined: Tue Feb 19, 2008 2:08 pm
Location: Los Angeles

Post by Plekto » Thu Feb 12, 2009 4:32 pm

m^2 wrote:
Plekto wrote:Found it. A real industry article. The X25 uses multi-layer NAND.
X25-M uses MLC, X-25-E uses SLC.
Even the E model is using NAND, but you're right - SLC is much better. Pass on the M and spend $50 more for the E.

And, no, 2-3 years for a hard drive is really too short. I tend to upgrade my systems about every 3-5 years, and having to swap out $200 in drives every other year makes me upset. Sure, I get *some* credit (lately a coupon? joy) if I RMA the drives back, but often the things crap out just after the 2-year warranty is up. I have one right now that's 2 years old, in fact, that's developing errors - I'll probably replace that with a WD GP and just NOT RMA it back...

By comparison, the 5 year warranty on the WD enterprise drives is at least some consolation. $10 more up front at least gets me a new drive in a couple of years.

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Fri Feb 13, 2009 5:48 am

Plekto wrote: And, no, 2-3 years for a hard drive is really too short. I tend to upgrade my systems about every 3-5 years and having to swap out $200 in drives every other year makes me upset.
I dunno what exactly you're doing to toast your consumer-grade HDDs so fast, but I haven't had a single drive go on me for over 5 years now. And the last time I lost one, it was because it got over 120 degrees in my room when the A/C crapped out one summer and it overheated. A problem, I might add, I wouldn't have had if I was using an SSD, since they have a much higher operational temp.

So I feel quite comfortable with a 3-year warranty.

m^2
Posts: 146
Joined: Mon Jan 29, 2007 2:12 am
Location: Poland
Contact:

Post by m^2 » Fri Feb 13, 2009 6:03 am

Aris wrote:All 3 of their SSDs, from SLC to MLC, are under the same 3-year warranty.
This thread is about cheap drives. The X25 is consumer high end. Indeed, quality is expected here.
Plekto wrote:
m^2 wrote:
Plekto wrote:Found it. A real industry article. The X25 uses multi-layer NAND.
X25-M uses MLC, X-25-E uses SLC.
Even the E model is using NAND
Just like any other flash SSD. ;)

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Fri Feb 13, 2009 6:21 am

m^2 wrote:
Aris wrote:All 3 of their SSDs, from SLC to MLC, are under the same 3-year warranty.
This thread is about cheap drives. The X25 is consumer high end. Indeed, quality is expected here.
I believe the price on the X25-M has dropped to the point of being "affordable", which is what the title of the thread is about.

80GB for $370 isn't really all that bad. It's enough space for an OS, a few applications/games, and a modest MP3 collection with room to spare, and the overall cost of ownership isn't too outlandish.

@Plekto
Regarding the whole "Pass on the M and spend $50 more for the E": while it's true the SLC version is less than $50 more at this point, it's also less than half the storage space. I only have 2 games on my PC at home with a WinXP Pro installation. The only other major software I have is OpenOffice. All of which fills right around 35GB of space. So you would be pretty hard pressed to use the X25-E all by itself with no additional drives in your system, which would be possible with the M version.

m^2
Posts: 146
Joined: Mon Jan 29, 2007 2:12 am
Location: Poland
Contact:

Post by m^2 » Fri Feb 13, 2009 7:23 am

Aris wrote:@Plekto
Regarding the whole "Pass on the M and spend $50 more for the E": while it's true the SLC version is less than $50 more at this point, it's also less than half the storage space. I only have 2 games on my PC at home with a WinXP Pro installation. The only other major software I have is OpenOffice. All of which fills right around 35GB of space. So you would be pretty hard pressed to use the X25-E all by itself with no additional drives in your system, which would be possible with the M version.
It highly depends on one's needs. I wouldn't be pressed at all with 20 GB and would fit in 16. But I also met a person for whom a 300 SI-GB VelociRaptor wasn't enough for the OS.

Plekto
Posts: 398
Joined: Tue Feb 19, 2008 2:08 pm
Location: Los Angeles

Post by Plekto » Fri Feb 13, 2009 7:16 pm

http://www.pcper.com/article.php?aid=669

Just came out. Yet another reason Multi-layer NAND is no good.

m^2
Posts: 146
Joined: Mon Jan 29, 2007 2:12 am
Location: Poland
Contact:

Post by m^2 » Sat Feb 14, 2009 4:11 am

Plekto wrote:http://www.pcper.com/article.php?aid=669

Just came out. Yet another reason Multi-layer NAND is no good.
This has nothing to do with MLC. Even SLC FusionIO shows performance degradation over time.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Sat Feb 14, 2009 8:16 am

Plekto wrote:http://www.pcper.com/article.php?aid=669

Just came out. Yet another reason Multi-layer NAND is no good.
MLC or SLC, these issues will occur, but it does add new items to the list of things not to do on an SSD:

1. Never do a full format on an SSD

2. Never put a swap file on an SSD (Turn off virtual memory or move the pagefile/swapfile to another drive (a traditional hard drive) )

3. Never put your temporary internet files on the SSD if you have the option to put them elsewhere. (This goes for other internet apps that write small chunks, like BitTorrent.)

4. Never defragment an SSD

5. Never assume your SSD won't fail. Always make backups to another media type.

Quote:
A laptop user placing light workloads on their X25-M may never see the worst of these issues, but many users are going solid state for their desktop OS partitions, and a typical power user workload can fragment these drives in short order. It is likely that other manufacturers will employ similar write combining techniques in the future, and with those new devices may come similar real world slowdowns. While the specialized controller used by Intel enables it to bulldoze through most scenarios, we have seen that even the best logic is subject to severe write combining / internal fragmentation. Hopefully Intel can further tweak their algorithms with a future firmware update to the X25-M. In the meantime, we hope our suggestions keep your SSD on the speedier side of things.

There are a few more SSD slowdown articles in the PCPer pipeline. Stay tuned for further updates!

Plekto
Posts: 398
Joined: Tue Feb 19, 2008 2:08 pm
Location: Los Angeles

Post by Plekto » Sat Feb 14, 2009 1:20 pm

With that list, you can see why that ramdisk is a viable alternative, especially as it has a built-in backup option. Memory might be a bit more expensive, but it works great.

Oh - the article was mentioning - and read the responses on Slashdot about the same discussion as well, especially the original poster's responses - that the issue is that SSDs are essentially incompatible with the NTFS file system itself.

SSDs are optimized for large blocks, while NTFS writes much smaller ones. So what happens is that the drive has a blank block and sticks part of the write in it (say a 16KB "block" in NTFS). When it comes around to write again, it looks and sees that there's tons of free space in that block, so it takes that 16KB out and combines it with 16KB from someplace else and writes them together in a single SSD block (now 2 out of 32 16KB positions are filled).

Eventually it's spending most of its time shuffling and internally defragmenting its own memory and lookup tables. Wear doesn't increase, but it does run very slowly. The guy mentioned elsewhere in response that he was seeing this behavior in as little as minutes or hours with the right testbed application thrashing the poor drive.

One way to get around this is to set the NTFS block size to be identical to the SSD block size (in the X25's case, 512KB - ouch). As you might expect, this might make for a bit of havoc with a system volume's many small files... What's needed is a new SSD-specific file system that gets rid of this problem. Such a thing exists, but only in dedicated systems that have no BIOS/direct access to the memory and no mapping of it as a "drive". This would also require a change in SSD interfaces so they no longer act like NTFS drive replacements.

The other way is for SSDs to use 16 or 32KB block arrays. This is more expensive, though, as that's a ton of tiny chips. But it would solve the problem quite nicely.
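
A toy model of the mismatch, using the sizes mentioned above (16KB NTFS clusters vs the 512KB erase block quoted for the X25) - real controllers are obviously much smarter than this, so treat it as an upper bound:

Code:
# Toy model of the small-write problem described above: when the file system
# issues writes much smaller than the SSD's erase block, the drive can end up
# reading, merging and rewriting a whole block for each small write.
ERASE_BLOCK = 512 * 1024   # 512 KB erase block (figure quoted for the X25)
CLUSTER = 16 * 1024        # 16 KB NTFS cluster from the example above

# Worst case: every cluster-sized write lands in a different erase block and
# forces a full read-modify-write of that block.
worst_case_amplification = ERASE_BLOCK / CLUSTER
print(f"worst-case write amplification: {worst_case_amplification:.0f}x")  # 32x

# Best case: the controller batches 32 clusters into one erase block first.
clusters_per_block = ERASE_BLOCK // CLUSTER
print(f"clusters that fit in one erase block: {clusters_per_block}")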

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Sun Feb 15, 2009 11:22 pm

Plekto wrote: The other way is for SSDs to make 16 or 32kb block arrays. This is more expensive, though, as that's a ton of tiny chips. But it would solve the problem quite nicely.
Actually it just requires a simple reorganization of the internal array inside a flash chip. Unfortunately chip-makers are still making flash chips as if they will be used standalone (e.g., in a CF card, or in a USB stick) and so they arrange the cells for single 512B page use. Changing the arrangement to make them suitable for use in parallel arrays wouldn't require any changes to pin counts or timing, but none of the chip makers are doing it (yet). The problem then, of course, is you can only optimize for one bit width at a time, so you wouldn't want to buy chips optimized for an 8 chip x 64 byte array and stick them into a drive that needs a 16 chip x 32 byte array.
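
The widths have to multiply out to the same logical page, which is why a chip laid out for one array width can't just be dropped into another. Purely illustrative numbers:

Code:
# Striping one 512-byte logical page/sector across an array of flash chips:
# each chip has to supply page_size / chip_count bytes per access.
PAGE = 512  # bytes per logical page

for chips in (1, 8, 16):
    bytes_per_chip = PAGE // chips
    print(f"{chips:2d} chips -> {bytes_per_chip} bytes from each chip per access")

So a die organized to hand out 64-byte chunks suits an 8-chip array but not a 16-chip one, exactly as described above.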

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Wed Feb 18, 2009 9:20 am

Plekto wrote:With that list, you can see why that ramdisk is a viable alternative, especially as it has a built-in backup option. Memory might be a bit more expensive, but it works great.
Well, the topic is "affordable" and "SSD", and a ramdisk is neither.
Plekto wrote:Eventually it's spending most of its time shuffling and internally de-fragmenting its own memory and lookup tabels. Wear doesn't increase, but it does run very slowly.
I have seen this stated before in other reviews. While yes, performance does decrease a bit, it's still far faster than anything any rotating magnetic drive can produce. You have to keep things in perspective when you post.

There are a multitude of negative side effects for almost any storage medium, HDDs and your ramdisks included. Let's talk about how performance on an HDD is slower when using blocks of data closer to the center of the drive as opposed to the edge of the disk, or the noise associated with the heads, and the delays of such heads. And then with your ramdisks, you're relying on a battery for all your data integrity, which can lose its charge over time, which happens to CMOS batteries all the time after a few years.

In the end, every option has its pluses and negatives. Nothing is perfect. But I feel SSD technology has reached a point where its pluses now outweigh its negatives. Speed is still amazing, and getting better with every new revision, at a rate far exceeding anything from HDDs. Prices are falling fast. Storage space is also increasing rapidly. Just a couple of years ago 16GB was a lot; now they're coming out with 256GB and 512GB within the next year. SSD technology already has quite a few things going for it over traditional HDD technology, and it will soon catch up on all of its shortcomings. At which point it will be superior in every way to HDDs.

And let's not forget the single biggest reason to go with SSDs, which by the way is the reason we are all on SILENT PC REVIEW's website. It is SILENT.

Spare Tire
Posts: 286
Joined: Sat Dec 09, 2006 9:45 pm
Location: Montréal, Canada

Post by Spare Tire » Thu Feb 19, 2009 7:18 pm

Regarding partition alignment, I am under the impression that the starting offset depends on how big the flash block is, and that depends on what's used in each drive? OCZ recommends a 128KB offset, but can that be extrapolated to other drives? I read Intel drives can only erase in blocks of 512KB, so their partitions should start at that offset?
Can anybody confirm this? If this is the case, then we have to know what block size each drive uses, no?

Also, I think it's important not just to set the offset of the drive but also to set the correct partition size. Since we need to align Windows allocation units with the disk's tracks (and also possibly the flash blocks?), things get even more complicated. The smallest common multiple of 4096-byte allocation units and 32256-byte tracks (63 sectors of 512 bytes) is 8 tracks, so the partition must be a multiple of 8 tracks. I haven't factored in the size of flash blocks because I still need confirmation of what size a flash block is, and whether it varies among manufacturers.
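
A quick way to check that arithmetic, and to fold in an erase-block size once it's confirmed - the 512KB below is just the figure mentioned for the Intel drives, not verified:

Code:
# Check the alignment arithmetic above: find the smallest size that is a whole
# number of 4096-byte clusters, 32256-byte (63-sector) tracks, and optionally
# flash erase blocks.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

CLUSTER = 4096             # NTFS allocation unit
TRACK = 63 * 512           # 32,256 bytes per legacy "track"
ERASE_BLOCK = 512 * 1024   # assumed Intel erase block, still needs confirmation

step = lcm(CLUSTER, TRACK)
print(f"cluster/track alignment: {step} bytes = {step // TRACK} tracks")   # 8 tracks

step_all = lcm(step, ERASE_BLOCK)
print(f"with erase blocks too  : {step_all} bytes = {step_all // TRACK} tracks")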

Am I mistaken?

gogos7
Patron of SPCR
Posts: 26
Joined: Sun Jun 05, 2005 3:22 pm
Location: Greece

Post by gogos7 » Fri Feb 20, 2009 6:31 am

Hello,

Anyone from Japan or China here??

New "I-O Data" SSDs, which are actually Samsung MLC (90 read, 70 write), are selling for:
64GB -> 85 euro &
128GB -> 150 euro!
French info here

In UK, similar Samsung here:
http://www.novatech.co.uk/novatech/spec ... SAM-SSD64M
