1st Intel SSD Review & 4 Way SSD RoundUp

Silencing hard drives, optical drives and other storage devices

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

MikeC
Site Admin
Posts: 12285
Joined: Sun Aug 11, 2002 3:26 pm
Location: Vancouver, BC, Canada
Contact:

1st Intel SSD Review & 4 Way SSD RoundUp

Post by MikeC » Thu Sep 04, 2008 12:21 pm

Last edited by MikeC on Mon Sep 08, 2008 1:53 pm, edited 2 times in total.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Thu Sep 04, 2008 6:40 pm

Thanks for the heads up.

I look forward to their followup article with XP testing instead of Vista (since I have no plans to use Vista in the near future).

Of course it would be nice if they tested the Core V1 versus the Core V2 but at least they mentioned the relative pricing and possible drawbacks.

Probably the best thing about that article is that they aren't playing sides. Plenty of positives and negatives clearly documented. It doesn't read like a fluff piece or like they have an axe to grind.

AZBrandon
Friend of SPCR
Posts: 867
Joined: Sun Mar 21, 2004 5:47 pm
Location: Phoenix, AZ

Post by AZBrandon » Thu Sep 04, 2008 7:39 pm

Wow, I skipped about half the benchmarks, but WOW! That is a lot of data to take in! I really enjoy reading reviews where you can tell they are genuinely working to deliver as much relevant data as possible. I also appreciated that they didn't publish the IOMeter benchmarks, since they weren't convinced the data was being reported accurately. I like that: if the data is suspect, don't report it and confuse people. Anyway, good stuff to read; you can tell the technology is certainly maturing.

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Thu Sep 04, 2008 8:38 pm

I really wish they had a comparative graph of all the tested items, including a Velociraptor for reference. Each product has its own graph per test, so with four graphs per test you have to keep flipping back and forth between all of them to see how they compare. Very cumbersome. I don't like how they displayed the test results at all.

Bar81
Posts: 261
Joined: Thu Sep 04, 2003 4:19 pm
Location: Dubai

Post by Bar81 » Thu Sep 04, 2008 9:31 pm

Unfortunately, another worthless SSD review. It seems all these sites do is benchmark the drives rather than using them as an OS drive day to day. There is a small comment at the end of the review but the language suggests the author has no first hand experience in that regard.

MikeC
Site Admin
Posts: 12285
Joined: Sun Aug 11, 2002 3:26 pm
Location: Vancouver, BC, Canada
Contact:

Post by MikeC » Thu Sep 04, 2008 10:56 pm

I would not pass this one off as worthless. PCMark combines a series of tests that take into account many aspects of performance, and the SSDs kill the Velociraptor, most scoring twice as high. It seems to me that most SSDs would easily feel much faster than most HDDs for one simple reason: Random Access time. It's literally a hundred times faster than the fastest HDDs. In most desktop usage, this one aspect dictates how responsive a PC feels more than any other. Opening and closing files is not unimportant, but most files I open/close just aren't big enough to make throughput that important.

I have some experience with the Cenatek Rocket Drive, an early iteration of SSD going back to 2002. Its RAM-based speed easily made up for the much slower processor in a system I used to compare against another PC with a much faster CPU and good conventional HDD.

My review of the Cenatek may be of interest; somewhat historical now, but still relevant to the comparison between SSD and HDD. While current HDDs are much faster than the ones used in this review, current SSDs are also faster than the Cenatek. HDD latency has not improved much; we're still seeing 12 ms (or higher) random access times in most 7200 RPM drives.
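MikeC's point about access time dominating responsiveness can be put into a toy model. This is a sketch, not SPCR's benchmark data: the 12 ms HDD figure comes from the post above, the SSD latency follows the "hundred times faster" claim, and the file size and throughput numbers are my assumptions.

```python
# Back-of-envelope: time to open N small files ~= N * (access latency + size / throughput).
def open_time_s(n_files, latency_ms, throughput_mb_s, file_kb=100):
    transfer_s = (file_kb / 1024) / throughput_mb_s  # time moving the bytes
    return n_files * (latency_ms / 1000 + transfer_s)

hdd_s = open_time_s(1000, 12.0, 80.0)    # ~13.2 s, almost all of it seek time
ssd_s = open_time_s(1000, 0.12, 150.0)   # ~0.77 s, latency barely registers
```

For small files the seek term swamps the transfer term, which is why the SSD feels an order of magnitude faster even when its sequential throughput is only modestly better.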

TBorgeaud
Posts: 13
Joined: Fri Sep 05, 2008 2:31 am
Location: London, UK

Post by TBorgeaud » Fri Sep 05, 2008 3:15 am

For several years I have been considering the possibility of using solid state storage. More recently, since I've become interested in having a quiet PC and flash memory / SSD prices have fallen, using an SSD, or even just Compact Flash, has been looking more attractive.

However, the limited write endurance of SSD still concerns me.

It seems to me that, for CF and other simple non-SSD flash-based storage products, write endurance does need to be considered. The price and capacities of SSDs available these days suggest that, whether or not write endurance was ever a real problem, it is likely to be a non-issue now, but I am not at all certain.

I have read various articles and reviews concerning SSD write endurance (including a nice article at storagesearch regarding ssd myths).

However, all the articles (that do mention endurance) tend either to dismiss the issue as unimportant for typical applications and typical flash endurances (estimated at 100k+ writes) or to mention that SSDs are available with excellent endurance and good wear leveling.

For me, this leaves the question of which SSDs actually feature wear leveling and flash cell endurance good enough that write endurance can be completely ignored, and whether cheaper SSDs will suffer in "pathological" situations (pathological as opposed to simply a maximum quantity of writing).

I.e., how can I estimate how much better the reliability and lifespan of a typical SSD would really be in particular situations (could I actually wear out an SSD)?

Also, are there any CF style alternatives that might provide good endurance for use with a typical server/desktop OS installation which does not require huge storage?

Bar81
Posts: 261
Joined: Thu Sep 04, 2003 4:19 pm
Location: Dubai

Post by Bar81 » Fri Sep 05, 2008 7:13 am

MikeC wrote:I would not pass this one off as worthless. PCMark combines a series of tests that take into account many aspects of performance, and the SSDs kill the Velociraptor, most scoring twice as high. It seems to me that most SSDs would easily feel much faster than most HDDs for one simple reason: Random Access time. It's literally a hundred times faster than the fastest HDDs. In most desktop usage, this one aspect dictates how responsive a PC feels more than any other. Opening and closing files is not unimportant, but most files I open/close just aren't big enough to make throughput that important.

I have some experience with the Cenatek Rocket Drive, an early iteration of SSD going back to 2002. Its RAM-based speed easily made up for the much slower processor in a system I used to compare against another PC with a much faster CPU and good conventional HDD.

My review of the Cenatek may be of interest; somewhat historical now, but still relevant to the comparison between SSD and HDD. While current HDDs are much faster than the ones used in this review, current SSDs are also faster than the Cenatek. HDD latency has not improved much; we're still seeing 12 ms (or higher) random access times in most 7200 RPM drives.
I've used the OCZ Core SSD, so I know first-hand all about the random access time advantages, which is why I bought one in the first place. It's the random write performance that's the issue, and until a reviewer has used one of the MLC drives (or even the SLC drives) day to day for several weeks, that reviewer is unqualified to write a review of the unit. Simply hooking up the drive and running a bunch of benchmarks is worthless. Plus, where are the comments about using various controllers? (Try installing on an Intel ICH9R versus a JMicron JMB360 versus an AMD SB450, for example, and you'll see what I'm talking about.) These "reviews" are an embarrassment and do a huge disservice to the consumer.

m^2
Posts: 146
Joined: Mon Jan 29, 2007 2:12 am
Location: Poland
Contact:

Post by m^2 » Fri Sep 05, 2008 7:56 am

Agree, it's not a good review. No random read / write tests? These are the most interesting about flash!
And Sandra shows that NCQ is enabled. :lol:
TBorgeaud wrote:For several years I have been considering the possibility of using solid state storage. More recently, since I've become interested in having a quiet PC and flash memory / SSD prices have fallen, using an SSD, or even just Compact Flash, has been looking more attractive.

However, the limited write endurance of SSD still concerns me.

It seems to me that, for CF and other simple non-SSD flash-based storage products, write endurance does need to be considered. The price and capacities of SSDs available these days suggest that, whether or not write endurance was ever a real problem, it is likely to be a non-issue now, but I am not at all certain.

I have read various articles and reviews concerning SSD write endurance (including a nice article at storagesearch regarding ssd myths).

However, all the articles (that do mention endurance) tend either to dismiss the issue as unimportant for typical applications and typical flash endurances (estimated at 100k+ writes) or to mention that SSDs are available with excellent endurance and good wear leveling.

For me, this leaves the question of which SSDs actually feature wear leveling and flash cell endurance good enough that write endurance can be completely ignored, and whether cheaper SSDs will suffer in "pathological" situations (pathological as opposed to simply a maximum quantity of writing).

I.e., how can I estimate how much better the reliability and lifespan of a typical SSD would really be in particular situations (could I actually wear out an SSD)?

Also, are there any CF style alternatives that might provide good endurance for use with a typical server/desktop OS installation which does not require huge storage?
I'm concerned about it too. Generally it's not an issue: 10,000 write cycles on a 32 GB SSD gives 312.5 TB of total writes, or, over a 5-year life expectancy, more than 175 GB daily. This rough estimate is not fully correct, as small writes tend to wear SSDs out much faster. Here you have an analysis of max daily traffic for 10.66 KB writes. If you move things like temp directories and the page file off your flash drive, you won't get too many small writes, so that's not an issue either. What bothers me is that, if I understand it correctly, wear leveling can't work perfectly. If OS files that are never updated take 80% of the storage space, you get only 20% of the durability.
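The endurance arithmetic above, made explicit. It assumes perfect wear leveling; the cycle count, capacity, and 80/20 split are the figures from the post, not manufacturer data.

```python
# Total write budget and daily allowance for a 32 GB SSD rated at 10,000 cycles.
CAPACITY_GB = 32
CYCLES = 10_000
YEARS = 5

total_tb = CAPACITY_GB * CYCLES / 1024            # 312.5 TB (binary division)
daily_gb = CAPACITY_GB * CYCLES / (YEARS * 365)   # ~175.3 GB per day

# The static-data worry: if never-rewritten OS files pin 80% of the cells,
# only the remaining 20% absorb wear.
effective_daily_gb = daily_gb * 0.20              # ~35 GB per day
```

Even the pessimistic 20% figure leaves a daily budget well above what most desktop workloads write, which is why the concern is mostly about pathological small-write patterns rather than raw volume.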

Matija
Posts: 780
Joined: Sat Mar 17, 2007 3:17 am
Location: Croatia

Post by Matija » Fri Sep 05, 2008 8:39 am

Correct me if I'm wrong, but if SLC drives fail regarding writes, you can still read data from them. A write-dead MLC cell doesn't even allow reading.

ist.martin
Posts: 220
Joined: Fri Jul 18, 2003 11:59 am
Location: Vancouver, B.C.

I'll jump in ...

Post by ist.martin » Fri Sep 05, 2008 8:41 am

I have been using a SSD exclusively for 4+ years now.

I have a 0 dB system, with no moving parts. I use it 5-11 hours a day for work and leisure. I am in the stock/financial business, and have many browser windows open simultaneously, all day long.

I don't do a lot of writing - as I am reading data most of the time, but I would consider my usage pattern fairly typical of a non-gamer and non-developer.

My current system uses a 1.5 GHz Celeron M and an 8 GB SSD running XP Pro. There is no doubt in my mind that my system's performance is much snappier due to the SSD. I can compare at home against two other systems with more CPU and motherboard power. My weak onboard 855GM graphics slow things down with complex window rendering, but everything else is very quick.

Count me as an SSD fan.

CA_Steve
Moderator
Posts: 7650
Joined: Thu Oct 06, 2005 4:36 am
Location: St. Louis, MO

Post by CA_Steve » Fri Sep 05, 2008 5:24 pm

I look forward to a reasonably priced 32GB SSD ($100) to use for my OS and applications drive. Now, if they'd only get around to providing a clear indication of what the true lifetime of the drive will be. MTBF of 1,000,000+ hrs (114 years if always on), and yet the warranties are 1-5 years. The number of write cycles is vague at best.
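The gap complained about above can be put in numbers. MTBF is a population failure-rate statistic, not a promised per-drive lifespan, which is why the two figures can legitimately differ; the calculation below just quantifies how far apart they are.

```python
# MTBF vs. warranty, using the figures quoted in the post above.
HOURS_PER_YEAR = 24 * 365

mtbf_years = 1_000_000 / HOURS_PER_YEAR   # ~114.2 years if powered on 24/7
warranty_years = 5                        # top end of the 1-5 year range above
ratio = mtbf_years / warranty_years       # ~23x gap between the two claims
```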

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Sat Sep 06, 2008 6:15 am

MTBF and warranties are vague at best.

Why is it that people see a 3 year warranty on a hard drive from Seagate or Western Digital and feel reassured but they see a 2 year warranty on an SSD from OCZ and freak out?

Other than the IBM Deathstar drives I bought back when a 40GB 7200 RPM drive was considered fast, I haven't had a hard drive give me any serious problems. I have drives lying around my residence, ranging from 6GB to 120GB, that are all usable, working drives. I just don't use them because they are slower than my current drives.

I have several PCs and hand down drives from the newest to the oldest. I usually have 3 or 4 PCs in the chain.

If an SSD lasts half as long as a hard drive does then I still would get several years of use out of it. A drive currently makes it through 2 or 3 PCs before I decide it's too slow to use but it doesn't die. If an SSD died on that 3rd PC it might force the next upgrade but it wouldn't be the end of the world. It still might be possible that I'd replace the drive before it completely failed just because newer SSDs drop in price enough to make me want to upgrade.

You just have to remember the rules

1. Turn off virtual memory or move the pagefile/swapfile to another drive (a traditional hard drive)

2. Don't defragment your SSD

3. You still need to make backups of important data. Any drive can fail without warning.

I'm waiting on price drops for SSDs but I'm not really that concerned about reliability.

trxman
Posts: 145
Joined: Wed Dec 15, 2004 5:45 pm
Location: system

Post by trxman » Sun Sep 07, 2008 9:03 am

thumbs up for SSDs!

waiting to see Core V2 in action!

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Sun Sep 07, 2008 9:12 pm

I get 138MB/sec sequential reads from mine. More like Core V1 speed, I wonder how anyone can get 170MB/sec from this V2 drive...

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Mon Sep 08, 2008 7:06 am

highlandsun wrote:I get 138MB/sec sequential reads from mine. More like Core V1 speed, I wonder how anyone can get 170MB/sec from this V2 drive...
Does your motherboard support SATA II? If not, you're likely hitting the theoretical 150 MB/s throughput limit of original SATA.

I'm with dhanson as far as reliability is concerned. The oldest HDD I currently own is a 120GB Samsung Spinpoint that I bought maybe 3 years ago, after my last conventional HDD failed due to overheating (hot summer day + broken AC = bad for HDDs). Even if you assume that incident didn't happen, the system was built about 4 years ago, and it's been on its last leg for almost a year now. The only reason I haven't gotten rid of it yet is that I've been trying to pay off some bills before I build a new system. It was replaced as my main rig over a year ago and given to my wife.

I've heard from numerous people now who have used SSDs as main system drives (no conventional HDD for pagefiles etc.) for over 3 or 4 years with no ill effects whatsoever. I really believe this issue with SSDs has been overhyped. It was likely a problem years ago that has long since been solved, but the stigma has persisted. I can maybe see a server running 24/7 running into this issue with SSDs, but certainly not the average consumer.

Also, if I had had an SSD instead of my conventional 3.5" Spinpoint, it likely wouldn't have overheated that hot summer day a few years ago, and it would still be working for me right now. So if you take into account all the dangers that can cause a conventional HDD to fail which SSDs are not susceptible to, I truly think SSDs have the upper hand when it comes to longevity and reliability.

ist.martin
Posts: 220
Joined: Fri Jul 18, 2003 11:59 am
Location: Vancouver, B.C.

Intel enters fray ...

Post by ist.martin » Mon Sep 08, 2008 10:13 am


MikeK
Patron of SPCR
Posts: 321
Joined: Mon Aug 18, 2003 7:47 pm
Location: St. Louis, USA

Post by MikeK » Mon Sep 08, 2008 12:58 pm

I don't think 2008 is the year of the SSD like they said but it's getting there. Once the price goes down more and the capacity goes up, we'll see them becoming more mainstream. Right now they are not. Micron's 34nm flash memory should help it along. I believe they are still the only one with that.

TBorgeaud
Posts: 13
Joined: Fri Sep 05, 2008 2:31 am
Location: London, UK

Post by TBorgeaud » Mon Sep 08, 2008 1:06 pm

I have no doubt that SSDs are more reliable than conventional spinning hard disks. However, I feel that reliability and durability (MTBF etc.) are not quite the same thing as expected lifespan/endurance.

SSD or not, backups seem pretty sensible for any data which I cannot (easily) afford to lose.

Therefore, if some form of data duplication for backups is going to be required, in any situation where reliability of conventional hard disks is not a real problem, any increase in reliability would be a bonus for me rather than a necessity. All other things being equal, except perhaps some cost, I would prefer the more reliable technology over the less reliable.

However, if my usage of an SSD device (or similar) limited its lifespan to something comparable to, or lower than, the typical lifespan of a conventional hard disk, the situation is somewhat different: I may (depending upon the application) be better off choosing a conventional hard disk.

As far as I can tell, MLC flash cells do have a relatively low write endurance. Current SSDs that use MLC technology will still provide considerable write endurance due to wear leveling (and also some handling of bad blocks).

However, I have found little information to give me any idea about what to actually expect in terms of wear leveling. I just assume that devices employ wear leveling that is sophisticated enough to deal with unusual but quite possible scenarios.

This is really where I am looking for a bit more information. I would like to be able to judge, with some degree of confidence, where I will definitely not be pushing the limits of a device. I.e., I would like to know whether I could actually wear out a particular SSD in some particular situation or application, and I would like to know that I don't have to worry about any unusual pattern of disk writing (without having to make allowances such as how much free space to leave, or which kinds of temporary files, logging, or journaling to avoid writing to disk).

For my next workstation, at least, the picture is clear. An SSD of the kind featured in the review will very likely be worthwhile simply for the increased performance and reliability. Unless I actually get around to putting together some more comprehensive network storage, I'll probably also use a pair of conventional disks for capacity (probably the same disks I am using at the moment).

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Re: Intel enters fray ...

Post by dhanson865 » Mon Sep 08, 2008 2:03 pm

ist.martin wrote:X-25 M SSD from Intel:
80GB? Are they selling a 96GB flash drive as 80GB and reserving some space for spare sectors and wear leveling?

There are 10 memory chips on each side (20 total) not counting the controller and dram cache. 20x4 would be 80 so maybe they really have 80GB flash on it and it isn't abnormal to see SSDs that aren't close to a multiple of 32GB.

OK this entire message is data on the mainstream/consumer grade SSDs

The 20 chips are apparently split across channels on the drive controller (2 chips per channel)
10 Parallel Channel Architecture with 50nm MLC ONFI 1.0 NAND

Operating Shock 1,000 G/0.5 ms

Operating Temperature 0°C to +70°C

Product Health Monitoring Self-Monitoring, Analysis and Reporting Technology (SMART) commands, plus additional SSD monitoring
http://www.intel.com/design/flash/nand/ ... /index.htm

http://download.intel.com/design/flash/ ... asheet.pdf
Capacity
Notes:
1. 1GB=1,000,000,000 Byte and not all of the memory can be used for data storage.
2. 1 sector = 512Byte

Unformatted Capacity: 80 GB
Total User Addressable Sectors in LBA Mode: 156,301,488

Table 11. Reliability Specifications
Nonrecoverable read errors 1 sector in 10^15 bits read, max
Mean Time between Failure (MTBF) 1,200,000 hours
Power On/Off Cycles 50,000 cycles
Minimum Useful Life 5 years


Power On/Off Cycles
Defined as power being removed from the drive, and then restored. Most host systems remove power from the drive when entering suspend and hibernate as well as on a system shutdown.

Minimum Useful Life
A typical client usage of 20 GB writes per day is assumed. Should the host system attempt to exceed 20 GB writes per day by a large margin for an extended period, the drive will enable the endurance management feature to adjust write performance. By efficiently managing performance, this feature enables the device to have, at a minimum, a five year useful life. Under normal operation conditions, the drive will not invoke this feature.

Native Command Queuing
The Intel X18-M/X25-M SATA SSDs support the Native Command Queuing (NCQ) command set, which consists of
• READ FPDMA QUEUED
• WRITE FPDMA QUEUED
Note: With a maximum queue depth equal to 31.
Apparently if you use the consumer drive in a server it'll throttle you eventually. Then again, how many people write more than 20GB a day on a regular basis? I know I would the first day or two as I installed an OS and some apps, but it probably won't kick in until you do that for several days in a row. I suppose reviewers and people who like to do home benchmarking would have to keep in mind that it might slow down after several days of heavy testing.
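The datasheet figures quoted above can be sanity-checked with a few lines of arithmetic. A sketch only; the helper names are mine, not Intel's, and the 1 GB = 10^9 bytes convention is taken from the datasheet's own note.

```python
# Checking the quoted capacity table (1 GB = 10**9 bytes, 1 sector = 512 bytes).
SECTOR_BYTES = 512
LBA_SECTORS = 156_301_488

user_bytes = LBA_SECTORS * SECTOR_BYTES   # 80,026,361,856 bytes
user_gb = user_bytes / 1e9                # ~80.03 decimal GB, matching "80 GB"

# The 20 GB/day "typical client" assumption over the 5-year minimum useful life:
lifetime_writes_tb = 20 * 365 * 5 / 1000  # 36.5 TB before throttling is expected
```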

Anybody know off the top of their head what a typical DVR/PVR setup like a MythTV box would write in a day?
Last edited by dhanson865 on Mon Sep 08, 2008 2:11 pm, edited 2 times in total.

trxman
Posts: 145
Joined: Wed Dec 15, 2004 5:45 pm
Location: system

Re: Intel enters fray ...

Post by trxman » Mon Sep 08, 2008 2:09 pm

dhanson865 wrote:80GB? Are they selling a 96GB flash drive as 80GB and reserving some space for spare sectors and wear leveling?
actually, yes:
Intel actually includes additional space on the drive, on the order of 7.5 - 8% more (6 - 6.4GB on an 80GB drive) specifically for reliability purposes. If you start running out of good blocks to write to (nearing the end of your drive's lifespan), the SSD will write to this additional space on the drive. One interesting side note: you can actually increase the amount of reserved space on your drive to increase its lifespan. First secure erase the drive, then use the ATA SetMaxAddress command to shrink the user capacity, giving you more spare area.
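The spare-area math from the quote above, including the SetMaxAddress trick. The raw capacity is only implied by the quote (80 GB user + ~6.4 GB spare); the 70 GB shrink target is a made-up illustration, not a recommendation.

```python
# Spare-area fraction implied by the quoted figures.
USER_GB = 80.0
raw_gb = USER_GB + 6.4                    # total flash implied by the quote

spare_pct = (raw_gb - USER_GB) / USER_GB  # 0.08 -> the quoted ~8%

# Shrinking user capacity (via ATA SetMaxAddress, per the quote) grows the
# spare area at the cost of addressable space:
shrunk_user_gb = 70.0
new_spare_pct = (raw_gb - shrunk_user_gb) / shrunk_user_gb  # ~0.234 -> ~23%
```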

one very very interesting X-25 M review with analysis of MLC drives big problem (OCZ Core and Co.):
http://www.anandtech.com/cpuchipsets/in ... i=3403&p=1

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Re: Intel enters fray ...

Post by dhanson865 » Mon Sep 08, 2008 2:16 pm

trxman wrote:
dhanson865 wrote:80GB? Are they selling a 96GB flash drive as 80GB and reserving some space for spare sectors and wear leveling?
actually, yes:
Intel actually includes additional space on the drive, on the order of 7.5 - 8% more (6 - 6.4GB on an 80GB drive) specifically for reliability purposes.

one very very interesting X-25 M review with analysis of MLC drives big problem (OCZ Core and Co.):
http://www.anandtech.com/cpuchipsets/in ... i=3403&p=1
OK, now do the math: with 20 flash chips, how much space does each hold, and what is the true max capacity before the reserve space is taken out? Or do we assume all the review pictures are of a drive other than the 80GB version they are reporting on?

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Re: Intel enters fray ...

Post by dhanson865 » Mon Sep 08, 2008 2:25 pm

dhanson865 wrote:
Table 11. Reliability Specifications
Nonrecoverable read errors 1 sector in 10^15 bits read, max
Mean Time between Failure (MTBF) 1,200,000 hours
Power On/Off Cycles 50,000 cycles
Minimum Useful Life 5 years


Power On/Off Cycles
Defined as power being removed from the drive, and then restored. Most host systems remove power from the drive when entering suspend and hibernate as well as on a system shutdown.

Minimum Useful Life
A typical client usage of 20 GB writes per day is assumed. Should the host system attempt to exceed 20 GB writes per day by a large margin for an extended period, the drive will enable the endurance management feature to adjust write performance. By efficiently managing performance, this feature enables the device to have, at a minimum, a five year useful life. Under normal operation conditions, the drive will not invoke this feature.

Apparently if you use the consumer drive in a server it'll throttle you eventually. Then again, how many people write more than 20GB a day on a regular basis? I know I would the first day or two as I installed an OS and some apps, but it probably won't kick in until you do that for several days in a row. I suppose reviewers and people who like to do home benchmarking would have to keep in mind that it might slow down after several days of heavy testing.

Anybody know off the top of their head what a typical DVR/PVR setup like a MythTV box would write in a day?
For the "extreme" SSD, the MTBF goes up but the power cycles stay the same. Also, there is no throttling of sustained writes, and no mention of any kind of a minimum useful life.

If you want to burn out your extreme SSD in a year's time by using it in a server, go right ahead. I don't even know the warranty period on this drive, as the Intel site doesn't mention it.

Note: I'm not implying that it doesn't have a warranty. I'm not saying it can't last 5 years like the consumer grade drive. What I'm saying is they are marketing this as a performance drive not as the reliability guaranteed drive.

Bar81
Posts: 261
Joined: Thu Sep 04, 2003 4:19 pm
Location: Dubai

Re: Intel enters fray ...

Post by Bar81 » Mon Sep 08, 2008 8:14 pm

trxman wrote:one very very interesting X-25 M review with analysis of MLC drives big problem (OCZ Core and Co.):
http://www.anandtech.com/cpuchipsets/in ... i=3403&p=1
Precisely what I'm talking about; this is the first review that isn't worthless that I've seen. There's more to SSDs than a couple of benchmarks, the MLC drives (before the Intel) have serious issues. Based upon what Intel has been able to do with MLC, I'm probably going to pick up an SLC Intel drive when they hit. In the meantime, the OCZ guys are replacing my drive, but based upon my experience, that of others and this article, I'm pretty sure I'll have a $300 paperweight and it will be my last purchase from OCZ as they had to have known what they were releasing and did so anyway.

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Mon Sep 08, 2008 8:40 pm

I'm feeling a bit like jumping on the OCZ drive was a premature decision, but what the hell. The 160GB Intel drive won't be out for another quarter or two, so in the meantime I'm still getting better read performance with my 120GB Core V2 than with any other notebook drive.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Re: Intel enters fray ...

Post by dhanson865 » Tue Sep 09, 2008 4:36 am

Bar81 wrote:
trxman wrote:one very very interesting X-25 M review with analysis of MLC drives big problem (OCZ Core and Co.):
http://www.anandtech.com/cpuchipsets/in ... i=3403&p=1
Precisely what I'm talking about; this is the first review that isn't worthless that I've seen. There's more to SSDs than a couple of benchmarks, the MLC drives (before the Intel) have serious issues.
It's interesting that the review covered the Core V1 but not the Core V2. For all we know that issue isn't there at all on the V2.

Bar81
Posts: 261
Joined: Thu Sep 04, 2003 4:19 pm
Location: Dubai

Re: Intel enters fray ...

Post by Bar81 » Tue Sep 09, 2008 6:04 am

dhanson865 wrote:
Bar81 wrote:
trxman wrote:one very very interesting X-25 M review with analysis of MLC drives big problem (OCZ Core and Co.):
http://www.anandtech.com/cpuchipsets/in ... i=3403&p=1
Precisely what I'm talking about; this is the first review that isn't worthless that I've seen. There's more to SSDs than a couple of benchmarks, the MLC drives (before the Intel) have serious issues.
It's interesting that the review covered the Core V1 but not the Core V2. For all we know that issue isn't there at all on the V2.
Based on reports the Core V2 uses a different/updated JMicron controller and has, essentially, the same warts. So far, the only MLC anyone should touch is the Intel and even then it can't overcome what it is. In this case, the old adage proves true, you get what you pay for and in the case of the Core, that's a lot of crap.

CA_Steve
Moderator
Posts: 7650
Joined: Thu Oct 06, 2005 4:36 am
Location: St. Louis, MO

Post by CA_Steve » Tue Sep 09, 2008 6:15 am


dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Re: Intel enters fray ...

Post by dhanson865 » Tue Sep 09, 2008 10:42 am

Bar81 wrote:Based on reports the Core V2 uses a different/updated JMicron controller and has, essentially, the same warts. So far, the only MLC anyone should touch is the Intel and even then it can't overcome what it is. In this case, the old adage proves true, you get what you pay for and in the case of the Core, that's a lot of crap.
The buyers remorse is strong with this one.

But beware of the dark side. Anger, fear, aggression. The dark side of the Force, are they. Easily they flow, quick to join you in a fight. If once you start down the dark path, forever will it dominate your destiny. Consume you, it will…

All I can say is that write speed issues with MLC flash have been known for years. Well before OCZ started selling SSDs. If you didn't know about it when you bought your first SSD then it may have been a harsh awakening but the truth was out there. You just need to open your eyes to it.

Early adopters of any tech pay a penalty, usually it is just price but sometimes the penalties vary widely. Caveat Emptor.

Bar81
Posts: 261
Joined: Thu Sep 04, 2003 4:19 pm
Location: Dubai

Re: Intel enters fray ...

Post by Bar81 » Tue Sep 09, 2008 11:10 am

dhanson865 wrote:
Bar81 wrote:Based on reports the Core V2 uses a different/updated JMicron controller and has, essentially, the same warts. So far, the only MLC anyone should touch is the Intel and even then it can't overcome what it is. In this case, the old adage proves true, you get what you pay for and in the case of the Core, that's a lot of crap.
The buyers remorse is strong with this one.

But beware of the dark side. Anger, fear, aggression. The dark side of the Force, are they. Easily they flow, quick to join you in a fight. If once you start down the dark path, forever will it dominate your destiny. Consume you, it will…

All I can say is that write speed issues with MLC flash have been known for years. Well before OCZ started selling SSDs. If you didn't know about it when you bought your first SSD then it may have been a harsh awakening but the truth was out there. You just need to open your eyes to it.

Early adopters of any tech pay a penalty, usually it is just price but sometimes the penalties vary widely. Caveat Emptor.
Stuff it. Of course I have buyer's remorse: I bought something that works worse than any drive I've ever used. In any case, OCZ is a stand-up company and is making it right. I will continue to buy their products, assuming everything happens as I've been promised.

Post Reply