How many Samsung Hard drives have failed on you?

Silencing hard drives, optical drives and other storage devices

How many Samsung Hard drives have failed on you?

1: 4 votes (7%)
2: 2 votes (4%)
3: no votes
4: no votes
More than 4 (post how many): no votes
None: 49 votes (89%)

Total votes: 55

sthayashi
*Lifetime Patron*
Posts: 3214
Joined: Wed Nov 12, 2003 10:06 am
Location: Pittsburgh, PA

How many Samsung Hard drives have failed on you?

Post by sthayashi » Thu May 06, 2004 6:38 am

This poll is an attempt to calculate a statistical failure rate for Samsung drives among SPCR members. Rather than rely on hearsay and speculation, let's get some solid numbers so we can determine the failure rate within SPCR's sample. Please also vote in the "How many Samsung hard drives have you owned?" poll.

Tobias
Posts: 530
Joined: Sun Aug 24, 2003 9:52 am

Post by Tobias » Thu May 06, 2004 2:05 pm

Thought these two threads could be interesting here :)

Here, about 5% of the reported disks had failed...

http://forums.silentpcreview.com/viewto ... highlight=

Here the number of reported dead drives is larger, but the poll isn't linked to any count of drives owned.

http://forums.silentpcreview.com/viewto ... ght=#72831

Jordan
Posts: 557
Joined: Wed Apr 28, 2004 8:21 pm
Location: Scotland, UK

Post by Jordan » Thu May 06, 2004 2:39 pm

I have had 2 running perfectly fine but I've only had them about 6 months.

matt_garman
*Lifetime Patron*
Posts: 541
Joined: Sun Jan 04, 2004 11:35 am
Location: Chicago, Ill., USA
Contact:

Post by matt_garman » Thu May 06, 2004 5:34 pm

Jordan wrote:I have had 2 running perfectly fine but I've only had them about 6 months.
Same here, I'm abstaining from voting because I only got mine within the last month. It's had no problems, but it's barely been used.

It might be more useful to also indicate how long the drive(s) has (have) been in use (regardless of failure).

<shrug>

Matt

sthayashi
*Lifetime Patron*
Posts: 3214
Joined: Wed Nov 12, 2003 10:06 am
Location: Pittsburgh, PA

Post by sthayashi » Thu May 06, 2004 9:48 pm

The problem with measuring how much time it took to reach failure is that it's difficult to get anything meaningful from that. Ultimately, all drives will fail, which is the basic principle behind RAID (Redundant Array of Independent Disks). Drives are more likely to fail when pushed to their temperature limits. Frequent spinning up and spinning down may increase the likelihood of failure. Even the conditions in which they were stored prior to purchase can alter their failure rate.

Since this is a simple survey and not a research experiment, I'm not too worried about the time it took to fail. All I want is a number that is semi-meaningful, even if it's not statistically significant.

At the time of this post (1:45 AM EST), I've counted 2 drive failures out of a total of 27 drives owned. That's a failure rate of 7.4%. That's a bit high, but I suspect the numbers may drop as more people vote, since it's still not statistically significant (if it were 28 drives owned, the failure rate would drop to 7.1%).
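
To show the arithmetic, here is a minimal sketch in Python, using the counts from this post as an example. It's just a back-of-the-envelope tally of the two polls, not anything official:

Code: Select all

# Failure rate across the two SPCR polls: drives failed vs. drives owned.
# Example counts taken from this post; substitute the current poll totals.
drives_failed = 2
drives_owned = 27

print(f"Failure rate: {drives_failed / drives_owned:.1%}")                # 7.4%

# One more drive owned with no new failure dilutes the rate slightly.
print(f"With 28 drives owned: {drives_failed / (drives_owned + 1):.1%}")  # 7.1%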

Jordan
Posts: 557
Joined: Wed Apr 28, 2004 8:21 pm
Location: Scotland, UK

Post by Jordan » Thu May 06, 2004 10:55 pm

I completely forgot, don't ask me how, but through my website (controlpc.co.uk) I have built over 10 PCs with Samsung drives. I always use Samsung drives because they are quiet, pretty fast, and run fairly cool (which especially helps in the SFF PCs I build).

Out of all the drives and my own two I can't report a single failure or even hitch so far.

AZBrandon
Friend of SPCR
Posts: 867
Joined: Sun Mar 21, 2004 5:47 pm
Location: Phoenix, AZ

Post by AZBrandon » Fri May 07, 2004 11:23 am

The only flaw with this poll is for folks who have had more than one Samsung drive. If I've owned 5 and had zero failures, I get one vote for "none". If I've had 5 drives and 1 failed, then I would mark "1", which makes it look like for my one vote, I had 100% failure, but really it was a 20% failure rate (1 out of 5).
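
To put numbers on the difference, here is a small sketch with purely hypothetical voter counts, just to illustrate the point:

Code: Select all

# Hypothetical voters as (drives owned, drives failed), per the 1-of-5 example above.
voters = [(5, 0), (5, 1), (2, 0), (1, 1)]

total_owned = sum(owned for owned, _ in voters)
total_failed = sum(failed for _, failed in voters)

# The per-drive rate needs both polls: total failures over total drives owned.
print(f"Aggregate failure rate: {total_failed / total_owned:.1%}")             # 2/13 -> 15.4%

# Looking only at how many voters reported a failure overstates it badly.
voters_with_failure = sum(1 for _, failed in voters if failed > 0)
print(f"Voters reporting a failure: {voters_with_failure / len(voters):.0%}")  # 50%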

dasman
*Lifetime Patron*
Posts: 485
Joined: Thu Jan 08, 2004 10:59 am
Location: Erie, PA USA

Post by dasman » Fri May 07, 2004 12:28 pm

AZBrandon wrote:The only flaw with this poll is for folks who have had more than one Samsung drive. If I've owned 5 and had zero failures, I get one vote for "none". If I've had 5 drives and 1 failed, then I would mark "1", which makes it look like for my one vote, I had 100% failure, but really it was a 20% failure rate (1 out of 5).
But if you vote in the companion poll (also linked in the initial post above), then it's taken care of. This poll is how many have failed, the other is how many you have owned. People need to vote in both polls to give this any meaning whatsoever.

Dave

luggage
Posts: 70
Joined: Tue Oct 21, 2003 7:48 am
Location: hbg, sweden
Contact:

Post by luggage » Mon May 10, 2004 2:05 pm

DTemp SMART report:

Code: Select all

-------------------------[Device information]-------------------------
Physical drive: 2
Compatiblity: ATA/ATAPI-7 minor version 001Eh
Model: SA]SUNW SP0802^ 0 0 0 0 0 0 0 0 0 0 0 0
Firmware revision: TU100-23
Serial number: 0653Z1VW706410
Disk capacity: 202.56 Gb (424803472 sectors)
Buffer size: 2048 Kb
Identify information CRC: FAILED
Yay 202GB Samsungs! ;)
Still, I liked it, and I'm going to get it replaced with another one as soon as I've backed up as much as I can be bothered with.

Had it since late August, and it's the first drive that has crashed on me in ages (the last one was a Bigfoot or Seagate 1080MB...).
Been lucky, I guess, and not at all like the numerous RMAs my friends have done on any number of Maxtors, WDs, and IBMs.

Jan Kivar
Friend of SPCR
Posts: 1310
Joined: Mon Apr 28, 2003 4:37 am
Location: Finland

Post by Jan Kivar » Mon May 10, 2004 11:06 pm

Is the drive's name/id also corrupted in the BIOS POST screen? If it is, You might want to try it with another ATA cable.

Cheers,

Jan

luggage
Posts: 70
Joined: Tue Oct 21, 2003 7:48 am
Location: hbg, sweden
Contact:

Post by luggage » Tue May 11, 2004 5:21 am

Yes, on every second or third boot or so.
And even if it boots OK, it usually goes "corrupt" during the day. I was also pondering whether it might be the cables, and was going to try a new pair before I RMA'd it, since I was going by the shop anyway for a pack of CD-Rs to do backups.
And after this tip I'll most surely try it :) Thanks.

I guess I'll do another post later to let you know how it turns out.

Are there any other things I should try if the cables turn out OK, btw?
I guess I can try setting up another computer to check that it isn't the Promise controller on the motherboard breaking down, but I don't have another motherboard with UATA support, unfortunately.

Jan Kivar
Friend of SPCR
Posts: 1310
Joined: Mon Apr 28, 2003 4:37 am
Location: Finland

Post by Jan Kivar » Tue May 11, 2004 6:57 am

luggage wrote:Yes, on every second or third boot or so.
And even if it boots OK, it usually goes "corrupt" during the day. I was also pondering whether it might be the cables...
By any chance, are You using rounded cables?
luggage wrote:Are there any other things I should try if the cables turn out OK, btw?
Well, You could download Samsung's Hutil or Hitachi's Feature Tool, and set the ATA speed down to ATA100 (UDMA-5). ATA133 is really stretching the spec.

Also, a good thing to remember is that the master (or only) device should be attached to the end connector. I once had some issues when using rounded cables, ATA133, a mobile rack, and the middle connector of the cable. I got some write errors, but moving the rack to the end connector solved them.

Please do report back if the new cable helps anything.

Cheers,

Jan

luggage
Posts: 70
Joined: Tue Oct 21, 2003 7:48 am
Location: hbg, sweden
Contact:

Post by luggage » Tue May 11, 2004 8:34 am

Jan Kivar wrote:By any chance, are You using rounded cables?
No, I do the origami thing :)
Jan Kivar wrote:
luggage wrote:Are there any other things I should try if the cables turn out OK, btw?
Well, You could download Samsung's Hutil or Hitachi's Feature Tool, and set the ATA speed down to ATA100 (UDMA-5). ATA133 is really stretching the spec.

Also, a good thing to remember is that the master (or only) device should be attached to the end connector. I once had some issues when using rounded cables, ATA133, a mobile rack, and the middle connector of the cable. I got some write errors, but moving the rack to the end connector solved them.

Please do report back if the new cable helps anything.

Cheers,

Jan
It seems to have helped. That is, it works now - so far :) (remove 1 from the poll :) )

I suspect that what might have happened is that I slightly dislodged the cable from the Plextor CD-R writer that sits on the same cable while cleaning heatsinks, and with vibrations and such I got some funky signals. Anyway, the new cable gives a much "firmer" connection, so I hope that won't happen again. (I did check the connection to the Samsung earlier... but not as carefully as the connection to the Plextor :oops: :roll:)

The Samsung (and 'Cuda IV) are running UDMA-5, since the Promise 100 can't handle more.
Having it on the Promise controller also makes it "invisible" to Hutil, unfortunately (it lists as a SCSI controller to Windows... :roll:).
I'll try the Hitachi Feature Tool and see if it can find my drives.
I don't feel like moving it over to the normal IDE channels just to set the AAM, as I can't hear it over my fans anyway.
That'll have to wait until I migrate it over to a new, quieter system sometime in the future.

Btw, the store I bought it from (and that got me the new cable today) said they have had very little trouble with the SpinPoints they've sold. Only four have been returned by customers, and all of them were found to be OK, so they hadn't RMA'd any to Samsung yet. (I don't know how many they had sold, though, but they use SpinPoints as the default in the computers they sell, and the shop is surviving, so...)

[edit] The Hitachi Feature Tool managed to find all three of them :) Great!

MikeC
Site Admin
Posts: 12285
Joined: Sun Aug 11, 2002 3:26 pm
Location: Vancouver, BC, Canada
Contact:

Post by MikeC » Tue May 11, 2004 7:24 pm

Correlating the data between these polls, it looks like 4 failures out of 80 drives. 5% does not seem bad. Oh, but if you add the 4 drives I missed, it gets slightly better: 4.76% ;)

sthayashi
*Lifetime Patron*
Posts: 3214
Joined: Wed Nov 12, 2003 10:06 am
Location: Pittsburgh, PA

Post by sthayashi » Tue May 18, 2004 9:46 pm

According to the latest poll additions, over 102 drives have been used by SPCR members. With only 4 drives having failed, that puts the failure rate at less than 4% :shock: !!

sthayashi
*Lifetime Patron*
Posts: 3214
Joined: Wed Nov 12, 2003 10:06 am
Location: Pittsburgh, PA

Post by sthayashi » Mon Jul 12, 2004 10:30 pm

I'd like to officially change my vote from zero to one. Tonight, one half of my RAID-1 array has died.

This would bump the failure rate back up to 4.9%, which is still damn good in my opinion. Except that luggage voted a failure that turned out not to be one, so I think we're legitimately still at 3.9%.

Anyone else out there who hasn't voted yet?

aphonos
*Lifetime Patron*
Posts: 954
Joined: Thu Jan 30, 2003 1:28 pm
Location: Tennessee, USA

Post by aphonos » Tue Jul 13, 2004 5:29 pm

Just added my 2 failures out of 7 drives (3 of which have not yet been installed for regular/daily use... so do they count?).

sthayashi
*Lifetime Patron*
Posts: 3214
Joined: Wed Nov 12, 2003 10:06 am
Location: Pittsburgh, PA

Post by sthayashi » Tue Jul 13, 2004 9:01 pm

Probably not, but you can post updates as you test them.

With 6 failed drives and 110 drives in use, we're back up to a 5.45% failure rate. I think this poll will level off right about there.

Viperoni
Posts: 161
Joined: Sun Sep 14, 2003 5:04 pm
Location: Brampton, ON
Contact:

Post by Viperoni » Fri Jul 16, 2004 7:07 pm

In general, I rarely see HDs go bad. Out of 100 computers, I honestly doubt 5 develop bad HDs within 6 months.

NeilBlanchard
Moderator
Posts: 7681
Joined: Mon Dec 09, 2002 7:11 pm
Location: Maynard, MA, Eaarth
Contact:

poll amendment

Post by NeilBlanchard » Fri Jul 16, 2004 7:18 pm

Hello:

I have used at least 10 Samsungs (mostly SATAs) and up until yesterday, none had given any trouble. Now, it seems that one (of a RAID 1 pair) has failed in a client's computer, though I'll confirm it tomorrow -- it may just have lost power, or the data cable may have slipped out... :evil:

So, maybe my vote of "None" will get changed to "one"...

dimva
Posts: 57
Joined: Sun Mar 14, 2004 6:01 pm

Post by dimva » Sun Jul 18, 2004 3:11 pm

I'm not going to vote "none" because I know my hard drives will fail soon after. :wink:

Especially with their 50C temperatures. Yeah, I know that's not bad, but it's always better to have lower temperatures.

dasman
*Lifetime Patron*
Posts: 485
Joined: Thu Jan 08, 2004 10:59 am
Location: Erie, PA USA

Post by dasman » Mon Jul 19, 2004 7:22 am

Viperoni wrote:In general, I rarely see HDs go bad. Out of 100 computers, I honestly doubt 5 develop bad HDs within 6 months.
I agree; I'm kinda shocked everyone else considers a 5% failure rate good. I think I've only had two HDs go on me in about 15 years -- that's probably a couple hundred drives -- MFM, RLL, SCSI, and all the flavors of IDE...


Dave
