Is there a problem with head parks on WD Green HDDs?

Silencing hard drives, optical drives and other storage devices

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

watchtower
Posts: 7
Joined: Mon Dec 22, 2008 2:33 pm
Location: US

Which number are you talking about? Current or Raw/Data?

Post by watchtower » Mon Dec 22, 2008 2:39 pm

I have this drive and when I check SpeedFan it has the Load Cycle Count "Value" as 300 and the "Raw" value at 121445.

When I check the count using HD Tune, it has the "Value" as 300 and the "Data" (referenced as Raw with SpeedFan) at 121445.

What number are you referencing? What's the difference? If it's the larger number, then folks upthread who said they had low load cycle counts may be looking at the wrong number without realizing that their drives may be dying.

Mankey
Posts: 173
Joined: Tue Apr 20, 2004 4:39 pm

Post by Mankey » Mon Dec 22, 2008 4:36 pm

Mine is reporting a raw value at 1,342,596. Can this be possible?

watchtower
Posts: 7
Joined: Mon Dec 22, 2008 2:33 pm
Location: US

Post by watchtower » Mon Dec 22, 2008 4:52 pm

Is this problem exclusive to the green power Western Digital product line, or does it also occur with the black and blue product lines?

MikeC
Site Admin
Posts: 12285
Joined: Sun Aug 11, 2002 3:26 pm
Location: Vancouver, BC, Canada
Contact:

Post by MikeC » Mon Dec 22, 2008 5:23 pm

watchtower wrote:Is this problem exclusive to the green power Western Digital product line, or does it also occur with the black and blue product lines?
afaik, it seems unclear that this is a problem even in the GP drives. Yes, it seems to be happening, but does it really cause early failure? No one has yet reported a failure related to this, right?

watchtower
Posts: 7
Joined: Mon Dec 22, 2008 2:33 pm
Location: US

Post by watchtower » Mon Dec 22, 2008 5:35 pm

MikeC wrote:
watchtower wrote:Is this problem exclusive to the green power Western Digital product line, or does it also occur with the black and blue product lines?
afaik, it seems unclear that this is a problem even in the GP drives. Yes, it seems to be happening, but does it really cause early failure? No one has yet reported a failure related to this, right?
These drives may be failing, but folks may not know it's due to a high "load cycle count". I only came across this problem recently right when I was prepared to buy another green WD drive. WD doesn't appear to be acknowledging this. There are hundreds of reviews on New Egg that don't make any reference to this.

I'm surprised at how little known this issue is considering the popularity of this drive.

MikeC
Site Admin
Posts: 12285
Joined: Sun Aug 11, 2002 3:26 pm
Location: Vancouver, BC, Canada
Contact:

Post by MikeC » Mon Dec 22, 2008 5:53 pm

watchtower wrote:These drives may be failing, but folks may not know it's due to a high "load cycle count".
But can you or anyone confirm that any drives failed due specifically to high "load cycle count"? IS there really a cause and effect?

watchtower
Posts: 7
Joined: Mon Dec 22, 2008 2:33 pm
Location: US

Post by watchtower » Mon Dec 22, 2008 6:33 pm

MikeC wrote:
watchtower wrote:These drives may be failing, but folks may not know it's due to a high "load cycle count".
But can you or anyone confirm that any drives failed due specifically to high "load cycle count"? IS there really a cause and effect?
No, I haven't seen any claims of it failing due to a high "load cycle count", but take a look at Western Digital's official specs for this drive:
http://www.wdc.com/en/library/sata/2879-701229.pdf

Their "Reliability/Data Integrity" spec states:

Code: Select all

Load/unload cycles: 300,000
WD is telling us what the drive's "reliability" limit is. I have 5 drives and the 1TB Western Digital green is the only one with a load cycle count as high as 120,000. It's incrementing by 50+ every hour.
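
As a rough sanity check (assuming the rate holds at about 50 cycles per hour): 300,000 / 50 = 6,000 power-on hours, i.e. roughly 250 days of continuous operation before the drive hits WD's own figure.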

My 6 year old IBM drive has the second highest load cycle count of 2500. My other 3 drives have load cycle counts of 0.

MikeC
Site Admin
Posts: 12285
Joined: Sun Aug 11, 2002 3:26 pm
Location: Vancouver, BC, Canada
Contact:

Post by MikeC » Mon Dec 22, 2008 7:41 pm

watchtower wrote:No, I haven't seen any claims of it failing due to a high "load cycle count"....
Then your comment in your first post...
watchtower wrote:WD is doing their best to just let the drives die instead of warning the customers about the issue.
...is erroneous, inflammatory and unfair to WD.

Here's what I will do: Forward the text of your first post to my contacts at WD. We'll see how they respond.

watchtower
Posts: 7
Joined: Mon Dec 22, 2008 2:33 pm
Location: US

Post by watchtower » Mon Dec 22, 2008 10:19 pm

MikeC wrote:
watchtower wrote:No, I haven't seen any claims of it failing due to a high "load cycle count"....
Then your comment in your first post...
watchtower wrote:WD is doing their best to just let the drives die instead of warning the customers about the issue.
...is erroneous, inflammatory and unfair to WD.

Here's what I will do: Forward the text of your first post to my contacts at WD. We'll see how they respond.
I didn't type that second quote. You're mistaking me for the OP.

MikeC
Site Admin
Posts: 12285
Joined: Sun Aug 11, 2002 3:26 pm
Location: Vancouver, BC, Canada
Contact:

Post by MikeC » Mon Dec 22, 2008 11:45 pm

watchtower wrote:I didn't type that second quote. You're mistaking me for the OP.
oh, sorry....

watchtower
Posts: 7
Joined: Mon Dec 22, 2008 2:33 pm
Location: US

Post by watchtower » Tue Dec 23, 2008 12:18 am

MikeC wrote:
watchtower wrote:I didn't type that second quote. You're mistaking me for the OP.
oh, sorry....
No problem Mike.

I tried out the wdidle3.exe utility and it seems to have frozen the load cycle count completely. It hasn't incremented once in hours.

By default it's set to 8 seconds (8000 milliseconds).

I first changed it to 25 seconds:

Code: Select all

WDIDLE3 /S250
but it made no difference.

I then tried 25.5 seconds:

Code: Select all

WDIDLE3 /S255
and that seemed to work. I was actually planning to disable Intellipark completely:

Code: Select all

WDIDLE3 /D
but it doesn't look like I have to.

The thing is, I don't know if this really "fixed" anything. It's possible that all I did was stop the drive from "reporting" load cycles, but allowed the problem to remain silently.

By the way, I'm on a Windows system and the 1TB WD10EACS drive has 3024 power on hours.
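
For anyone who wants to verify whether the counter is really frozen rather than just hidden, one simple check (a sketch, assuming smartmontools is installed; it runs on both Windows and Linux, and the device name below is only an example) is to note the raw value of attribute 193, leave the drive idle for an hour or two, and read it again:

Code: Select all

# print the SMART attribute table; look at the RAW_VALUE column for ID 193 (Load_Cycle_Count)
smartctl -A /dev/sda
If the raw value hasn't moved after an idle period, the parking really has stopped rather than merely gone unreported.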

dukla2000
*Lifetime Patron*
Posts: 1465
Joined: Sun Mar 09, 2003 12:27 pm
Location: Reading.England.EU

Post by dukla2000 » Tue Dec 23, 2008 1:21 am

MikeC wrote:
watchtower wrote:These drives may be failing, but folks may not know it's due to a high "load cycle count".
But can you or anyone confirm that any drives failed due specifically to high "load cycle count"? IS there really a cause and effect?
Would be really hard to prove the cause/effect if/when you land up with a duff drive?

In my case I am sitting on "1 Currently unreadable (pending) sectors". Now as best as I can interpret from Google this sector should relocate itself when I try and read it, but despite running the short & long SMART self tests, and a full OS check (fsck), I can't shift the thing. Not sure if the log below indicates the sector number in question - it would be interesting if it is close to the parking zone.

Code: Select all

Error 1 occurred at disk power-on lifetime: 6093 hours (253 days + 21 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 01 44 00 38 40

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 08 00 3f 00 38 32 08  21d+09:43:30.995  READ FPDMA QUEUED
  60 08 00 47 00 34 32 08  21d+09:43:30.995  READ FPDMA QUEUED
  60 08 00 3f 00 34 32 08  21d+09:43:30.991  READ FPDMA QUEUED
  60 08 00 47 00 30 32 08  21d+09:43:30.990  READ FPDMA QUEUED
  60 08 00 3f 00 30 32 08  21d+09:43:30.978  READ FPDMA QUEUED

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Conveyance offline  Completed without error       00%      6352         -
# 2  Extended offline    Completed without error       00%      6148         -
# 3  Short offline       Completed without error       00%        45         -

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Tue Dec 23, 2008 1:35 am

watchtower wrote:The thing is, I don't know if this really "fixed" anything. It's possible that all I did was stop the drive from "reporting" load cycles, but allowed the problem to remain silently.
Sorry to sound blunt, but that sounds like pure FUD in the actual sense of the term. Why would anyone write a program to allow a device to lie about sensor data?

As Mike mentioned before, there's been no evidence that the high load/unload cycle counts are the true values (whether the raw value or the normalized value is the meaningful number, as watchtower pointed out). Furthermore, nobody has confirmed whether the data provided by SMART does or should correspond to the specs in the datasheets. Is SMART counting Intellipark cycles, when the datasheet lists full load/unload cycles? As far as I know, WD is the only company with this type of technology (and I believe it's only on the GP drives), so it's not unreasonable to hear about erroneous readings.

If you read the reviews on NewEgg, there's only one "1 egg" review that I could find that wasn't a DOA or early death. My experience is that with decent hardware in proper operating conditions, a drive either dies within 3 months or not until after 3 years. Pretty much every one of the poor reviews on NewEgg fits this category so far (except the one mentioned).


For what it's worth, I've hammered on all 10 of my GP drives in my server a number of times. I found out a few months ago that my 3ware card had a faulty BIOS chip which was causing drives to fall out of the array on restarts. Understandably, this led to several rebuilds in a matter of weeks (each taking over a day) on the eight WD10EACS drives. I also had issues initially with poor-quality SATA cables connecting the WD5000AACS drives to the system, again leading to rebuilds of the system RAID1 array. *touch wood* I haven't seen any cause for concern with these drives (SMART data, temperatures, performance), and the only thing I've done with the firmware is enable TLER on all of them.

matt_garman
*Lifetime Patron*
Posts: 541
Joined: Sun Jan 04, 2004 11:35 am
Location: Chicago, Ill., USA
Contact:

Post by matt_garman » Wed Dec 24, 2008 6:46 am

FYI, I sent an email to WD asking for details on this matter. What I get out of their response is that the SMART attribute Load_Cycle_Count is indeed reflective of full head parks. But they also say that the drives' operating specs (300k/600k parks) are guidelines, and that the drives "should" function well beyond those numbers. Here's my email and their reply; maybe someone else will take away a different interpretation.

My email:
Hello,

Looking at the SMART data for one of my many GreenPower drives, I see that the SMART attribute named "Load_Cycle_Count" has a very high value, 124002 (with Power_On_Hours = 4870, averaging about 25 Load_Cycle_Counts an hour).

The specifications for the consumer GP drives say they are rated to 300,000 load/unload cycles, and 600,000 for the enterprise/RE2 drives.

What I want to know is: is the "Load_Cycle_Count" SMART attribute the same as a load/unload cycle (as defined by the drive's specifications)?

In other words, should I be concerned that the Load_Cycle_Count attribute is growing so quickly?

At this rate, it will take less than three years for the Load_Cycle_Count to reach 600,000.

Thank you very much,
Matt
And WD's response:
Dear Matthew,

Thank you for contacting Western Digital Customer Service and Support.

The Load Cycle Count keeps track of how many times the read/write head on the unit takes off / parks from the initial point, to access the data on the drive. This happens each time the unit is turned on/off. An estimated count of 600,000 indicates a life span of 3-5 years on average. However, the units are designed to work well beyond that number. This number is provided as an average threshold of where we know that the unit is reliable and under perfect working conditions. Going past this, does not mean that the unit will fail to work. The unit should continue to work with no issues. However, beyond this number, it is not fully certain that it will continue without any issues.

Sincerely,
Johnny H.

watchtower
Posts: 7
Joined: Mon Dec 22, 2008 2:33 pm
Location: US

Post by watchtower » Wed Dec 24, 2008 10:18 am

watchtower wrote:I tried out the wdidle3.exe utility and it seems to have frozen the load cycle count completely. It hasn't incremented once in hours.
I'm quoting myself above. I was wrong. The wdidle3.exe utility fixed nothing. The drive has gone back to incrementing the load cycle count by one per minute.

Monkeh16
Posts: 507
Joined: Sun May 04, 2008 2:57 pm
Location: England

Post by Monkeh16 » Wed Dec 24, 2008 10:49 am

watchtower wrote:By default it's set to 8 seconds (8000 milliseconds).

I first changed it to 25 seconds:

Code: Select all

WDIDLE3 /S250
but it made no difference.

I then tried 25.5 seconds:

Code: Select all

WDIDLE3 /S255
and that seemed to work. I was actually planning to disable Intellipark completely:

Code: Select all

WDIDLE3 /D
but it doesn't look like I have to.
I think perhaps wdidle3.exe is poorly documented, or you're not quite understanding what it does. It should be changing the APM setting in the drive's firmware: '255' would be off, 250 would be low power saving (spin-down not permitted, head unloading probably still allowed), and the default setting of 8 would be very aggressive power management (full spindown and head unload after a handful of seconds). The -B option to hdparm on Linux systems should have the same effect, but be volatile.
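
If anyone wants to test that theory on a Linux box, here's a minimal sketch (assuming hdparm is installed and the drive actually honours the APM command, which is exactly what's in doubt here; /dev/sdb is just a placeholder):

Code: Select all

# read the current APM level, if the drive reports one
hdparm -B /dev/sdb

# 255 disables APM entirely; 254 means maximum performance without spin-down
hdparm -B 255 /dev/sdb
As noted, the setting is volatile, so it would have to be reapplied after every power cycle (e.g. from a boot script), and if the drive doesn't support APM, hdparm will simply say so.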

Tamas
Posts: 117
Joined: Fri Dec 17, 2004 11:53 pm
Location: Budapest, Hungary

Post by Tamas » Thu Dec 25, 2008 7:07 am

Here is the SMART information from my first gen WD GP: WD10EACS
I'm using it as a second data storage drive in a WinXP/Vista environment.

Actually these SMART values contain raw information. The real question is what they mean and how the manufacturer configured these counters. (Does the Load/Unload Cycle Count increase by 25 or by 1 per head unload?)

Somebody wrote that the Load/Unload Cycle Count increases by 50 every hour. If this really means that the hard disk does 50 head unloads an hour (1200/day), then this is a badly configured drive with faulty firmware.
This number can be checked by power measurements on the drive.

What's weird to me is that some users' counter values seem to be normal, while lots of measurements show values that are unusually high.

I know that Windows likes to use the hard drives and cache almost constantly (it's hard to get even my notebook drive to stop when I'm reading just one page), so maybe those systems are rarely affected by this problem.

If a GP drive is configured to do a head unload after 8 seconds of inactivity, and a Linux system does nothing for 0-8 seconds but accesses the disk regularly at intervals of 8 seconds to 2 minutes, it will cause at least 30 head unloads per hour, which will greatly decrease the life expectancy of the drive in a 24/7 environment (at least 260,000 unloads per year in this case).

I'll try out this wdidle program, I've already tried to disable this power management function with Hitachi Feature Tool without any success.

//////
I've just checked my drive with this program. My GP was configured to 8000 ms as the factory default. I set the drive first to 25.5 sec, then disabled this function. I'll check the Load/Unload counter after several days.

Tamas
Posts: 117
Joined: Fri Dec 17, 2004 11:53 pm
Location: Budapest, Hungary

Post by Tamas » Thu Dec 25, 2008 10:35 pm

After several hours of usage, the Load/Unload Cycle Counter has increased by just two (there were two power on/off cycles), which is absolutely normal for an APM-disabled HDD.

This WDIdle3 program really works, and can switch off APM on WD Green Power. :)

m0002a
Posts: 2831
Joined: Wed Feb 04, 2004 2:12 am
Location: USA

Post by m0002a » Thu Dec 25, 2008 10:47 pm

I finally got a copy of the WDIdle3 program from WD (after much bantering back and forth) and tried to run it in the DOS command window on Win XP and got a bunch of errors:

C:\WDIDLE~1.00>wdidle3 /S255
WDIDLE3 Version 1.00
Copyright (C) 2005-2008 Western Digital Corp.
Configure Idle3.
CauseWay DOS Extender v3.49 Copyright 1992-99 Michael Devore.
All rights reserved.

Exception: 0D, Error code: 0000

EAX=02AC0060 EBX=02AE0060 ECX=00000001 EDX=00002AC0 ESI=02AC0060
EDI=02AE0094 EBP=02A9F994 ESP=02A9F964 EIP=02A2F604 EFL=00013216

CS=027F-FD5E0000 DS=0287-FD5E0000 ES=0287-FD5E0000
FS=0000-xxxxxxxx GS=028F-xxxxxxxx SS=0287-FD5E0000

CR0=00000000 CR2=00000000 CR3=00000000 TR=0000

Info flags=00008018

Writing CW.ERR file....

CauseWay error 09 : Unrecoverable exception. Program terminated.

I tried a DOS boot CD, but it does not support SATA drives. Can anyone tell me how they successfully ran the program (WD will not support it)? Just for the record, my machine is primarily used for Linux, but I have a dual-boot option to Win XP.

Tamas
Posts: 117
Joined: Fri Dec 17, 2004 11:53 pm
Location: Budapest, Hungary

Post by Tamas » Fri Dec 26, 2008 8:51 am

m0002a wrote: I tried a DOS boot CD, but it does not support SATA drives. Can anyone help me as to how they successfully ran the program (WD will not support it). Just for the record, my machine is primarily used for Linux, but I have a dual-boot option to Win XP.
Integrate WDIdle3.exe into the DOS boot ISO (UltraISO, etc.).
In the BIOS, try to find a SATA option for compatibility/PATA mode.

whiic
Posts: 575
Joined: Wed Sep 06, 2006 11:48 pm
Location: Finland

Post by whiic » Sat Dec 27, 2008 6:23 am

Didn't notice this thread until it got many posts. I'll now comment on several posts from the past month.

zds: "default settings it parks the heads after 8 seconds of inactivity. What this means depends of your OS and usage pattern, but in these low-usage media server machines running Linux it means the drive will reach it's designed lifetime total or parkings in 200-300 days."

200-300 days roughly corresponds to one park every minute. It's not a problem with Linux only: SpeedFan polls SMART at 1-minute intervals, so many SPCR silencing fanatics using software-controlled variable-rpm fans suffer the same issue within Windows.

The HDD has a warranty period of 3 years and a reported service life of 5 years. Over 5 years, these drives can accumulate 2.6 million unload cycles at a rate of 1/minute.
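
(For the record, that's 1 cycle/minute × 60 × 24 × 365 × 5 ≈ 2.6 million cycles, assuming the drive is powered and being polled around the clock for the full five years.)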

zds: "If you drive has a high value, contact WD and ask for the wdidle3.exe."

A thread at StorageReview suggests it can't be disabled, only set to a maximum of around 30 seconds. If you poll the drive for SMART values once every 60 seconds, changing the time-out will have no effect at all and you'd still end up with 2.6 million cycles over its service life (if it doesn't die or get retired before that).

It's definitely a concern but not necessarily alarming. We should ALWAYS keep back-ups. And an aggressive power-saving unload time-out doesn't pose an imminent threat (as in threatening to kill both the drive AND its back-up at the same time).

Whatever WD states as unload durability is the minimum number of unloads that could cause enough wear to pose a risk of failure. It's not a date of death, but more like the equivalent of a man reaching his early 30s: it obviously increases the likelihood of a heart attack, but it certainly doesn't guarantee death in the near future.

Strid: "My GP doesn't seem to suffer from said problem. It's been running for about three months now."

You posted your SMART values (I'll quote only a few lines of them):

Code: Select all

  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       274 
...
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       18 
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       267
Well, 267+18 isn't that much bigger than 274: only 11 times has the head unloaded without stopping platters (at least if we trust the SMART values).

Your drive is the newer 3-platter variant, WD10EACS-00D6B0. I have one of those too. It has

Code: Select all

  4 Start_Stop_Count        38
...
192 Power-Off_Retract_Count  8
193 Load_Cycle_Count        38
Your power-on hours are 967, mine are 2527. (Meaning I have not power-cycled my drive as often; I just run it 24/7.) The only registered unloads are when the platters are spun down.

I have two older 00ZJB0-variant 1TB GreenPowers as well. One of them is connected via USB so I can't post its values (it's the older of the two, with more cycles). The one I can read SMART from without going through the extra trouble of opening the enclosure and the case, and powering down the system to do it, has the following values:
Power on hours: 6124
Power cycles: 113
Start/stop cycles: 125
Power-off retract count: 29
Load/unload cycle count: 292959

The older one is already past the "deadline" in WD's specs (at least I've heard they list 300,000 cycles). The newer of the two old ones still has a normalized value of 103, meaning it will read 100 when it reaches the 300,000-cycle minimum for sustainable unloads. Value 0 will be reached at 600,000 cycles.
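
(A rough cross-check, assuming the normalized value simply starts at 200 and loses one point per 3,000 raw cycles: 200 − 97 (that's 292,959 / 3,000, rounded down) = 103, which matches the reported value.)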

No bad sectors on any of my WD10EACS. No corruption issue, no odd clicking (not counting the feature itself as "odd").

Either the "fixed" the unloading in new version of GP, or it's still going on, but it's hidden from SMART values. I know there's soft and hard unloads. "Load/unload cycle count" is definitely the soft one. "Power-off retract count" is the hard one. And even when you pwoer down the computer, they will be stopped with a command, before PSU cut's the electricity. Hard unloads occur on power outage or if power is otherwise cut suddenly. Cutting the power during boot-up up, during system freeze, etc. Though, at system freeze the HDD would notice what appears to be "idle time" and unload without external command, making it "soft".

I don't consider the situation alarming. If there's a higher-than-normal number of drive deaths, that's bad, but I'm hesitant to believe there is. Because people are aware of and worried about these SMART values, they tend to make a bigger deal of each GreenPower drive death than of a non-GreenPower drive death, creating a problem that doesn't actually exist.
Last edited by whiic on Sat Dec 27, 2008 12:10 pm, edited 1 time in total.

whiic
Posts: 575
Joined: Wed Sep 06, 2006 11:48 pm
Location: Finland

Post by whiic » Sat Dec 27, 2008 11:38 am

Strid: "Does anyone know what Spin_Up_Time stands for? Seems like most if not all of us have a high number in this parameter."

KlaymenDK: "In other words, it's perfectly normal to have a value of a couple of thousand, as it represents the drive's boot-up time -- only, it should not fluctuate too much over time."

It's the time from power-on until either operational rpm is reached, or until rpm is reached and the heads are loaded onto the media. It's manufacturer-specific. It might be milliseconds... or some other unit, not necessarily one used elsewhere.

If you power down an HDD and power it up again before the platters have stopped, you'll usually get a very low raw value for spin-up time when polled, before the next power cycle replaces the previous one.

Nothing to worry about as long as the values (not raw data) are high (usually around 100).

Nick Geraedts: "There's also the fact that head "parking" has several stages involved. There are soft parks, hard parks, and stages inbetween. My guess is that the rated values on WD's website are for hard parks (the kind that happens when the drive is turned off), while Intellipark does a soft park - just moving it away from the disk."

zds: "If this is the case, then WD should come up, say "your drives are fine, we just goofed up and track soft parks in hard park category in SMART data"."

"Load/unload count" is the attribute that keeps climbing rapidly and that is the soft type of unload. IF it was the "hard" type of unload, what would "Power-off retract count" be then?

I don't think what Nick said is feasible either. 300,000 is most likely the number of soft parks, with hard parks rated lower. Luckily, it's the minimum number of parks that could cause trouble, not the average number of parks before failure. With relatively high likelihood, they will run without problems well past a million cycles.

It was not that long ago that IBM/Hitachi drives (the first to bring load/unload technology to desktop-sized drives) were rated for only 50,000 soft unloads (whereas contact start/stop drives were rated for the same 50,000 start/stops, even though a start/stop is supposedly more harmful than a soft unload). And the Fujitsu laptop MHW20xxAT series is rated for 600,000 unloads and 20,000 "emergency retracts".

In my opinion there is no_fucking_way that WD's rating would be for 300,000 emergency retracts. (That would place soft retracts around 10 million cycles if the relationship between soft and hard cycle counts were the same.) But also, manufacturers may be more conservative in rating unload-based HDDs than old-style contact start/stop drives. Unload technology is newer and its long-term reliability is less known (but in theory it should increase reliability).

Dukla2000: "In my case I am sitting on " 1 Currently unreadable (pending) sectors". Now as best as I can interpret from Google this sector should relocate itself when I try and read it, but despite running the short & long SMART self tests, and a full OS check (fsck) I cant shift the thing."

Have you tried the "SMART offline data collection routine"? It's a slightly different offline scan... it's handled even more autonomously by the HDD itself. Other SMART offline scans require the software to be left running or the scan will terminate. Also, running other tests in HDDScan causes high CPU load, whereas the "SMART offline data collection routine" runs even if HDDScan is closed, with no change in CPU utilization. It just works silently in the background. Just start it and check back on it the next day. It'll take AT LEAST as long as the "SMART extended self-test in offline mode" to complete, and you will not be notified when it's done. It may even end up in an endless loop and start from the beginning when complete...

___


If WD's IntelliPark can be configured via the APM feature (the same feature that controls the power-saving aggressiveness of Hitachi drives), then using HDDScan in Windows is a good way to do it. :)
There's no reason to boot from a DOS diskette and hassle with DOS drivers for the disk controller.

The newest of my three WDs doesn't increase its unload count rapidly (around 60 cycles per hour on the older ones if I don't shut down SpeedFan), which may be as simple as a different default APM setting. The question is: if I change the value, will it be volatile? With Hitachis it's permanent, and with Hitachis even the spindown counter is permanent... but with some other drives the spindown counter is erased when power-cycled. I wonder how it is with WD & APM... I might try it... but I'd need to take out the drive in question to listen to it unload and see if the change makes a difference. My newest WD doesn't increase the count, but I cannot be sure whether it unloads (as my system has too many HDDs making clicks at pseudo-random intervals).

If only WD published APM values and corresponding time-out values for head unload and platter spindown like Hitachi has done with their APM implementation. It certainly wouldn't hurt them to release such information. And if they want to change the aggressiveness of power saving, just set a different default value for APM.

___


EDIT: I just listened to the newest GP. If I leave the drive idle for ~10 seconds or longer, then poll the SMART values, it will click, meaning it unloaded. But the unload count doesn't increase! So the newer GreenPowers don't show the real number of soft unloads, probably due to the general panic caused by the previous revision being too honest about it.

I like honesty, but too bad being honest doesn't pay off like it should. People RMA drives for SMART being too honest, even if it's just:
RAW data error rate (very low values on Seagates)
Hardware ECC (Samsungs, probably Seagates as well)
Unload cycle count (rapidly increasing on GreenPowers)
Bad sectors (reallocatable)

Ignorance is bliss. Not knowing that there are a few bad sectors which have been reallocated without causing a bluescreen, data corruption, or even a SMART value change doesn't do you much harm.

But knowing that some manufacturers may hide bad sectors and report "0" as long as spare sectors remain isn't ignorance... you may not know for certain that there are bad sectors, but you also can't live in peace, because you'll suspect good drives of being bad as well. True ignorance would be ignorance of the possibility that SMART values may not be real.

And apparently HDDScan reports that these two WDs of mine don't support the APM feature. I can't write to the register with HDDScan as it's "unsupported". Maybe some utility can force a write to it and cause a change.

Ironically, it reports that the Samsung Spinpoint F1 does. And that is a CSS-based drive, so there's no possibility of extra power-saving modes.

J400uk
Posts: 2
Joined: Sun Dec 28, 2008 10:35 am
Location: London, UK

Post by J400uk » Sun Dec 28, 2008 10:38 am

Hi, I'm new here. I had a WD10EACS for a few days in November and had installed it in my Linux-based Buffalo LinkStation Pro NAS.

I noticed that every few seconds it would make a clicking noise, which sounded like the head was parking (?).

I found this noise very annoying as I prefer my stuff to operate in silence. I put the hard drive temporarily in my Vista-based PC and found it did not click.

It has now been sent back and I replaced it with a 750GB Seagate to go in the NAS, which runs nice and silent with no clicks.

Is the problem I describe the same as the one being discussed in this thread?

whiic
Posts: 575
Joined: Wed Sep 06, 2006 11:48 pm
Location: Finland

Post by whiic » Sun Dec 28, 2008 12:20 pm

J400uk: "Is the problem I describe the same as the one been disccussed in this thread?"

If the drive was recognized and stayed accessible without occasional freezing or slowdowns, then with very high probability it was the feature discussed in this thread.

I doubt the Seagate is quiet, let alone silent, but I do admit the load/unloads are more noticeable than AAM-enabled seeks.

dukla2000
*Lifetime Patron*
Posts: 1465
Joined: Sun Mar 09, 2003 12:27 pm
Location: Reading.England.EU

Post by dukla2000 » Tue Dec 30, 2008 11:13 am

whiic wrote:Dukla2000: "In my case I am sitting on " 1 Currently unreadable (pending) sectors". Now as best as I can interpret from Google this sector should relocate itself when I try and read it, but despite running the short & long SMART self tests, and a full OS check (fsck) I cant shift the thing."

Have you tried the "SMART offline data collection routine"? ...
I have now (I think - I did
smartctl -o on -t offline /dev/sdb
)

The SMART report has changed

Code: Select all

smartctl -a /dev/sdb
...
General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.    
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever  
                                        been run.                               
Total time to complete Offline                                                  
data collection:                 (13980) seconds.                               
...
Vendor Specific SMART Attributes with Thresholds:  
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
...
193 Load_Cycle_Count        0x0032   001   001   000    Old_age   Always       -       616065   !!! going for a world record on this one!
194 Temperature_Celsius     0x0022   125   107   000    Old_age   Always       -       22       
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0        
197 Current_Pending_Sector  0x0012   200   200   000    Old_age   Always       -       1        
...
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Conveyance offline  Completed without error       00%      6352         -
# 2  Extended offline    Completed without error       00%      6148         -
# 3  Short offline       Completed without error       00%        45         -
...
but note the pending sector is still there, along with the message in the warning log every 30 minutes.

whiic
Posts: 575
Joined: Wed Sep 06, 2006 11:48 pm
Location: Finland

Post by whiic » Tue Dec 30, 2008 3:11 pm

An offline-reallocatable sector could probably be reallocated if the offline routine manages to read the sector after retries: it would then copy the data to a spare sector and remap the dead sector's LBA to the spare sector holding the copy.

But if retries don't work, the offline routine won't force reallocation of the sector, as the HDD doesn't know whether there's some very important data on it. You need to either copy the data off the drive and wipe it with a zero-fill pass, or check the LBA of the bad sector and wipe that single sector. If you overwrite that sector, the HDD will know the old content is worthless (because you are overwriting it), write the new data directly to a spare sector instead of the pending sector, and then retire the pending sector as unusable (meaning you can't even attempt data recovery of its contents).

So, do you have the LBA of the bad sector? It could be a "software" bad sector, meaning a power outage occurred while it was being written, resulting in a mismatched checksum on a partially written sector. In that case, the HDD might actually abort the reallocation, attempt to write to the sector, read it back, and (only if the re-read is successful) return the "faulty" sector to normal use. Anyway, whether the drive reallocates the sector or returns it to use (that decision is made by the HDD itself), you just need to overwrite it.
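
On a Linux box, a minimal sketch of that single-sector overwrite (the device and LBA below are purely hypothetical examples; double-check both against the smartctl error log first, because this destroys whatever was in that sector):

Code: Select all

# find the failing LBA in the SMART error log
smartctl -a /dev/sdb

# overwrite one 512-byte sector at the (hypothetical) LBA 12345678
dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=12345678

# Current_Pending_Sector should then drop back to 0
smartctl -A /dev/sdb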

deadfones
Posts: 18
Joined: Thu Mar 29, 2007 2:04 pm
Location: Near Portland, OR

Post by deadfones » Wed Dec 31, 2008 1:32 am

whiic wrote: EDIT: I just listened to the newest GP. If I leave the drive idle for ~10 seconds or longer, then poll SMART values, it will click, meaning it unloaded. But unload count doesn't increase! So the newer GreenPowers don't show the real number of soft unloads, probably due to general panic caused by later revision being too honest with it.
Are you referring to the WD10EADS?

Anyone know how these are affected?

whiic
Posts: 575
Joined: Wed Sep 06, 2006 11:48 pm
Location: Finland

Post by whiic » Wed Dec 31, 2008 4:50 am

"Are you referring to the WD10EADS?"

No, I'm referring to the newest of my three WD10EACS drives. The two older ones are WD10EACS-00ZJB0 and the third is a WD10EACS-00D6B0. The 00D6B0 hides most unloads from the Load/unload cycle count attribute and only increases that count when the HDD is powered on (the value equals the start/stop cycle count).

AndyHamp
Posts: 17
Joined: Thu Oct 16, 2008 2:04 am
Location: Surrey, UK

Post by AndyHamp » Tue Jan 06, 2009 9:42 am

Interesting thread ... I have one of the WD10EACS 00ZJB0 drives in my Thomson Sky HD box.

It works a treat, but should I be worried? Is it worth pulling the drive to look at the SMART counts? Or should I go back to blissful ignorance? :roll:

Andy

whiic
Posts: 575
Joined: Wed Sep 06, 2006 11:48 pm
Location: Finland

Post by whiic » Tue Jan 06, 2009 3:52 pm

PVR boxes rarely have an access pattern that makes the HDD unload and re-load frequently: either you're recording, watching, or doing both at the same time, or the box is idle. There's no background indexing, no system restore points, etc., and there's no SpeedFan polling SMART values once a minute.

Don't worry about a thing. At a rate of 10 load/unload cycles a day, the drive would take 82 years to reach the specified 300,000 cycles. (In a Windows environment with SpeedFan waking the drive up every f'ing minute, it'll reach the same number of cycles in little more than half a year if the computer is left on 24/7.)
