
Posted: Sun Sep 05, 2010 8:56 am
by m0002a
Vicotnik wrote:If the computer is rebooted, not shut down and then turned on again, then the HDDs continue to spin throughout the reboot cycle. Unless the HDDs are spun down to begin with; I've noticed that my disk server spins up all the HDDs one by one before a reboot.
That is a good point, which is why the load/unload rating is more important. A complete shutdown and cold boot includes a load/unload cycle in addition to a stop/start cycle. A warm reboot also includes a load/unload cycle.

A warm reboot should not put any stress on the electronics due to electrical surges, but could put some "theoretical" stress on the heads/platters during a load/unload cycle, although certainly not anything to worry about if one does not exceed the rated count for load/unload.

Posted: Mon Sep 06, 2010 8:45 am
by Redzo
Monkeh16 wrote: It's called a package manager and a sane design. ie. Not Windows. Windows is pretty much unique in requiring a reboot for every single update.
You have not used Windows in years, I presume? As others have said, Windows 7 does not need a reboot after every update, only critical ones. And sure, Linux is great for some things, but for the majority of users out there it's no good.

Posted: Mon Sep 06, 2010 8:55 am
by m0002a
Redzo wrote:You have not used Windows in years, I presume? As others have said, Windows 7 does not need a reboot after every update, only critical ones. And sure, Linux is great for some things, but for the majority of users out there it's no good.
Even XP no longer requires a reboot for about 3/4 of all updates, including some critical ones. Obviously, it was not always this way, but they have made modifications to the OS.

I would like to use Linux on one of my computers where I only do web browsing (and use it as an application development server), but I find that the Firefox browser is not fully compatible with many of the websites I use. It is probably not a coincidence that one of the sites I need is web-based access to an MS Exchange mail server, which does not work in enhanced mode in Firefox for Linux.

Posted: Mon Sep 06, 2010 10:03 am
by Monkeh16
Redzo wrote:
Monkeh16 wrote: It's called a package manager and a sane design. ie. Not Windows. Windows is pretty much unique in requiring a reboot for every single update.
You have not used Windows in years, I presume? As others have said, Windows 7 does not need a reboot after every update, only critical ones. And sure, Linux is great for some things, but for the majority of users out there it's no good.
Every time I update my Win 7 box, it wants to reboot. I check for updates once a month, which is their release cycle for security patches, which are pretty much the only patches they ever release. You're almost never going to get through a month without them finding and patching another gaping hole (or five) which requires a reboot. Please come again.

Posted: Mon Sep 06, 2010 4:05 pm
by danimal
m0002a wrote:It turns out that WD no longer publishes that spec,
because it doesn't exist, as i stated earlier... btw, you are obviously unaware that the drive i posted has a five-year warranty, and it has no power cycle rating.
m0002a wrote:I don't understand what the warranty has to do with this discussion (you will have to explain that to me).
that doesn't surprise me, given how you told us that "it would be absurd to suppose that one can reboot a computer an infinite number of times without some adverse effect on a hard drive".

you gave us that stupidly impossible scenario, in what i guess was a failed attempt to back up your earlier mistake.

now you can't understand how power cycling is so irrelevant to longevity that it's not excluded from even a five-year warranty :roll: :lol:

Posted: Mon Sep 06, 2010 8:49 pm
by m0002a
danimal wrote:because it doesn't exist, as i stated earlier... btw, you are obviously unaware that the drive i posted has a five-year warranty, and it has no power cycle rating.
There is a load/unload cycle count spec that does exist, and in effect supersedes the start/stop because each start or stop is one load/unload. Also, Seagate (and probably others) still do publish a start/stop cycle spec.

Even if the start/stop spec is not an issue (because one could not reasonably start and stop a computer 100,000 times in 5 years), that does not mean one can start/stop a computer (cold start) indefinitely without some theoretical wear and tear on the drive.

Since one cannot cold start/stop a computer 100,000 times in 5 years, there is no need to mention it in the warranty. The load/unload limit is not mentioned in the warranty either, but they do publish a spec for it.

Furthermore, warranties have little to do with reliability, and everything to do with marketing. Hyundai Motors pioneered the 10-year/100K power-train warranty in the US about 15 years ago, when their cars had serious reliability problems (their reliability has improved significantly since then). Automakers whose cars are considered much more reliable offer shorter warranties, because they can (and people will still buy their cars).
danimal wrote:that doesn't surprise me, given how you told us was that "it would be absurd to suppose that one can reboot a computer an infinite number of times without some adverse affect to a hard drive".

you gave us that stupidly impossible scenario, in what i guess was a failed attempt to back up your earlier mistake.

now you can't understand how power cycling is so irrelevant to longevity that it's not excluded from even a five-year warranty.
I said right up front that on modern disk drives there is no real worry about a reasonable number of reboots. That does not mean there is no wear and tear on a disk drive from a cold reboot, because there is. It means that disk drive engineering has improved significantly, so it is no longer a practical issue (Seagate rates their drives for 100,000 start/stop cycles). But at one time this was a real issue for disk drives.

Again, your understanding of warranties, and what they signify, is in error.

Posted: Tue Sep 07, 2010 9:21 pm
by danimal
m0002a wrote: There is a load/unload cycle count spec that does exist, and in effect supersedes the start/stop because each start or stop is one load/unload.


prove it.

show us the documentation from the hard drive manufacturers that make that claim.
m0002a wrote: Since one cannot cold start/stop a computer 100,000 times in 5 years, there is not need to mention it in the warranty.
since it's physically impossible to "cold stop" a hard drive, your entire line of reasoning has once again fallen apart.

first you acknowledged that there was a voltage spike on startup, then claimed that it was really due to loading/unloading the heads, now a powered-up drive can be "cold stopped".
m0002a wrote:Furthermore, warranties have little to do with reliability, and everything to do with marketing.
we are talking about hard drives here, not cars :roll: try to stay on-topic.

and try to understand what "profit margin" is, and why seagate has reduced the warranties on many of their hard drives from 5 years to 3 years.

you claim that the technology is much better, but then fail to understand why the warranties go down... after telling us that long warranties are all about marketing.

clearly, seagate disagrees with your logic, if it can be called that.

Posted: Tue Sep 07, 2010 10:09 pm
by m0002a
danimal wrote:
m0002a wrote: There is a load/unload cycle count spec that does exist, and in effect supersedes the start/stop because each start or stop is one load/unload.


prove it.

show us the documentation from the hard drive manufacturers that make that claim.
I don't need any documentation because I ran a test to prove it. First I ran smartctl (under Linux) and got the following output:

Code:

[root@linux1 ~]# smartctl -A /dev/sda
smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0003   167   166   021    Pre-fail  Always       -       4650
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       474
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000e   200   200   051    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   091   091   000    Old_age   Always       -       6577
 10 Spin_Retry_Count        0x0012   100   100   051    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0012   100   100   051    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       387
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       24
193 Load_Cycle_Count        0x0032   163   163   000    Old_age   Always       -       111297
194 Temperature_Celsius     0x0022   114   109   000    Old_age   Always       -       33
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0012   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   051    Old_age   Offline      -       0
Then I shut down the system and did a cold boot, and ran the command again. Notice that both Power_Cycle_Count and Load_Cycle_Count incremented by one.

Code:

[root@linux1 ~]# smartctl -A /dev/sda
smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0003   167   166   021    Pre-fail  Always       -       4650
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       475
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000e   200   200   051    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   091   091   000    Old_age   Always       -       6577
 10 Spin_Retry_Count        0x0012   100   100   051    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0012   100   100   051    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       388
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       24
193 Load_Cycle_Count        0x0032   163   163   000    Old_age   Always       -       111298
194 Temperature_Celsius     0x0022   114   109   000    Old_age   Always       -       33
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0012   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   051    Old_age   Offline      -       0
BTW, the above hard drive is a 500 GB WD Green drive. The high load/unload count (111298) is the result of the Linux head-parking problem associated with these drives (reported elsewhere), but I previously fixed that problem using the WD utility to disable the idle load/unload feature that parks the heads to save energy while the system is running. Obviously (as proven by the numbers above), a load/unload does still occur when the system is shut down and cold booted.
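For anyone who wants to repeat the before/after comparison, here is a minimal sketch of it (assuming smartctl's usual -A table layout, where the attribute name is the second column and the raw value is the last; the function names are my own, and raw values with extra annotations would need more careful parsing):

```python
# Sketch: diff two `smartctl -A` dumps to see which SMART counters a
# cold boot incremented. Assumes the standard attribute-table layout;
# raw values carrying annotations (e.g. temperature min/max on some
# drives) would need more careful parsing than this.

def parse_smart_attributes(report: str) -> dict:
    """Map attribute name -> raw value from a `smartctl -A` dump."""
    attrs = {}
    for line in report.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID; the raw value is last.
        if len(fields) >= 10 and fields[0].isdigit():
            attrs[fields[1]] = int(fields[-1])
    return attrs

def diff_counters(before: str, after: str) -> dict:
    """Return only the attributes whose raw value changed between dumps."""
    b, a = parse_smart_attributes(before), parse_smart_attributes(after)
    return {name: (b[name], a[name])
            for name in b if name in a and a[name] != b[name]}
```

Feeding it the two dumps above would flag exactly Start_Stop_Count (474 → 475), Power_Cycle_Count (387 → 388), and Load_Cycle_Count (111297 → 111298).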

Regarding the rest of your post, it is obvious that you are just trolling, so I will dignify it with a response.

Posted: Wed Sep 08, 2010 7:28 pm
by danimal
m0002a wrote: I don't need any documentation because I ran a test to prove it. First I ran smartctl (under Linux) and got the following output:
totally irrelevant, as usual.

you just told us that "even if it can happen (turn-on spikes damaging equipment)"... then went completely off-topic with some idiotic linux test that doesn't have a damn thing to do with turn-on spikes or cold starting specs.
m0002a wrote:so I will dignify it with a response.
um, you meant to say "NOT dignify it with a response" :roll:

but thanks for proving my point anyway.

Posted: Thu Sep 09, 2010 12:30 pm
by m0002a
danimal wrote:totally irrelevant, as usual.

you just told us that "even if it can happen (turn-on spikes damaging equipment)"... then went completely off-topic with some idiotic linux test that doesn't have a damn thing to do with turn-on spikes or cold starting specs.
It was not a "Linux" test; it was a report from S.M.A.R.T., which WD drives support (if enabled in the system BIOS) to report various drive statistics and errors. This is a standard reporting protocol for almost all drives, and the reports can be run on almost any OS (including Windows) with the right software.

What it shows (and what you denied before) is that if the machine is cold booted (shut down, then booted again), the Power_Cycle_Count and the Load_Cycle_Count are both incremented by one. The WD drive reports these numbers via SMART. This means that even though WD no longer publishes a spec for Power_Cycle_Count (they did at one time), the Load_Cycle_Count spec they do publish effectively takes the same thing into account (in addition to the other events, besides a cold reboot, that can cause a Load/Unload).

So your assertion that one can do a cold boot on a system an infinite number of times without any adverse effect on a hard drive is erroneous.

If you want to know what is irrelevant, the stuff you posted about warranties is irrelevant.

However, even if we disagree about whether some damage could occur if one cold booted a system often enough, I suspect we both agree that the number of times that a system would have to be booted before any damage likely occurred is far beyond the number of cold boots that anyone would actually perform (unless they were doing boot/shutdown/boot etc 24x7).

Posted: Thu Sep 09, 2010 6:00 pm
by danimal
m0002a wrote: What is shows (and what you denied before) is that if the machine is cold booted (shut down, and then booted), then the Power_Cycle_Count and the Load_Cycle_Count are both incremented by one.
none of that is relevant to this thread.
m0002a wrote:So your assertion that one can do a cold boot on a system an infinite number of times without any adverse effect on a hard drive is erroneous.
i made no such assertion, stop posting lies about what's been said.
m0002a wrote:I suspect we both agree that the number of times that a system would have to be booted before any damage likely occurred is far beyond the number of cold boots that anyone would actually perform (unless they were doing boot/shutdown/boot etc 24x7).
my exact words were: "if it was a significant failure mode, there would be a limitation on it."

you have totally failed to prove that wrong, despite making every dumba$$ off-topic argument that you possibly could.

Posted: Thu Sep 09, 2010 7:56 pm
by m0002a
danimal wrote:my exact words were: "if it was a significant failure mode, there would be a limitation on it."

you have totally failed to prove that wrong, despite making every dumba$$ off-topic argument that you possibly could.
Actually you said the following:
danimal wrote:how many times have i argued this with the engineers that i used to support... if the voltage spike on cold startup was an issue, how come the manufacturer(s) never have a specification for power cycling?

if it was a significant failure mode, there would be a limitation on it.
As I have tried to explain to you:
1. For disk drives, the "main" issue is not voltage spikes on the electrical components of the drives, but rather the mechanical loading of the heads near the platter (or something akin to this).

2. Some manufacturers such as Seagate do have a spec for cold Start/Stop of their hard drives (contrary to your claim), even if WD does not.

3. WD does have a spec for Load/Unload, and I have proved that a cold Start/Stop of a WD drive includes a Load/Unload cycle, so that may be why WD sees no need for a spec on cold Start/Stop. WD drives do report the number of Start/Stop cycles to SMART.
Therefore, your reasoning that WD has no spec for cold Start/Stop, and that if it were "a significant failure mode, there would be a limitation on it," seems a little misleading to me. Seagate rates most of their consumer drives at 100,000 Start/Stop cycles, so I don't know exactly what your interpretation of that is. My interpretation is that if you did Start/Stop a drive 100,000 times, the failure rate would be more than minuscule.

Obviously, if you do the math, it is very difficult to ever reach 100,000 cold Start/Stop cycles (each requiring a system shutdown and cold boot), so we both agree that one should not worry about rebooting a computer when a software patch is applied; a warm reboot does not even involve a Start/Stop cycle.
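Sketching that arithmetic (assuming the 5-year warranty period and Seagate's 100,000-cycle rating mentioned above):

```python
# Back-of-the-envelope check: how many cold boots per day would it take
# to exhaust a 100,000-cycle start/stop rating within a 5-year warranty?
rated_cycles = 100_000
warranty_days = 5 * 365
boots_per_day = rated_cycles / warranty_days
print(round(boots_per_day, 1))  # 54.8 -- about 55 cold boots per day, every day
```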

There is no reason to be abusive in your posts.

Posted: Fri Sep 10, 2010 3:16 pm
by danimal
m0002a wrote:Actually you said the following:
my exact words were: "if it was a significant failure mode, there would be a limitation on it."

which you have failed to disprove.
m0002a wrote:As I have tried to explain to you:
as i pointed out before, you have repeatedly gone off-topic, with posts about about car warranties and marketing, for example.
m0002a wrote:1. For disk drives, the "main" issue is not voltage spikes on the electrical components of the drives, but rather the mechanical loading of the heads near the platter (or something akin to this).
that is idle speculation, that is totally unsubstantiated.

there is no point in further discussion here.

Posted: Fri Sep 10, 2010 4:12 pm
by m0002a
danimal wrote:my exact words were: "if it was a significant failure mode, there would be a limitation on it."

which you have failed to disprove.
I don't have to disprove it, because I don't disagree. What I disagree with is your statement that:

"manufacturer(s) never have a specification for power cycling"

As I explained, Seagate does have a specification for power cycling (100,000 on consumer drives), so your statement is incorrect.

I also explained that, although WD does not directly have a spec for that, they have a spec for Load/Unload which includes power cycling, since each time the power is cycled the Load/Unload count increments by one (which I proved in an above post with my WD drive).

Posted: Fri Sep 10, 2010 5:32 pm
by danimal
m0002a wrote:disk drives are rated for number of turn on cycles.
as i explained, wd does NOT have a specification of any kind for power cycling, which proves that your statement is incorrect.

you then went on to post irrelevant gibberish, lies about what i said, and totally unsubstantiated claims of fact that you were too incompetent to back up, for example:
m0002a wrote: 1. For disk drives, the "main" issue is not voltage spikes on the electrical components of the drives, but rather the mechanical loading of the heads near the platter (or something akin to this).
when i confronted you with that falsehood, you failed to respond, because you know that your statement there is incorrect.
m0002a wrote: 3. WD does have a spec for Load/Unload, and I have proved that a cold Start/Stop of a WD drive includes a Load/Unload cycle
once again, m0002a, you failed to provide any shred of documentation from any hdd manufacturer that would make that idiotic claim relevant, which proves that your statement is incorrect.

lies, guesses, and half-truths are not grounds for a debate.

Posted: Fri Sep 10, 2010 6:31 pm
by danimal
m0002a wrote: 3. WD does have a spec for Load/Unload, and I have proved that a cold Start/Stop of a WD drive includes a Load/Unload cycle, so that may be why WD sees no need for a spec on cold Start/Stop.
no, contact start/stop means nothing, which is why wd does NOT include a spec for it:


"S.M.A.R.T. attribute list

Start/Stop Count
Count of start/stop cycles of spindle
This value does not directly affect the condition of the drive.

Drive Power Cycle Count
Number of complete power on/off cycles
This value does not directly affect the condition of the drive.

Power off Retract Cycle
Count of power off cycles
This value does not directly affect the condition of the drive."
http://www.hdsentinel.com/smart/smartattr.php

Posted: Fri Sep 10, 2010 7:21 pm
by m0002a
danimal wrote:no, contact start/stop means nothing, which is why wd does NOT include a spec for it
As already mentioned, Seagate does publish a spec on Start/Stop cycles, and WD reports the number to SMART.

The Load/Unload count includes Start/Stop cycles, since each Start/Stop cycle increments the Load/Unload cycle by one on WD drives (as proven above).

Posted: Fri Sep 10, 2010 7:35 pm
by danimal
m0002a wrote:As already mentioned, Seagate does publish a spec on Start/Stop cycles, and WD reports the number to SMART.
as already proven, wd does NOT have a spec for start/stop cycles.

fyi, wd smart reporting of events is NOT a specification :roll: it's a shame that you can't tell the difference.

Posted: Fri Sep 10, 2010 10:06 pm
by m0002a
danimal wrote:as already proven, wd does NOT have a spec for start/stop cycles.
The number of Start/Stop Cycles will be reflected in the Load/Unload Cycle count, since each Stop/Start Cycle results in one additional Load/Unload Cycle. WD does publish a spec for Load/Unload Cycles.

Posted: Sat Sep 11, 2010 5:29 pm
by danimal
m0002a wrote:The number of Start/Stop Cycles will be reflected in the Load/Unload Cycle count
wd does NOT have a specification for start/stop cycles, as i already proved; it is therefore irrelevant.

i also proved that start/stop cycles are NOT relevant to the longevity of the drive, per the smart data gathering standards listed above... start/stop specs have been superseded by load/unload counts:

"Load/unload technology was discovered in the mid-1990s as a viable alternative to Contact Start-Stop (CSS), a method where the sliders which carry the Read/Write heads in hard disk drives land on the disk media at power down, and remain stationed on the disk until the power up cycle. Although still used by some vendors in non-mobile platform drives, CSS has inherent limitations which are addressed by load/unload technology."
http://www.hitachigst.com/tech/techlib. ... _FINAL.pdf

Posted: Sat Sep 11, 2010 6:56 pm
by m0002a
danimal wrote:i also proved that start/stop cycles are NOT relevant to the longevity of the drive, per the smart data gathering standards listed above... start/stop specs have been superseded by load/unload counts...
Hmm. That is what I posted above (the Start/Stop count has been superseded by the Load/Unload count). Maybe you forgot. Here is a quote:
m0002a wrote:There is a load/unload cycle count spec that does exist, and in effect supersedes the start/stop because each start or stop is one load/unload.
I am glad that we finally agree and can now move on.

Posted: Sun Sep 12, 2010 5:21 pm
by danimal
m0002a wrote:There is a load/unload cycle count spec that does exist, and in effect supersedes the start/stop because each start or stop is one load/unload.
m0002a wrote:I am glad that we finally agree and can now move on.
no, you keep claiming that both are the same, which of course would mean that there would be no need for anything other than start/stop... perhaps it would help if you actually read the thread before replying?

"...CSS has inherent limitations which are addressed by load/unload technology."

Posted: Sun Sep 12, 2010 11:08 pm
by m0002a
danimal wrote:no, you keep claiming that both are the same, which of course would mean that there would be no need for anything other than start/stop... perhaps it would help if you actually read the thread before replying?

"...CSS has inherent limitations which are addressed by load/unload technology."
I never claimed they were the same. I said that one included the other, which is true. Every time you cold boot a system, the Load/Unload Cycle count is incremented. Obviously, the Load/Unload Cycle count includes other things (as you can tell from my SMART output above) besides a cold boot.

Seagate, which still has specs for both (Start/Stop and Load/Unload) on their consumer drives, lists 100,000 for Start/Stop Count and 300,000 for Load/Unload Count. So if you cold booted your system 300,000 times (way beyond what anyone would actually do), you would reach the "limit" or "spec" on Load/Unload. Of course, reaching or going beyond the spec doesn't mean the drive will fail; it could also fail long before that.

On WD Green drives, it appears the Load/Unload spec may be reached far sooner than one might think, especially under Linux (unless the idle Load/Unload behavior is disabled with the WD utility). However, even with that behavior disabled, the drive will still increment the Load/Unload cycle count on a cold boot.
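As a rough illustration of how fast an aggressive idle-park timer could consume that spec (the parks-per-hour figure below is an assumed number for illustration, not a measured WD rate):

```python
# Illustrative only: the parks-per-hour rate is an assumed number;
# real rates depend on the drive's idle timer and the host's I/O pattern.
def days_to_reach_spec(spec_cycles: int, parks_per_hour: float) -> float:
    """Days of continuous operation to accumulate spec_cycles load/unloads."""
    return spec_cycles / (parks_per_hour * 24)

# At roughly one head park per minute on a mostly idle system:
print(round(days_to_reach_spec(300_000, 60)))  # 208 days -- well inside a warranty period
```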

Posted: Mon Sep 13, 2010 7:11 pm
by danimal
m0002a wrote:Every time you cold boot a system, the Load/Unload Cycle count is incremented.
that doesn't account for the possibility that the load/unload count is incremented for reasons other than a cold boot... which of course makes load/unload useless as a start/stop counter; your attempt to cover up for your false claim that "all" drives have a start/stop spec is a failure.

Posted: Tue Sep 14, 2010 9:42 am
by m0002a
danimal wrote:that doesn't account for the possibility that the load/unload count is incremented for reasons other than a cold boot... which of course makes load/unload useless as a start/stop counter
I have repeatedly said that load/unload includes other things besides a start/stop, and pointed you to my own SMART numbers to prove it.

But if a drive is rated for 300,000 Load/Unload cycles, and you do a cold boot 300,000 times, then you will have reached the spec, since each cold boot increments the Load/Unload count. Even though that is extremely unlikely, it is possible. So by logical deduction, WD is saying not to do a cold boot more than 300,000 times, which, although ridiculously high, is a limit set by the manufacturer.

If one wants to simply count Start/Stop, one can look at the SMART report.

Posted: Wed Sep 15, 2010 5:07 pm
by danimal
m0002a wrote:I have repeatedly said that load/unload includes other things besides a start/stop
which of course makes load/unload useless as a start/stop counter; your attempt to cover up for your false claim that "all" drives have a start/stop spec is a failure.
m0002a wrote:But if
stop posting irrelevant hypothetical garbage.

Posted: Wed Sep 15, 2010 6:10 pm
by m0002a
danimal wrote:which of course makes load/unload useless as a start/stop counter; your attempt to cover up for your false claim that "all" drives have a start/stop spec is a failure.
I am not covering up anything. I agreed a long time ago that WD no longer has a spec for start/stop, even though Seagate still does.

It seems to me that you claimed that no drive manufacturer publishes a spec on start/stop, and that is not accurate.
danimal wrote:stop posting irrelevant hypothetical garbage.
That a cold boot increments Load/Unload is not hypothetical, it is a fact. I tested it using a WD drive and SMART reporting.

Posted: Thu Sep 16, 2010 7:23 pm
by danimal
m0002a wrote: I am not covering up anything. I agreed a long time ago that WD no longer has a spec for start/stop, even though Seagate still does.
i proved that your start/stop spec is obsolete, which renders anything favorable you say about it pretty worthless.

m0002a wrote:That a cold boot increments Load/Unload is not hypothetical, it is a fact. I tested it using a WD drive and SMART reporting.
which of course makes load/unload useless as a start/stop counter; your attempt to cover up for your false claim that "all" drives have a start/stop spec is a failure.

stop posting irrelevant hypothetical garbage.