Yep, short stroking has value. It decreases stroke length and keeps the data on the part of the drive with the highest throughput.
It does not really make your drive faster; it just makes sure that it does not get slow.
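If you want to see the throughput side of this for yourself, here is a rough Python sketch that samples sequential-read speed at a few offsets across a raw drive. The /dev/sdb path and the chunk sizes are just placeholder assumptions, it needs Linux and root, and a serious test would use O_DIRECT or a tool like fio to keep the page cache out of the picture. On a healthy spinning disk the numbers should fall off toward the inner tracks.
---
import os, time

DEV = "/dev/sdb"            # placeholder: the raw device you want to test
CHUNK = 1024 * 1024         # read 1 MiB at a time
TOTAL = 256 * 1024 * 1024   # sample 256 MiB per measurement point

def throughput_mb_s(fd, offset):
    # Sequential read of TOTAL bytes starting at a sector-aligned offset.
    os.lseek(fd, (offset // 512) * 512, os.SEEK_SET)
    start = time.time()
    done = 0
    while done < TOTAL:
        data = os.read(fd, CHUNK)
        if not data:          # hit end of device
            break
        done += len(data)
    return (done / (1024 * 1024)) / (time.time() - start)

fd = os.open(DEV, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)   # device size in bytes
for frac in (0.0, 0.25, 0.5, 0.75, 0.95):
    print("%3d%% in: %6.1f MB/s" % (int(frac * 100), throughput_mb_s(fd, int(size * frac))))
os.close(fd)
---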
This is an old trick. I was told back in the early 90s that the enormous TPC benchmarks run on big-iron setups with hundreds of drives often used just the outer 5-10% of cylinders on each drive to keep performance optimal, and the technique is of course much older than that.
When Seagate introduced the fairly small but very expensive Cheetah X15 (the world's first 15k rpm drive), short stroking was actually part of their defense of the entire product.
http://www.storagereview.com/map/lm.cgi/ST318451LW
---
Through interviews with many of its top customers, the company found that its drives were being used with partitions that spanned only one-half or even one-third of total capacity. The reason? Clients wanted to improve on drive seek times by restricting seeks to only a fraction of the platter's span.
---
Nothing new here; vendors have obviously factored short stroking into the price/performance ratio of their different drives since long before "short stroking" got rediscovered by PC geeks.
I personally think that most of this is just a curiosity, except for benchmarking competitions or cases where you really want to squeeze the absolute last per mille out of your computer, just so you can laugh at the kid next door because your computer is faster than his, or feel good because the next level of Crysis loads 5 seconds faster (which I think might very well be a valid reason).
Most modern filesystems will in various ways try to group data together on the disk to reduce latency. This does not always mean that everything gets grouped at the start of the drive, but if you look at the "grandfather" of modern filesystems, the Berkeley Fast File System, it will, for instance, try to keep the files in one directory close together. It assumes they are often related and, more importantly, that if you work in one directory you might need to access all the files in it, because you list the directory, search in it, and so on.
If you have a very large drive, like a 1.5 TB disk, and use only 100 GB of it, well, I have to admit that I have never actually scanned such a disk under Windows and looked in detail at how NTFS allocates things, but I would suspect that most of the data ends up in 2-3 areas of the drive, and probably most, if not all, of it lands somewhere in the first 50% of the drive.
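For what it is worth, on Linux you can at least eyeball this kind of placement question without special tools. A rough sketch, assuming ext4 or XFS and the filefrag utility from e2fsprogs (the directory path is whatever you point it at); low, tightly clustered physical block numbers would suggest the filesystem really is packing the directory's data toward the start of the volume:
---
import re, subprocess, sys
from pathlib import Path

# Matches the first extent line of `filefrag -v` output and captures the
# physical start block, e.g. "   0:   0..  255:   34816..  35071: ..."
first_extent = re.compile(r"^\s*0:\s+\d+\.\.\s*\d+:\s+(\d+)\.\.", re.M)

for path in sorted(Path(sys.argv[1]).iterdir()):
    if not path.is_file():
        continue
    out = subprocess.run(["filefrag", "-v", str(path)],
                         capture_output=True, text=True).stdout
    m = first_extent.search(out)
    if m:
        print("%-30s first physical block %12d" % (path.name, int(m.group(1))))
---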
Yes, short stroking gives you some level of guarantee on your minimum performance, but in reality, if you only use a small part of the drive anyway, you will probably see much of the same advantage even if you do not partition it or otherwise cut off part of the total capacity, and you get the additional bonus of larger contiguous segments of free space, which reduces filesystem fragmentation in general.
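And if you want to put a number on that "minimum performance" guarantee, the same kind of quick-and-dirty sketch works for seeks too: compare average random-read latency over the whole device with random reads confined to, say, the first 10%. Again, /dev/sdb is just a placeholder, this assumes Linux and root, and caching effects mean a serious test would reach for fio instead.
---
import os, random, time

DEV = "/dev/sdb"    # placeholder device path
BLOCK = 4096        # 4 KiB random reads
SAMPLES = 500

def avg_random_read_ms(span_bytes):
    # Average latency of SAMPLES random reads within the first span_bytes.
    fd = os.open(DEV, os.O_RDONLY)
    try:
        total = 0.0
        for _ in range(SAMPLES):
            off = random.randrange(span_bytes // BLOCK) * BLOCK
            os.lseek(fd, off, os.SEEK_SET)
            t = time.time()
            os.read(fd, BLOCK)
            total += time.time() - t
        return 1000.0 * total / SAMPLES
    finally:
        os.close(fd)

fd = os.open(DEV, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)   # device size in bytes
os.close(fd)

print("full stroke   : %.2f ms" % avg_random_read_ms(size))
print("first 10%% only: %.2f ms" % avg_random_read_ms(size // 10))
---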
Bottom line: this is fun for benchmarks and might be critical for applications with certain real-time-like performance requirements, but it is most likely a curiosity for most other uses.
That said, I am kind of feeling like heading out to get a 1.5 TB drive just to run my own test on it, as well as finding out whether the shorter arm movements affect noise.