- Whatever you say, the price per GB is still much higher, which effectively forces any serious system to have a minimum of two disks: one SSD for the OS, and one HDD for the data
- This means that the cost of computing has gone up, not down. It also means investment in HDD technology is declining, which in turn will make (fast) HDDs more expensive; this drives up the cost further, at least for the disks now being replaced by SSDs
- Defaulting to SSDs for system disks will make software designers lazy. Performance is so enormous compared to the past that developers will stop investing time in tuning and improving the performance of e.g. on-disk reads, no longer caring about the contiguity of data. We have seen this before with OS makers (Microsoft) not optimizing code because "in a year's time all of the hardware components will have caught up with our abysmal performance anyway". This then makes the SSD mandatory, which in turn makes two disks mandatory
- This simply gives rise to bad software design. The game Diablo III was reported to suffer from incomprehensible lag issues in which neither the CPU nor the GPU was the bottleneck; the HDD was suspected, but people with SSDs reported the same lag. If you become lazy in designing software, and hence no longer care about creating good architecture, issues are going to sneak in that would otherwise have been caught by the tail and thrown out. Ignoring one important part of software design means you will also perform worse in the other areas that still matter, because your level of caring simply goes down.
- SSD is still at least 3-4 times as expensive as HDD technology, and most of its benefits, apart from application load times and game performance, are unnecessary for most people. Whether you can boot your system in 10 or 50 seconds should be irrelevant; long ago people devised "standby", which should render booting your system pointless except for laptop users. In other words, those short boot times solve a problem that doesn't exist. Even on a laptop you would normally use suspend or hibernate; booting from scratch is simply not part of normal or recommended computer usage, even for laptops.
- Any advanced partitioning scheme is going to be hindered by having to use an SSD for the system and an HDD for the data. The setup is no longer homogeneous and redundancy becomes almost impossible. How are you going to create RAID 1 redundancy for your SSD? Well, you need another SSD. Now you also need another HDD. How are you going to expand this with increased capacity? For the SSD that is entirely pointless, so you don't. For the HDD you still can, but now you have FIVE drives in your system just to take care of that. You simply cannot mix the SSD and HDD components of your system without dedicating the SSD to a caching-only function.
What I mean of course is that you would then have two SSDs in a RAID 1 mirror, and e.g. three HDDs in RAID 5, as the sketch below illustrates.
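To make the drive count and capacity arithmetic concrete, here is a small illustrative sketch in Python; the drive sizes are hypothetical example values, not recommendations:

```python
# Drive-count arithmetic for the split SSD/HDD setup described above.
# Sizes in GB are hypothetical example values.

ssd_size = 250    # the system SSD
hdd_size = 2000   # one data HDD

# Redundancy for the system disk: RAID 1 needs a second SSD.
ssd_drives = 2
ssd_usable = ssd_size                     # a mirror yields one drive's capacity

# Redundancy plus expansion for the data disks: e.g. RAID 5 over three HDDs.
hdd_drives = 3
hdd_usable = (hdd_drives - 1) * hdd_size  # RAID 5 loses one drive's worth of space

print(f"drives needed: {ssd_drives + hdd_drives}")  # drives needed: 5
print(f"usable SSD capacity: {ssd_usable} GB")      # usable SSD capacity: 250 GB
print(f"usable HDD capacity: {hdd_usable} GB")      # usable HDD capacity: 4000 GB
```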
- Onboard "hybrid" SSHDs, in which SSD and HDD are combined in one disk, induce the same annoyance and waste we saw in hi-fi stereo towers: inevitably one component broke, rendering the rest of the system pointless even though most of it still worked. (In one SSHD test I saw, the SSHD had longer boot times than any other disk in the test.) A built-in system means you cannot fine-tune anything yourself or make any choices about it; you must hope it works well, or else. Further, you cannot separate the two devices: you cannot put the HDD part into some other system where it needs no cache, and you cannot take the SSD part and use it to cache something else. It becomes like the AMD APUs, not really good for anything; it doesn't know what to be. The SSD cache can or could also have security implications, and it is hard to tell in advance what the impact will be. You cannot address it directly, because it is addressed as part of the total volume, and you cannot tell which parts are cached and which aren't. Deleting or discarding data would clear the cache, but there will probably be no way to reset it manually. You wouldn't know. Perhaps a TRIM operation would solve it, perhaps not.
- Having a real, independent (separated) component-based caching feature or setup would solve the homogeneity issue, because you can take the cache off and the base system doesn't change. You could cache an entire RAID volume (or partition, or array) with a single SSD (or a partition thereof) and be allowed to take it off whenever you want. Now you can do RAID whatever way you want and never be hindered by design issues preventing you from choosing the setup you want. You are not inhibited in your design, because all disks have the same performance and you can combine them at will. However, this does imply and require that certain basics are present in your OS, firmware or hardware to account for this need, and I suspect that availability or dependability is still minimal.
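As a minimal sketch of the principle (the class and method names are made up for illustration and do not correspond to any real caching product's API), a write-through design keeps the backing store authoritative, which is exactly what makes the cache detachable:

```python
class BackingStore:
    """Stands in for the slow base device, e.g. an HDD RAID volume."""

    def __init__(self):
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba)

    def write(self, lba, data):
        self.blocks[lba] = data


class DetachableCache:
    """Write-through SSD cache: every write also hits the backing store,
    so the cache can be removed at any moment without losing data."""

    def __init__(self, backing):
        self.backing = backing
        self.cache = {}

    def read(self, lba):
        if lba in self.cache:              # hit: served at SSD speed
            return self.cache[lba]
        data = self.backing.read(lba)      # miss: fetch and remember
        self.cache[lba] = data
        return data

    def write(self, lba, data):
        self.cache[lba] = data
        self.backing.write(lba, data)      # write-through keeps the base current

    def detach(self):
        """Take the SSD away; the backing store is already complete."""
        self.cache.clear()
        return self.backing


hdd = BackingStore()
volume = DetachableCache(hdd)
volume.write(0, b"some data")
plain = volume.detach()                    # remove the cache at will
assert plain.read(0) == b"some data"       # nothing about the base changed
```

A write-back cache, by contrast, holds dirty data that must be flushed before the cache may be removed, which is why detachability and write-through (or a clean flush step) go together.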
In current motherboards, M.2 would probably be the most convenient form factor for this, but the cheapest offering, Transcend, may not be as reliable. (I have tested the Transcend MSA370 and its write speeds are just abysmal; I do not understand how they can put anything on the market that cannot write faster than 14 MB/s.) Another product, the MTS400 (M.2), is said to crash systems due to an unsupported SATA III instruction (link here), and another one (ADATA Premier SP600NS34) appears to suffer from it as well. It's the sort of unreliability and headache that didn't exist when we still used HDDs.
Microsoft Windows offers "ReadyBoost", which designates a file (created by the system) on a flash device (that has to be mounted as a drive letter) to be used as a disk cache. While very easy to set up, it defeats the purpose of having something transparent to the system; nevertheless, it could be useful. Dedicated SSD drives meant for caching come [or came] with Dataplex software (review here or here). Dataplex was acquired by Samsung in May 2015 and the product was discontinued, although a final no-license-restrictions version was made available (read here); it only works (hardcoded) with a few selected SSD drives. Intel has Smart Response technology, which I assume is simply part of the RAID offering on their motherboards.
FancyCache is considered an alternative; it was replaced by PrimoCache, at a cost of about $30 (which is reasonable, even though the cache drive itself may not cost much more than that), although they do not market it as an SSD cache but rather as an improved memory cache; it can use SSDs the way ReadyBoost does, though. All in all, we do not see a standardized or very thorough solution on the Windows platform here. AMD apparently does not support any caching feature in its motherboards.
StarTech offers a 'dedicated' RAID controller with hybrid support (here), but it is a rather expensive solution at over €80, and you never know how it is going to work across operating systems. Real hardware RAID cards, meanwhile, still start at about €150.
I have personally used the slow device mentioned earlier in Linux with LVM cache, although that is outside the scope of what I am writing here. I believe it is due to the Linux kernel and its many bottlenecks (as stated here and here, for example) that I experienced frequent freezes of my system, sometimes lasting longer than 2 minutes at a time, caused by some IO queue filling up and not being emptied soon enough. (The Linux kernel has a single IO queue for all devices in terms of buffers, as does the Windows kernel, but on Linux it works really, really badly.) This was on a device with 14 MB/s write speeds, as noted; something that should ordinarily NEVER result in this sort of behaviour. However, it does imply that on Linux not everything is roses and moonshine either.
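A back-of-the-envelope calculation shows why such a slow cache device can stall a system for minutes; the dirty-buffer size below is a hypothetical example figure, not a measurement:

```python
# Rough arithmetic: how long does draining a full writeback queue take
# when the device can only sustain 14 MB/s?

write_speed_mb_s = 14    # measured write speed of the slow SSD mentioned above
dirty_buffer_mb = 2048   # hypothetical: ~2 GB of dirty data queued for writeback

drain_seconds = dirty_buffer_mb / write_speed_mb_s
print(f"time to drain the queue: {drain_seconds:.0f} s")  # ~146 s, well over 2 minutes
```

If writes to the volume block until that queue drains, a freeze of the length I saw follows directly from the device's write speed.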