SSDs keep getting more cost-effective on a cost-per-bit basis, but you get what you pay for.
Every time I tackle building a new computer, or just replacing an HDD with an SSD in an existing computer to satisfy my “need for speed,” I’m struck by (and inevitably write about) how much more cost-effective they are on a cost-per-bit basis than they were the previous time I did it. Back in February 2016, for example, I excitedly told you all about the 240 GByte SSDs I’d bought for $57.50 each after rebate, along with the 480 GByte SSD I’d just seen for sale for $120. And 1.5 years later (and two years ago) I wrote about a 1 TByte ADATA SU800 SSD I’d just bought on sale for $229.99.
Fast forward to today, as I’m assembling various HP systems with Hackintosh enhancements in mind, and those prices have at least halved, if not dropped even further. That $229.99 1TB SU800 SSD I got two years ago, for example? I just bought two more of them direct from ADATA via Ebay for $92.14 each.
Even less expensive were the 1TB Samsung 860 QVO SSDs I recently snagged for $89.99 each … as the “Q” in the name suggests, they squeeze four bits of data into each storage transistor, a notable factor in their cost effectiveness. Although you’d think there’d be a substantial reliability impact, independent testing doesn’t bear this out, at least for common usage scenarios.
And for lower capacity needs, I’d also recently picked up a few WD Blue 500GB SSDs for $54.99 each.
That all said, and specifically to emphasize my earlier “reliability impact” comment, the phrase “you get what you pay for” (or, if you prefer, caveat emptor) definitely applies. At this early stage in the QLC life cycle, for example, I’d never even think of buying an SSD based on the technology from anyone other than a tier-one supplier like Samsung. And similarly, even with now-relatively-mature TLC technology, the inherent quality of the flash memory storage media (along with how robustly it’s tested and screened), combined with the robustness of the controller used to manage that media, can result in widely divergent outcomes, with potentially disastrous consequences.
Let’s revisit that two-year-back writeup, for example. Basically, what I learned in striving to free myself from the performance shackles of the 5400 RPM mobile 2.5″ HDD then (and still) installed inside my 2011-era Mac mini was that both the inherent performance of the storage media and the latency and bandwidth of the interface connecting that media to the rest of the system were important. Specifically, I’d learned that two seemingly speedy RAID 0-striped Seagate or WD 7200 RPM HDDs connected to the Mac mini over 10 Gbps Thunderbolt were still dramatically slower than an ADATA SU800 SSD connected to that same system over 800 Mbps FireWire.
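To put those interface numbers in perspective, here’s a quick back-of-the-envelope calculation; the media throughput figures in the comments are typical ballpark values I’m assuming, not measurements from my own testing:

```shell
# Convert each interface's line rate (Mbps) to a rough MB/s ceiling
# (divide by 8; protocol overhead ignored).
FW800_MBps=$((800 / 8))      # FireWire 800: ~100 MB/s
TB1_MBps=$((10000 / 8))      # Thunderbolt 1: ~1250 MB/s per channel
SATA3_MBps=$((6000 / 8))     # SATA III: ~750 MB/s raw (before encoding)
echo "FireWire 800 ceiling:  ~${FW800_MBps} MB/s"
echo "Thunderbolt 1 ceiling: ~${TB1_MBps} MB/s"
echo "SATA III ceiling:      ~${SATA3_MBps} MB/s"
# A SATA SSD (~500 MB/s sequential, ballpark) saturates FireWire 800's
# ~100 MB/s ceiling, yet still feels far faster than 7200 RPM HDDs on a
# much quicker link: random-access latency (milliseconds for HDDs versus
# tens of microseconds for SSDs) is what dominates boot and
# application-launch workloads, not sequential bandwidth.
```

In other words, the slower interface capped the SSD’s sequential throughput, but the SSD’s far lower access latency still won out for the workloads that matter day to day.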
At the time, I was just doing some evaluation testing. More recently, however, finally fed up with the Mac mini HDD’s molasses-slow boot and application launch delays, I decided to definitively convert over to a SSD-based setup. However, I was still too “chicken” to dive inside and do a direct swap. So instead, echoing what I’d tested two years ago, I dropped that very same ADATA SU800 SSD back in the FireWire 800 enclosure, cloned the HDD image to the SSD, and started booting and running from the SSD instead.
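For those curious how a clone like this can be done from the command line, here’s a minimal dry-run sketch using macOS’s built-in asr (Apple Software Restore) utility. The device identifiers are hypothetical (confirm yours with `diskutil list` first), and each command is prefixed with `echo` so the script only prints what it would do; remove the `echo` to actually run it:

```shell
# Hypothetical identifiers -- verify with `diskutil list` before use.
SRC="/dev/disk0s2"   # the internal HDD's macOS volume (assumed)
DST="/dev/disk2s2"   # the SSD in the FireWire 800 enclosure (assumed)

# Block-level clone of source volume onto target, erasing the target.
# Dry run: the echo prints the command instead of executing it.
echo sudo asr restore --source "$SRC" --target "$DST" --erase
```

Dedicated cloning tools (Carbon Copy Cloner, SuperDuper!, or Disk Utility’s Restore function) accomplish the same thing with more guardrails.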
All worked great for about a month. Then, partway through one day, I started getting weird errors; Dropbox and various other programs, for example, indicated that I’d need to set them up again from scratch. And then my Dropbox sync hung partway through, refusing to complete. I rebooted the system, and it hung partway through the startup sequence. Unplugging FireWire and booting from the internal HDD instead still worked fine, so the system itself hadn’t gone haywire. Instead, I deduced, the file system on the SSD had experienced an unrecoverable “hiccup.”
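In a situation like this, macOS’s diskutil can confirm (and sometimes repair) file system damage from the still-working boot volume. A dry-run sketch, assuming the SSD mounts at the hypothetical path /Volumes/SSD (the `echo` prefix prints each command rather than running it):

```shell
VOLUME="/Volumes/SSD"   # hypothetical mount point; confirm with `diskutil list`

# Read-only consistency check of the volume's file system.
echo diskutil verifyVolume "$VOLUME"

# If verification reports errors, attempt an in-place repair.
echo diskutil repairVolume "$VOLUME"
```

If repairVolume can’t fix the damage, reformatting and restoring from backup (as I ultimately did) is the fallback.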
Not interested in a repeat performance, I took several mitigation steps. First off, remembering misbehaving Thunderbolt cables of times past, I replaced the 6′ FireWire 800 cable I’d previously been using with a 3′ alternative. I didn’t necessarily think that the original cable was flawed, but it was far longer than it needed to be for this particular application, and I figured that shorter was better from a signal integrity standpoint.
Secondly, I made sure that the SSD was running up-to-date firmware. This ended up being far easier in concept than in practice, as those of you who’ve tackled this task already know.
What did this all mean in my case? Well, first off, I needed to find a program that would uncompress RAR archives, because that’s the comparatively obscure (versus ZIP) format that ADATA supplied the firmware image in. Then I needed to make a bootable USB flash drive using that image. Then I needed to temporarily install the ADATA SSD, connected via SATA, in one of my Hackintosh systems. And only then could I boot off the USB flash drive and run the cryptic command line firmware update routine. Even that didn’t work the first few times I tried because, as it turns out, a high-capacity USB flash drive formatted as exFAT wouldn’t work; I needed a low-capacity USB flash drive formatted as FAT16 or FAT32. Sigh.
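The prep steps above can be sketched from the macOS side roughly as follows. The archive name and device node are hypothetical placeholders, and every command is `echo`-prefixed as a dry run (remove the `echo` to actually execute; `unar` comes with The Unarchiver, and `unrar` works too):

```shell
ARCHIVE="su800_fw.rar"   # hypothetical file name for the ADATA download
USB_DEV="/dev/disk3"     # hypothetical device node; confirm with `diskutil list`
TARGET_FS="FAT32"        # the updater booted only from FAT16/FAT32, not exFAT

# Unpack the RAR archive containing the firmware updater.
echo unar "$ARCHIVE"

# Erase the flash drive as FAT32 with an MBR partition map -- the
# combination the DOS-era updater environment expects.
echo diskutil eraseDisk "$TARGET_FS" FIRMWARE MBRFormat "$USB_DEV"

# Copy the extracted updater files onto the freshly formatted drive.
echo cp -R su800_fw/ /Volumes/FIRMWARE/
```

From there, the remaining steps (booting the Hackintosh from the flash drive and running the updater against the SATA-attached SSD) happen outside macOS entirely.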
When all was said and done, I reinstalled the SSD in the FireWire 800 enclosure and connected it to the Mac mini over the shorter cable. After booting off the HDD recovery partition and reinitializing the SSD, I was able to restore a recent image of the SSD from Time Machine backup, so I thankfully didn’t lose anything important. And the error I’d previously experienced (or any other, for that matter) hasn’t resurfaced. Still, I really do need to muster the courage to dispense with the FireWire 800 workaround and direct-install the SSD in the Mac mini eventually. And I’ll certainly keep Time Machine backups running regularly, just in case.
Comments? Questions? Sound off in the comments, folks!