High quality Hard Drive for large storage?
LeatherGryphon
Posts: 12,097
I'm looking to upgrade my main DAZ machine (it has a 1TB SATA-SSD and a 500GB NVMe-M.2-SSD) to include a larger internal hard drive for online backup & extra storage. I'm thinking about a 6TB 7200 rpm drive. HOWEVER, when I look at the NewEgg catalog and examine the customer comments about even the higher priced units, I see so many bad reports (1-egg) that say dead-on-arrival, piece-of-crap, don't-buy, or lasted 45 days, etc. I wouldn't mind if the percentage of negatives were in the 5% range, but most of them are in the 20% to 40% range even with the devices rated by many thousands of buyers. What gives? Are hard drives really that crappy these days?
Any recommendations? At this point I'm ready to admit that I can't get a bargain priced one, and am willing to pay for quality if I can find one.

Comments
I've never had a Caviar Black fail...that's what I use exclusively for HDD. For SSD I always go with Samsung EVO.
might be worth it to make a file server. mebbe take an older pc, add a hd enclosure, and upgrade the nics to fast ethernet. i think cabling is up to cat 8 now?
i'd rather have 5 2tb drives than one 10tb.
in the novell days, memory is foggy, but i think we mounted a single volume from multiple drives. but i may be getting confused with AIX. we had ibm rs/6000 workstations, some were aix, some were novell. course drives were 200mb then.
Agree. IMHO the Western Digital Black drives are among the very best hard drives.
I lost 128GB the other day. I dropped it and it fell between the cracks in the floor.
Need to find a pair of tweezers to get it out.
That's been my experience also. Which is why I find the negative reviews so puzzling.
What about these new WD "Gold" drives?
There's always going to be a few bad eggs...and generally it's the bad experiences that get the reviews and feedback, and not so much the good ones. I think that's especially so with computer equipment and reviews on sites like NewEgg. One example that comes to mind is my AIO. I have a NZXT Kraken X64...just check out the reviews. They are not good. But my experience with it has been anything but. I haven't left a review to help bump up the average, though. I know I can't be alone in that.
trained hamsters can help with this.
remember when stiction was a thing?
when computers were never turned off, the internal read/write heads would get wonky
I personally (and yeah, it's really personal preference at this point) swear by Seagate IronWolfs... but the trick is to get two, and then you're a bit more covered. But if it's important data, opt for an offsite copy as well.
If you want security, go with a RAID 1 or 5 setup. It'll cost you four to six times what a standalone drive will cost you, but you'll always have a backup.
I get Seagate once in a while. I have a few in my systems now. But back in the '80s Seagate was THE disk company. 300MB drives the size of a baby casket and weighing 100 lbs. (Puts my loss of 128GB in the crack of the floor in perspective, doesn't it?)
The only PC hard drives I ever had premature trouble with were Hitachi and Toshiba. I won't go near them anymore.
There are two good reasons for that - first, because the higher-end something is, the more people are likely to complain about issues. Ultimately, there is no "perfect" drive and there are always some failures, but someone who has a $500 drive fail is far more likely to go venting online than someone who lost a $50 one. And second, because there are many people who'll buy something because "it's the best" without ever really researching WHY it's considered the best. Hard drives are designed for different purposes, and you have to match intended purpose to reviews.
How are RAID setups nowadays? I was always told that if you're not familiar with them and don't need them, don't bother with RAID setups because they can turn into a very big hassle if and when something goes wrong.
Backblaze tracks failure rates of various hard drive brands and models at various sizes. Granted, this seems to be a bit more server-oriented, but still, might be helpful.
Third quarter of 2019 (they haven't done the fourth quarter or compiled all of 2019 as yet)
All of 2018
The "Gold" drives are enterprise hard drives that are used in RAID & NAS configurations due to their higher reliability & also have a better/longer warranty. As a result, enterprise hard drives tend to be more expensive. If one were going to use a RAID configuration, these are the recommended mechanical hard drives to use.
Edit: Above post already linked Backblaze!
It can be assumed almost all home users won't get as much spin time out of their drives as a corporate user, so that 3% Samsung figure would likely be lower for most people.
It should also be noted that drives used in enterprise settings are never actually shut off, whereas most home computers are turned on and off constantly, so the actual wear-and-tear process is different.
Honestly, that's hard to say, as my own RAID setups have been put together by my company's old IT guy and he really knew what he was doing. Generally, though, software-driven RAIDs are a lot more finicky than hardware-driven setups. There are companies that sell complete RAID setups, and they seem to have wildly varying reports. I think a lot of that is because most of the basic systems are shipped as RAID 0, which has no redundancy. I'm currently looking at one of the Western Digital MyCloud RAIDs, which ship as RAID 5, but, of course, those connect via Ethernet rather than USB 3 or Thunderbolt.
Generally speaking, the better NAS drives are the most reliable: IronWolf Pro, or WD Gold or Red. They are more expensive than their mainstream counterparts, but the warranties make up for it. IronWolf Pro in particular has a very good replacement and data recovery warranty.
On RAID arrays: RAID 1, 0, or 10 can generally be set up directly in the UEFI of most systems. RAID 5, and the IMO better RAID 50, require add-in controllers, which can be very expensive. Now, I am an IT pro, so I am comfortable troubleshooting them if there is a problem, not that I can recall ever having one on my home system or home NAS.
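For anyone comparing levels, the usable capacity is easy to work out. Here's a quick back-of-the-envelope sketch in Python (the drive counts and the 6 TB size are just example numbers, and it ignores filesystem overhead and hot spares):

```python
# Rough usable-capacity math for common RAID levels.
# n = number of drives, size_tb = capacity of each drive in TB.
def usable_tb(level, n, size_tb):
    if level == "RAID 0":    # striping, no redundancy
        return n * size_tb
    if level == "RAID 1":    # two-drive mirror: half the raw capacity
        return n * size_tb / 2
    if level == "RAID 5":    # one drive's worth of parity
        return (n - 1) * size_tb
    if level == "RAID 10":   # striped mirrors: half the raw capacity
        return n * size_tb / 2
    if level == "RAID 50":   # two RAID 5 groups striped: two drives of parity
        return (n - 2) * size_tb
    raise ValueError(level)

for level, n in [("RAID 0", 4), ("RAID 1", 2), ("RAID 5", 4),
                 ("RAID 10", 6), ("RAID 50", 6)]:
    print(f"{level}: {n} x 6 TB -> {usable_tb(level, n, 6):.0f} TB usable")
```

The flip side of that math is cost per usable TB, which is why mirroring everything gets expensive fast compared with the parity levels.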
On these prepackaged NAS systems I'm more than skeptical. MyCloud and Synology are selling you very expensive enclosures with otherwise pretty awful hardware. You can build a NAS out of an old computer, assuming it has the SATA ports and drive bays to accommodate the amount of storage you want, very simply. If you live near a FreeGeek, go down and get a 4 or 5 year old system; they install Linux, which is fine. Then get FreeNAS, a FreeBSD-based OS meant for NAS use. The setup is very simple; you can find instructions in plenty of places. Beyond having a central data store that simplifies backups etc., it can replace your DVR and let you stream both your recordings and your music to your devices anywhere.
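If you do go the old-PC route, it's worth a quick inventory of what drives the box already has before deciding on a layout. Under the Linux install that comes on it, something like this rough sketch works (it reads the sector counts the kernel exposes under /sys/block, so it won't run as-is on FreeNAS itself, which is FreeBSD):

```python
# List whole-disk block devices and their sizes as reported by the Linux kernel.
# /sys/block/<dev>/size is the device size in 512-byte sectors.
import pathlib

for dev in sorted(pathlib.Path("/sys/block").iterdir()):
    if dev.name.startswith(("loop", "ram")):   # skip pseudo-devices
        continue
    sectors = int((dev / "size").read_text())
    model_file = dev / "device" / "model"
    model = model_file.read_text().strip() if model_file.exists() else "?"
    print(f"{dev.name}: {sectors * 512 / 1e9:.0f} GB  ({model})")
```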
I look for two things on a prospective new internal drive - workstation or server usage and five year warranty. If the manufacturer won't cover the drive for five years I won't purchase it.
I have had drive failures over the years, but with good backups they're just an inconvenience. I just lost a 1 TB Seagate and a 512 GB WD Blue this month as a matter of fact. Both were over ten years old. I should note that all my systems are on UPS and I only shut them down for cleaning or hardware changes - and I'm now going to SSD for internal drives (2 TB Crucial drives for the two that failed).
My backups all go to 2 and 4 TB external USB drives - two sets of drives, used on alternate weeks.
Ah, very noteworthy!
I'd rather use a series of smaller 1TB drives that are redundant and overlapping than put everything on a single large drive that can fail. Back up the backup.
WD Red are fine too - have 5 x 6 TB and no problems with any of them so far, after 4 years. Red Pro are a bit more expensive but faster (7200 RPM) and have 5 years of warranty (standard Red have 3), but for backup only 5400 RPM is usually fine.
I never use anything other than WD drives (I think I have purchased over 50 over the years) - some of my current system drives have a power-on time of 5+ years and are still working fine.
Beware of the infamous WD IntelliPark feature though, which plagued a number of their models for a period. I don't think they are using it anymore (AFAIK it was already turned off on my Reds when I checked), but I had to fix it on a number of my Green drives some years ago with the IntelliPark utility which WD (thank God) offer. Apparently some of the Reds (older models I presume) have this problem also, according to this (3½ year old) article: https://withblue.ink/2016/07/15/what-i-learnt-from-using-wd-red-disks-to-build-a-home-nas.html
I feel a bit uneasy with RAID too as the only backup system. For the same reason I always have two separate non-raid data drives in my systems, of which one is a copy of the other, and all backups are manual (using Second Copy). Then I regularly make a secondary backup of these drives to a WD RAID1 NAS (which so far has been without problems though).
At the datacenter we have lots of Red Pros and IronWolf Pros, and while I tend toward WD for my own systems, there really is nothing wrong with Seagate.
For HDDs it isn't so much whether power is applied full time or not (I guess that does slightly affect lifespan, but trivially) as spin time. The mechanical components are what fail most of the time. Even a NAS, or other storage device, is unlikely to be reading or writing to every drive all the time. When the drives don't need to spin they are parked. So what matters most is spin time, not on time.
It used to be the case, more than a decade ago, that powering on a storage device was quite demanding: all the drives started spinning together, and that really hammered the PSU. But staggered start-up controls handle that these days.
On using bunches of commodity drives, 1TB WD Blues for instance, as some sort of data integrity solution: it can certainly be done. However, a bunch of cheap drives in RAID 1, or RAID 10, gets expensive once you're storing more than a few TBs of data. Further, a mirrored drive is not a backup. A backup needs to be physically separate. We run tape backups locally and also remotely. Obviously you guys don't have the sorts of needs where LTO drives make sense. What I'd do is use a cloud service to store scene files and other WIP. If your local storage dies you can recreate it by getting your assets from the original sources or backups and then get your user-created files off the cloud.
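As a rough illustration of the "WIP to the cloud" idea, a small script can push new or changed scene files to an S3-compatible bucket. This is just a sketch, assuming boto3 with credentials already configured; the folder, bucket name, and .duf filter are placeholders for whatever you actually use:

```python
# Upload scene files that changed since the last run to an S3-compatible bucket.
import json
import pathlib
import boto3

SCENE_DIR = pathlib.Path("D:/DAZ/Scenes")     # hypothetical WIP folder
BUCKET = "my-daz-wip-backup"                  # hypothetical bucket name
STATE_FILE = SCENE_DIR / ".last_upload.json"  # remembers mtimes from last run

s3 = boto3.client("s3")
last = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}

for f in SCENE_DIR.rglob("*.duf"):
    key = f.relative_to(SCENE_DIR).as_posix()
    mtime = f.stat().st_mtime
    if last.get(key, 0) < mtime:              # new or modified since last run
        s3.upload_file(str(f), BUCKET, key)
        last[key] = mtime
        print("uploaded", key)

STATE_FILE.write_text(json.dumps(last))
```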
I'm not sure what you mean by "spin time" here. Platter spin and head activity/parking are two separate things - the platters keep spinning whether the heads are parked or not. And SMART power-on time means the time the platters are spinning (standby/sleep time, where the platters do not spin, does not count as power-on time). So in that sense platter spin time and on-time are the same. Platter spin motor wear is therefore equivalent to power-on time, and nothing else. Likewise, head bearing wear is equivalent to the amount of head activity, and nothing else.
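If anyone wants to check their own numbers, smartctl (from smartmontools) reports power-on hours. Here's a rough sketch that parses its JSON output; it assumes smartmontools 7 or later for the --json flag and an ATA drive (NVMe and SAS report this differently), and it usually needs admin rights:

```python
# Read power-on hours from a drive's SMART data via smartctl's JSON output.
import json
import subprocess
import sys

device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
out = subprocess.run(["smartctl", "--json", "-A", device],
                     capture_output=True, text=True).stdout
data = json.loads(out)

for attr in data.get("ata_smart_attributes", {}).get("table", []):
    if attr["name"] == "Power_On_Hours":
        hours = attr["raw"]["value"]
        print(f"{device}: {hours} power-on hours (~{hours / 8766:.1f} years)")
        break
else:
    print("Power_On_Hours attribute not found on", device)
```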
Personally I've never seen a drive of any brand or type die because of mechanical wear, though, not even with ball-bearing spin motors; it's always been bad sectors or (very rarely) a head crash that has killed them. I have an 80 GB Seagate drive with a power-on time of almost 5.9 years (it was in use for over 8 years) which died from bad sectors, but it can still spin up and mount, so all the mechanical parts are still OK. Seagate have also said that the fluid bearings in their platter spin motors would "run forever", presumably meaning that something else would break before they did (I assume that they still use ball bearings for the heads, though I'm not sure).
RAID is not meant to be a backup solution; it is an availability solution.
I've had bad experiences with RAID solutions in the home environment; I've also worked with high-end IBM disk storage subsystems that are nearly bulletproof. I check from time to time to see if any vendor has an acceptably priced RAID system, and I've not seen one yet.
At a minimum, I want an in-box spare drive, a controller that automatically and transparently rebuilds a failing drive on the spare, and a drive tray or drive bay indicator that a drive has been removed from use and can be hot-swapped. And the replacement drive needs to be formatted and integrated as the new spare.
I just use an old internal 2TB Seagate FireCuda hybrid SSHD that's big enough to back up my 2TB system disk, which has all my DAZ data on it anyway.
I use Windows 7 Backup, inclusive of a system disk image, but am actually on Windows 10 build 1909 with all patches. It only runs every Sunday at 10 PM. That leaves a small risk of losing some data Monday through Saturday, but I don't see an obvious way to configure Windows 7 Backup to do incrementals daily and then on Sunday do a full backup with a system disk image too. I could do daily fulls, but that is more data throughput than is needed; the SSD & HDD would get far more wear & tear from a daily full backup than from my actual usage. Each Sunday the old backup is overwritten. I'm not a bank or some other institution facing potential lawsuits if I don't maintain X number of backups at Y intervals for Z length of time, so I don't. I just do enough to avoid spending 2 or 3 days clean installing and copying saved Documents back over to the PC in the event of a PC crash of some sort.
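For what it's worth, the daily-incremental-plus-weekly-full pattern is easy to script yourself if the built-in tool won't do it. Here's a minimal sketch of the idea, file copies only (no system image); the source and destination paths are made up:

```python
# Minimal weekly-full / daily-incremental file backup sketch.
# Sunday: copy everything; other days: copy only files changed in the last 24 h.
import datetime
import pathlib
import shutil
import time

SOURCE = pathlib.Path("C:/Users/Me/Documents")   # hypothetical source folder
DEST = pathlib.Path("E:/Backups")                # hypothetical backup drive

today = datetime.date.today()
full = today.weekday() == 6                      # 6 = Sunday
target = DEST / (f"full-{today}" if full else f"incr-{today}")
cutoff = time.time() - 24 * 3600                 # changed within the last day

for f in SOURCE.rglob("*"):
    if not f.is_file():
        continue
    if not full and f.stat().st_mtime < cutoff:  # skip unchanged files on weekdays
        continue
    dest_file = target / f.relative_to(SOURCE)
    dest_file.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(f, dest_file)                   # copy contents plus timestamps

print("backup written to", target)
```

Scheduled through Task Scheduler, that covers the weekday gap.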
Should Microsoft ever drop Windows 7 Backup I will buy the Acronis backup program, which will let me schedule daily incrementals and a weekly full. I might do that anyway, but I'm not particularly fond of dropping $50 for something Windows 10 Pro should come with and that I have a low probability of using anyway, especially now with these new SSD storage devices. My mechanical magnetic backup HDD is likely to fail before my system SSD.
Heads are a mechanical part. Head crashes and motor failures are far more common than enough sectors going bad to remove a drive from service.
No, platters are not kept spinning full time. That's a Windows thing. In Linux, depending on distro, an HDD can be set to spin down and park after a set amount of idle time.
Automatically and transparently rebuilding a failed drive is not really possible. You can have automatic rebuild, although I'd recommend against it, and if you have the right kind of RAID you can get transparent rebuild by doing it when the array is otherwise idle, although with current drive sizes that is less practical. I had an 8TB drive fail a few weeks ago and the rebuild took nearly 2 days.
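That "nearly 2 days" lines up with simple arithmetic: a rebuild has to write the whole replacement drive, and the effective rate is usually well below the drive's raw sequential speed once the array is also serving normal traffic. A rough estimate, with the 50 MB/s figure being an assumed effective rebuild rate:

```python
# Back-of-the-envelope RAID rebuild time estimate.
capacity_tb = 8              # size of the failed drive
effective_mb_per_s = 50      # assumed effective rebuild rate under normal load

hours = (capacity_tb * 1e12) / (effective_mb_per_s * 1e6) / 3600
print(f"{capacity_tb} TB at {effective_mb_per_s} MB/s ~ {hours:.0f} hours")
# roughly 44 hours, i.e. close to two days
```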
You can get cases and RAID controllers for the rest, though RAID controllers are not cheap and cases with 5 or more hot-swap bays can easily be $500.
BTW, formatting isn't really a thing the end user should be doing anymore. Every drive comes formatted nowadays.
My system is still small enough to do multiple backups manually. But with serious infestations Raid takes care of my bugs.