High quality Hard Drive for large storage?


Comments

  • namffuak Posts: 4,422
    namffuak said:
    Taoz said:
    Cybersox said:

    If you want security, go with a RAID 1 or 5 setup.  It'll cost you four to six times what a standalone drive will cost you, but you'll always have backup.

    How are RAID setups nowadays? I was always told that if you're not familiar with them and don't need them, don't bother with RAID setups because they can turn into a very big hassle if and when something goes wrong.

    I feel a bit uneasy with RAID as the only backup system too.  For the same reason I always have two separate non-RAID data drives in my systems, one of which is a copy of the other, and all backups are manual (using Second Copy).  Then I regularly make a secondary backup of these drives to a WD RAID 1 NAS (which has been problem-free so far).

    RAID is not meant to be a backup solution; it is an availability solution.

    I've had bad experiences with RAID solutions in the home environment; I've also worked with high-end IBM disk storage subsystems that are nearly bulletproof. I check from time to time to see whether any vendor has an acceptably priced RAID system, and I've not seen one yet.

    At a minimum, I want an in-box spare drive, a controller that automatically and transparently rebuilds a failing drive onto the spare, and a drive tray or drive bay indicator showing that a drive has been removed from use and can be hot-swapped. And the replacement drive needs to be formatted and integrated as the new spare.

    Automatically and transparently rebuilding a failed drive is not really possible. You can have automatic rebuild, although I'd recommend against it, and with the right kind of RAID you can get a transparent rebuild by doing it while the array is otherwise idle, although with current drive sizes that is less practical. I had an 8 TB drive fail a few weeks ago and the rebuild took nearly two days.

    For the rest, suitable cases and RAID controllers are available, though RAID controllers are not cheap and cases with 5 or more hot-swap bays can easily run $500.

    BTW, formatting isn't really something the end user needs to do anymore; every drive comes formatted nowadays.

     

    Like I said - I've been spoiled by IBM's large disk storage subsystems - which provide all of these features - starting at just under $1,000,000 for a 20 TB system that consisted of two 22-inch racks bolted together, weighed in at just over one ton, and required redundant 3-phase, 240-volt, 50-amp power feeds. Not something even small companies would have. We had two, and our most critical data was software-mirrored between RAID 5 arrays on both systems.

    So, for home use, I still want a 5 year warranty and weekly (at a minimum) backups.
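    A rough sense of the two-day rebuild mentioned above: a rebuild has to rewrite the entire replacement drive, so the best case is simply capacity divided by sustained write speed, and a busy array runs well below that. Here is a minimal back-of-the-envelope sketch in Python; the 150 MB/s sustained rate is an assumed figure for a typical 7200 rpm drive, not a measured one.

```python
# Lower bound on RAID rebuild time: the controller must rewrite the entire
# replacement drive, so time >= capacity / sustained write speed.
# 150 MB/s is an assumed sustained rate for a typical 7200 rpm HDD; a rebuild
# that shares the array with normal I/O will take considerably longer.

def rebuild_hours(capacity_tb: float, write_mb_per_s: float = 150.0) -> float:
    capacity_mb = capacity_tb * 1_000_000        # 1 TB = 1,000,000 MB (decimal)
    seconds = capacity_mb / write_mb_per_s
    return seconds / 3600

if __name__ == "__main__":
    print(f"8 TB at 150 MB/s: at least ~{rebuild_hours(8):.0f} hours")
    # ~15 hours at full speed, so a rebuild stretching to 2 days on a live
    # array is entirely plausible.
```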

  • namffuak Posts: 4,422
    Taoz said:

    For HDDs it isn't so much whether power is applied full time or not - I guess that does slightly affect lifespan, but trivially - it's spin time. The mechanical components are what fail most of the time. Even a NAS, or other storage device, is unlikely to be reading or writing to every drive all the time. When the drives don't need to spin they are parked. So what matters most is spin time, not on time.

    I'm not sure what you mean by "spin time" here.  Platter spin and head activity/parking are two separate things - the platters keep spinning whether the heads are parked or not.  And SMART power-on time means the time the platters are spinning (standby/sleep time, where the platters do not spin, does not count as power-on time).  So in that sense platter spin time and on-time are the same thing, and platter spin motor wear is therefore equivalent to power-on time, and nothing else.  Likewise, head bearing wear is equivalent to the amount of head activity, and nothing else.

    Personally I've never seen a drive of any brand or type die because of mechanical wear, though, not even with ball-bearing spin motors; it's always been bad sectors or (very rarely) a head crash that killed them.  I have an 80 GB Seagate drive with a power-on time of almost 5.9 years (it was in use for over 8 years) which died from bad sectors, but it can still spin up and mount, so all the mechanical parts are still OK.  Seagate have also said that the fluid bearings in their platter spin motors would "run forever", presumably meaning that something else would break before they did (I assume they still use ball bearings for the heads, though I'm not sure).

    Heads are a mechanical part. Head crashes, and motor failures, are far more common than enough sectors going bad to take a drive out of service.

    No, platters are not kept spinning full time. That's a Windows thing. In Linux, depending on the distro, an HDD can be set to spin down and park after a set amount of idle time.

    And most of us are running Windows.

    The ten year old Seagate that failed this month started by reporting sector reallocations. By the time I got the replacement drives it had dropped off the system entirely and ended up with a head crash about 4 to 6 hours before I was ready to shut down. The WD Blue drive didn't report any errors and was working fine when I shut down to replace the Seagates. It just didn't come up when I restarted the system - no errors in the system logs, just not there anymore.
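    Since SMART power-on hours, reallocated sectors, and idle spin-down all came up in this exchange, here is a minimal sketch of how one might check and set them from Python on Linux. It assumes the smartmontools and hdparm packages are installed and uses /dev/sda as an example device; smartctl's attribute-table layout varies between drives, so the parsing is illustrative rather than robust.

```python
import subprocess

# SMART attributes discussed above: power-on hours (platter spin time),
# reallocated sectors (the usual early-warning sign), and start/stop count.
WANTED = ("Power_On_Hours", "Reallocated_Sector_Ct", "Start_Stop_Count")

def smart_attributes(device="/dev/sda"):
    """Return selected raw SMART values reported by smartctl (needs root)."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    values = {}
    for line in out.splitlines():
        fields = line.split()
        # ATA attribute rows look like:
        # ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[1] in WANTED:
            values[fields[1]] = fields[9]   # first token of RAW_VALUE
    return values

def set_idle_spindown(device="/dev/sda", idle_minutes=20):
    """Ask the drive to spin down after the given idle time (Linux only).
    hdparm -S counts in 5-second units for values 1-240, so 20 min -> 240."""
    subprocess.run(["hdparm", "-S", str(idle_minutes * 12), device], check=True)

if __name__ == "__main__":
    # A power-on time of ~5.9 years, as on the 80 GB Seagate above, would show
    # roughly 5.9 * 365.25 * 24 = ~51,700 hours here.
    print(smart_attributes())
    # set_idle_spindown()   # uncomment to enable a 20-minute spindown timeout
```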

  • Just a reminder.. RAID is not a backup..! But yea, better than nothing ;)

  • namffuak said:

    Like I said - I've been spoiled by IBM's large disk storage subsystems - which provide all of these features - starting at just under $1,000,000 for a 20 TB system that consisted of two 22-inch racks bolted together, weighed in at just over one ton, and required redundant 3-phase, 240-volt, 50-amp power feeds. Not something even small companies would have. We had two, and our most critical data was software-mirrored between RAID 5 arrays on both systems.

    So, for home use, I still want a 5 year warranty and weekly (at a minimum) backups.

    That's crazy expensive. No idea how long ago that was sold, but I could part out a 20 TB RAID 5 for less than $5k (six 4 TB IronWolf Pros at $140 each, plus a $1,000 RAID rack with 6+ bays, some RAM, and a cheap Xeon).

    We don't generally even build storage devices that small. We've been buying exclusively 12 TB drives for the last 18 months or so.
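    To spell out the parts arithmetic above: RAID 5 across N equal drives leaves (N - 1) drives' worth of usable space, with one drive's worth consumed by distributed parity. A quick sketch using the figures quoted in this comment (the prices are the poster's numbers, not current ones):

```python
# RAID 5 usable capacity and rough parts cost for the build described above.
# The $140-per-drive and $1,000-chassis figures are those quoted in the comment.

def raid5_usable_tb(num_drives: int, drive_tb: float) -> float:
    # One drive's worth of capacity goes to distributed parity.
    return (num_drives - 1) * drive_tb

num_drives, drive_tb = 6, 4
drive_cost, chassis_cost = 140, 1000

print(raid5_usable_tb(num_drives, drive_tb), "TB usable")            # 20
print(num_drives * drive_cost + chassis_cost, "USD before RAM/CPU")  # 1840
```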

  • namffuak said:

    And most of us are running Windows.

    The ten year old Seagate that failed this month started by reporting sector reallocations. By the time I got the replacement drives it had dropped off the system entirely and ended up with a head crash about 4 to 6 hours before I was ready to shut down. The WD Blue drive didn't report any errors and was working fine when I shut down to replace the Seagates. It just didn't come up when I restarted the system - no errors in the system logs, just not there anymore.

    The WD probably had a motor failure on startup. That's a risk with any electric motor: it works fine until it is powered down and cools off, and then it never starts up again. That's why datacenters prefer hot swapping. If one drive in a storage device goes bad, it isn't inconceivable that the rest are not in great shape and a power-down might kill the whole thing. So do a hot-swap rebuild and then get the data off the box and onto a new one.

  • namffuak Posts: 4,422

    That's crazy expensive. No idea how long ago that was sold, but I could part out a 20 TB RAID 5 for less than $5k (six 4 TB IronWolf Pros at $140 each, plus a $1,000 RAID rack with 6+ bays, some RAM, and a cheap Xeon).

    We don't generally even build storage devices that small. We've been buying exclusively 12 TB drives for the last 18 months or so.

    Yeah - enterprise-level storage, for the Z series, AS/400s, RS/6000 systems, and large Intel farms. The only real competitor was EMC, and their systems were 40% more expensive. The last one I configured, 12+ years ago, was a 40 TB system with 16 Fibre Channel interfaces rated at 8 Gb/second throughput each (4 cards, 4 ports each). The system had two RS/6000 servers as controllers running pretty much standard AIX (IBM's Unix variant) with custom device drivers that made the fibre-attached storage look like local disk to the client system. Each server had direct access to all the drives but 'owned' half of them. These systems were designed to support mission-critical and enterprise-critical environments that required five nines availability (99.999 percent uptime - a maximum of roughly five minutes per year of outage).

    Just look up "IBM DS8100" - it's a heckuva box.
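    For reference on the availability figure above: each additional "nine" cuts the allowed downtime by a factor of ten, and five nines (99.999 percent) works out to a little over five minutes per year. A minimal calculation:

```python
# Downtime budget per year for a given availability level.
MINUTES_PER_YEAR = 365.25 * 24 * 60   # ~525,960 minutes

def downtime_minutes_per_year(availability: float) -> float:
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    print(f"{label}: {downtime_minutes_per_year(availability):.1f} min/year")
# five nines -> ~5.3 minutes of allowed downtime per year
```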

  • namffuak Posts: 4,422

    The WD probably had a motor failure on startup. That's a risk with any electric motor: it works fine until it is powered down and cools off, and then it never starts up again. That's why datacenters prefer hot swapping. If one drive in a storage device goes bad, it isn't inconceivable that the rest are not in great shape and a power-down might kill the whole thing. So do a hot-swap rebuild and then get the data off the box and onto a new one.

    That was what I figured; a ten-year-old drive that hadn't been shut down in four months. The box now has two 2 TB SATA drives (to be replaced with SSDs when 4 TB SSDs become less expensive), three 2 TB SSDs, and a 500 GB SSD as the system drive.

  • namffuak said:

    Yeah - enterprise-level storage, for the Z series, AS/400s, RS/6000 systems, and large Intel farms. The only real competitor was EMC, and their systems were 40% more expensive. The last one I configured, 12+ years ago, was a 40 TB system with 16 Fibre Channel interfaces rated at 8 Gb/second throughput each (4 cards, 4 ports each). The system had two RS/6000 servers as controllers running pretty much standard AIX (IBM's Unix variant) with custom device drivers that made the fibre-attached storage look like local disk to the client system. Each server had direct access to all the drives but 'owned' half of them. These systems were designed to support mission-critical and enterprise-critical environments that required five nines availability (99.999 percent uptime - a maximum of roughly five minutes per year of outage).

    Just look up "IBM DS8100" - it's a heckuva box.

    Yeah, we don't do the IBM ecosystem. I get it if you've got legacy DB2 stuff, but otherwise you're just overpaying. Five nines uptime is pretty much standard in the enterprise now. We offer it - some clients don't care about the difference between 4 seconds of downtime a year and 40, so we offer them a lower guarantee - and we do it with not just local redundancy but mirroring between physical datacenters.

    We had a bunch of InfiniBand storage devices, but increasingly we're switching to 40 Gbit NICs. It's just as scalable - just add more NICs if you need more throughput - and the physical hardware and cables are cheaper. Personally, I'll be quite happy to see the last of our fiber gone.

  • LeatherGryphon Posts: 12,120
    edited February 2020

    Made my decision, made my purchase.  After all the bad reviews and sleepless nights, I took a chance and ordered an external (instead of internal) WD 6 TB drive.  It was on sale, I couldn't resist the price, and it fits nicely into my existing zoo of several variously sized external hard drives.  That lets me move a recently purchased WD 2 TB drive - in use as an external archive for at least a year - to be an internal drive in my DAZ machine, giving me a total of 5.5 TB internal on that machine.  I'm playing musical hard drives, but once I get the data all shifted around I'll have multiple redundancies for both my main machines.  However, to protect myself against the new 6 TB external drive failing the day after the manufacturer's warranty expires, I did buy the longer warranty offered.  I don't usually do that, but it wasn't expensive and I'm getting skittish in my old age. :(
