
Difference Between SSD Hard Drives And HDD Hard Drives - Computers - Nairaland


Difference Between SSD Hard Drives And HDD Hard Drives by onlykay(m): 5:48pm On Sep 23, 2015
Performance: SSD Wins

Hands down, SSD performance is faster. HDDs carry the inescapable overhead of physically scanning the disk for reads and writes. Even the fastest 15,000 RPM HDDs may bottleneck a high-traffic environment. Parallel disks, caching, and lots of extra RAM will certainly help, but eventually the rate of data growth will pull well ahead of the finite ability of HDDs to go faster.

DRAM-based SSD is the faster of the two SSD types, but even NAND is faster than hard drives by roughly 80-87% -- a very narrow range between low-end consumer SSDs and high-end enterprise SSDs. The root of the faster performance lies in how quickly SSDs and HDDs can access and move data: SSDs have no physical tracks or sectors and thus no physical seek limits. The SSD can reach memory addresses much faster than the HDD can move its heads.

The distinction is unavoidable given the nature of IO. In a hard disk array, the storage operating system directs the IO read or write requests to physical disk locations. In response, the platter spins and disk drive heads seek the location to write or read the IO request. Non-contiguous writes multiply the problem and latency is the result.
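To put rough numbers on that seek-versus-no-seek distinction, here is a back-of-envelope latency model. The figures (3.5 ms average seek for a fast enterprise drive, ~0.1 ms for a NAND random read) are typical illustrative values, not measurements from this article, and the function name is my own:

```python
# Hypothetical latency comparison: HDD random access is seek time plus
# average rotational latency (half a revolution); SSDs skip both.

def hdd_access_ms(rpm, avg_seek_ms):
    ms_per_revolution = 60_000 / rpm          # one full platter revolution
    return avg_seek_ms + ms_per_revolution / 2

hdd_15k = hdd_access_ms(15_000, 3.5)          # fast enterprise HDD
ssd_read_ms = 0.1                             # ~100 us, typical NAND read

print(f"15K RPM HDD random access: {hdd_15k:.1f} ms")
print(f"NAND SSD random read:      {ssd_read_ms:.1f} ms")
print(f"SSD advantage: ~{hdd_15k / ssd_read_ms:.0f}x")
```

Even with generous HDD assumptions, the mechanical terms dominate; non-contiguous IO multiplies them, which is exactly the latency problem described above.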

In contrast, SSDs are the fix for HDDs in high-IO environments, particularly Tier 0, high-IO Tier 1 databases, and caching technologies. Since SSDs have no mechanical movement, they service IO requests far faster than even the fastest HDD.

Reliability: HDD Scores Points

Performance may be a slam dunk, but reliability is not. Granted, SSDs' physical reliability in hostile environments is clearly better than HDDs', given their lack of mechanical parts. SSDs will survive extreme cold and heat, drops, and multiple G's. HDDs… not so much.

However, few data centers will experience rocket liftoffs or sub-freezing temperatures, and SSDs have their own unique stress points and failures. Solid state architecture avoids the same type of hardware failures as the hard drive: there are no heads to misalign or spindles to wear out. But SSDs still have physical components that fail such as transistors and capacitors. Firmware fails too, and wayward electrons can cause real problems. And in the case of a DRAM SSD, the capacitors will quickly fail in a power loss. Unless IT has taken steps to protect stored data, that data is gone.

Wear and tear over time also enters the picture. As an SSD ages, its performance slows: the processor must read, modify, erase, and write increasing amounts of data. Eventually memory cells wear out. Cheaper TLC is generally relegated to consumer devices and may wear out more quickly because it stores more data in a smaller area. (Thus goes the theory; studies do not always bear it out.)

For example, since MLC stores multiple bits (electronic charges) per cell instead of SLC's one bit, you would expect MLC SSDs to have a higher failure rate. (MLC NAND is usually two bits per cell, but Samsung has introduced a three-bit MLC.) However, as yet there is no clear result that one-bit-per-cell SLC is more reliable than MLC. Part of the reason may be that newer and denser SSDs, often termed enterprise MLC (eMLC), have more mature controllers and better error-checking processes.

So are SSDs more or less reliable than HDDs? It's hard to say with certainty, since HDD and SSD manufacturers may overstate reliability. (There's a newsflash.) Take HDD vendors and reported disk failure rates. Understandably, HDD vendors are sensitive to disk failure numbers. When they share failure rates at all, they report the lowest possible numbers as the AFR, or annualized failure rate, counting only vendor-verified failures. This number is based on the vendor's verification of failures: i.e., attributable to the disk itself. Not environmental factors, not application interface problems, not controller errors: only the disk drive. Fair enough in a limited sort of way, although IT is only going to care that their drive isn't working, verified or not. General AFR rates for disk-only failures run between 0.55% and 0.90%.

However, what the HDD manufacturers do not report is the number of under-warranty disk replacements each year, or ARR, the annualized return rate. If you substitute these numbers for reported drive failures, you get a different story. We don't need to know why these warrantied drives failed, only that they did. These rates run much, much higher, from about 0.5% to as high as 13.5%.
Now, in practice those higher percentages are not earth shattering. Most modern storage has redundant technology that minimizes data damage from a failed disk and allows hot replacements. But when you are talking about drive reliability, clearly that number is worth talking about.
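To see why the AFR/ARR gap matters in practice, here is a quick worked example for a hypothetical fleet of 1,000 drives, using the top of each range quoted above (the fleet size and function name are my own illustration):

```python
# Expected annual drive losses for a hypothetical 1,000-drive fleet,
# comparing the vendor-verified AFR ceiling (0.90%) with the
# warranty-return ARR ceiling (13.5%) cited above.

def expected_failures(fleet_size, annual_rate):
    return fleet_size * annual_rate

fleet = 1_000
afr_worst = expected_failures(fleet, 0.0090)   # ~9 drives/year
arr_worst = expected_failures(fleet, 0.1350)   # ~135 drives/year

print(f"AFR worst case: ~{afr_worst:.0f} replacements/year")
print(f"ARR worst case: ~{arr_worst:.0f} replacements/year")
```

Nine drives a year versus 135 is the difference between routine maintenance and a standing logistics problem, which is why redundancy and hot-swap capability absorb so much of the impact.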

Again, small blame to the HDD vendors for putting their best foot forward. No one really expects them to publish reams of data on how often their products fail, especially since the SSD vendors do the same thing. And on the whole, HDDs tend to fail more gracefully, giving more warning than a suddenly failing SSD. This does not negate the huge performance advantages of SSD, but it does give one pause.
SSD’s Reliability Failures

Some SSD failures are common to any storage environment, but they do tend to have different causes than HDD failures. Common points of failure include:
· Bit errors: Random corruption of data bits stored in cells, although it sounds much more impressive to say that the electrons leaked.

· Flying or shorn writes: Correct writes written in the wrong location, or truncated writes due to power loss.

· Unserializability: A hard-to-pronounce term that means writes are recorded in the wrong order.

· Firmware: Ah, firmware. Firmware fails, corrupts, or upgrades improperly throughout the computing universe: SSD firmware is no exception.

· Electronic failures: Despite having no moving parts, physical components like chips and transistors fail, taking the SSD down with them.

· Power outages: DRAM SSDs have volatile memory and will lose data if they lack a battery power supply. NAND SSDs are also subject to damaged file integrity if they are reading/writing during power interruptions.

Improving Reliability

As SSDs mature, manufacturers are improving their reliability processes. Wear leveling is a controller-run process that tracks data movement and component wear across cells, and levels writes and erases across multiple cells to extend the life of the media. Wear leveling maps logical block addresses (LBA) to physical memory addresses. It then either rewrites data to a new block each time (dynamic), or reassigns low usage segments to active writes (static) in order to avoid consistent wear to the same segment of memory. Note that writes are not the only issue: so is deletion. HDDs can write and read from the same sector, and in case of modified data can simply overwrite the sector. SSDs don’t have it this easy: they cannot overwrite but must erase blocks and write to new ones.
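The LBA-to-physical mapping described above can be sketched in a few lines. This is a toy model of dynamic wear leveling only, with invented class and field names; real controllers also handle static leveling, bad-block management, and erase-before-write at block granularity:

```python
# Toy dynamic wear-leveling sketch (illustrative, not a real FTL):
# each logical write is redirected to the least-worn free physical
# block, and the LBA -> physical mapping table is updated.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # wear per physical block
        self.mapping = {}                      # LBA -> physical block

    def write(self, lba, data):
        # 'data' is ignored in this sketch; only placement is modeled.
        in_use = set(self.mapping.values())
        free = [b for b in range(len(self.erase_counts)) if b not in in_use]
        target = min(free, key=lambda b: self.erase_counts[b])
        old = self.mapping.get(lba)
        if old is not None:
            self.erase_counts[old] += 1        # old block must be erased
        self.mapping[lba] = target
        return target

wl = WearLeveler(4)
wl.write(0, "a")   # lands on block 0
wl.write(0, "b")   # remapped to block 1; block 0 queued for erase
wl.write(1, "c")   # skips worn block 0, lands on block 2
```

Note how rewriting LBA 0 never touches the same physical block twice in a row: that is the whole point of the technique, and it also shows why SSDs cannot simply overwrite in place the way HDDs can.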

Data integrity checks are also crucial to data health. Error correction code (ECC) checks data reads and corrects hardware-based errors to a point. Cyclic Redundancy Check (CRC) checks written data to be sure that it is returned intact to a read request. Address translation guards against location-based errors by verifying that a read is occurring from the correct logical address, while versioning retrieves the current version of data.
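The CRC idea is easy to demonstrate with Python's standard zlib.crc32; the helper names here are mine. A checksum is stored alongside the data at write time and recomputed at read time, and any mismatch is flagged as corruption:

```python
# Minimal CRC integrity check: store a checksum with the data on write,
# recompute and compare on read.
import zlib

def write_with_crc(data: bytes):
    return data, zlib.crc32(data)

def read_with_crc(data: bytes, stored_crc: int) -> bytes:
    if zlib.crc32(data) != stored_crc:
        raise IOError("CRC mismatch: data corrupted")
    return data

payload, crc = write_with_crc(b"hello")
assert read_with_crc(payload, crc) == b"hello"   # intact read passes
```

A CRC only detects corruption; correcting it is the job of the ECC layer mentioned above.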

Garbage collection helps to reclaim sparsely used blocks. NAND SSDs write only to empty pages, so a drive quickly fills up with stale data. The firmware can scan for partially filled blocks, merge their valid data into new blocks, and erase the old ones to free them up for new writes.
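A garbage-collection pass can be sketched as a merge-and-free step. This is a simplified model (function name and block representation are my own): blocks are lists of pages, `None` marks a stale page, and valid pages are compacted into fresh blocks:

```python
# Toy garbage-collection pass: compact valid pages from partially
# filled blocks into fresh blocks, freeing the rest for erasure.

def garbage_collect(blocks, pages_per_block=4):
    # Gather every still-valid page across all blocks.
    valid = [p for blk in blocks for p in blk if p is not None]
    merged = []
    for i in range(0, len(valid), pages_per_block):
        merged.append(valid[i:i + pages_per_block])
    freed = len(blocks) - len(merged)   # blocks now available after erase
    return merged, freed

blocks = [["a", None, None, None], ["b", "c", None, None]]
new_blocks, freed = garbage_collect(blocks)
print(new_blocks)  # [['a', 'b', 'c']]
print(freed)       # 1
```

Two quarter-full blocks collapse into one, and the other can be erased and returned to the free pool; real firmware does this in the background to keep empty blocks available for incoming writes.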

Data redundancy is also a factor. External redundancy of course occurs outside of the SSD with backup, mirroring, replication, and so on. Internal redundancy measures include internal batteries in DRAM SSDs, and striped data parity in NAND flash memory.
So Which Wins, SSD or HDD?

SSDs are clearly faster in performance, and if an HDD vendor argues otherwise then consider the source. However, reliability is an ongoing issue outside of hostile environments. We find that SSD reliability is improving and is commensurate with, or moving slightly ahead of, HDDs. SSD warranties have stretched from 3 to 5 years with highly reliable Intel leading the way. Intel and other top NAND SSD manufacturers like Samsung (at present, the world’s largest NAND developer), Kingston and OCZ are concentrating on SSD reliability by improving controllers, firmware, and troubleshooting processes.

The final score between NAND/DRAM SSDs and HDDs? Costs are converging. Reliability is about the same. SSD performance is clearly faster, and should rule the final decision between SSD and HDD. Hard drives will have their place for a long time yet in secondary storage, but I believe that they have already lost their edge in high-IO computing. For that, look to SSDs.

Re: Difference Between SSD Hard Drives And HDD Hard Drives by lawrence7: 9:15pm On Sep 23, 2015
The grammar in this post is too technical for most. I got quite a bit of what you said but most people will not understand it.
Please use less technical grammar next time.
Re: Difference Between SSD Hard Drives And HDD Hard Drives by gnaira(m): 7:31pm On Nov 03, 2015

Evening. Please do you have an idea where I can get an SSHD (Internal drive)?
Re: Difference Between SSD Hard Drives And HDD Hard Drives by onlykay(m): 4:13pm On Nov 04, 2015
you can actually get it from any reputable laptop seller (company)
Re: Difference Between SSD Hard Drives And HDD Hard Drives by bright1985: 8:52pm On Nov 04, 2015
I have ssd 256 gb just call or whatsapp me 08064404857 thanks

Nairaland - Copyright © 2005 - 2024 Oluwaseun Osewa. All rights reserved.