Title: Raid-5  
Author: World Heritage Encyclopedia
Language: English
Subject: CHKDSK
Publisher: World Heritage Encyclopedia


The standard RAID levels are a basic set of RAID configurations that employ the techniques of striping, mirroring, or parity to create large reliable data stores from general purpose computer hard disk drives. The most common types today are RAID 0 (striping), RAID 1 and variants (mirroring), RAID 5 (distributed parity) and RAID 6 (dual parity).

Alternatives to the above designs include nested RAID levels, non-standard RAID levels, and non-RAID drive architectures. RAID levels and their associated data formats are standardized by the Storage Networking Industry Association in the Common RAID Disk Drive Format (DDF) standard.[1]

RAID 0

A RAID 0 (also known as a stripe set or striped volume) splits ("stripes") data evenly across two or more disks, without parity information and with no redundancy. RAID 0 was not one of the original RAID levels. It is normally used to increase performance, although it can also be used as a way to create a large logical disk out of two or more physical ones.

A RAID 0 can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 100 GB disk is striped together with a 350 GB disk, the size of the array will be 200 GB (100 GB × 2).

\begin{align} \mathrm{Size} &= 2 \cdot \min \left( 100\,\mathrm{GB}, 350\,\mathrm{GB} \right) \\ &= 2 \cdot 100\,\mathrm{GB} \\ &= 200\,\mathrm{GB} \end{align}
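The capacity rule can be sketched in a few lines of Python (an illustrative helper, not part of the original text):

```python
def raid0_capacity(disk_sizes_gb: list) -> int:
    """Usable RAID 0 capacity: every disk contributes only as much
    space as the smallest member, so total = n * min(sizes)."""
    return len(disk_sizes_gb) * min(disk_sizes_gb)

# The example from the text: a 100 GB disk striped with a 350 GB disk.
assert raid0_capacity([100, 350]) == 200
```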

The diagram shows how the data is distributed into Ax stripes across the disks. Accessing the stripes in the order A1, A2, A3, ... provides the illusion of a larger and faster drive. The stripe size is fixed when the array is created and cannot be changed afterwards.


RAID 0 is also used in some computer gaming systems where performance is desired and data integrity is not very important. However, real-world tests with computer games have shown that RAID 0 performance gains are minimal, although some desktop applications will benefit.[2][3] Another article examined these claims and concluded: "Striping does not always increase performance (in certain situations it will actually be slower than a non-RAID setup), but in most situations it will yield a significant improvement in performance."[4]

RAID 1

An exact copy (or mirror) of a set of data on two disks. This is useful when read performance or reliability is more important than data storage capacity. Such an array can only be as big as the smallest member disk. A classic RAID 1 mirrored pair contains two disks, which increases reliability geometrically over a single disk. Since each member contains a complete copy and can be addressed independently, ordinary wear-and-tear reliability is raised by the power of the number of self-contained copies.


Since all the data exists in two or more copies, each with its own hardware, the read performance can go up roughly as a linear multiple of the number of copies. That is, a RAID 1 array of two drives can be reading in two different places at the same time, though most implementations of RAID 1 do not do this.[5] To maximize the performance benefits of RAID 1, independent disk controllers are recommended, one for each disk. Some refer to this practice as splitting or duplexing (for two-disk arrays) or multiplexing (for arrays with more than two disks).

When reading, both disks can be accessed independently and requested sectors can be split evenly between the disks. For the usual mirror of two disks, this would, in theory, double the transfer rate when reading, and the apparent access time of the array would be half that of a single drive. Unlike RAID 0, this holds for all access patterns, as all the data are present on all the disks. In reality, the need to move the drive heads to the next block (to skip blocks already read by the other drive) can effectively cancel out the speed advantage for sequential access. Read performance can be further improved by adding drives to the mirror. Many older IDE RAID 1 controllers read only from one disk in the pair, so their read performance is always that of a single disk.

Some older RAID 1 implementations read both disks simultaneously to compare the data and detect errors. The error detection and correction on modern disks makes this less useful in environments requiring normal availability.

When writing, the array performs like a single disk, as all mirrors must be written with the data. Note that these are best-case performance scenarios with optimal access patterns.

RAID 2

A RAID 2 stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. The disks are synchronized by the controller to spin at the same angular orientation (they reach Index at the same time), so it generally cannot service multiple requests simultaneously. Extremely high data transfer rates are possible. This is the only original level of RAID that is not currently used.[6][7]

All hard disks eventually implemented Hamming code error correction. This made RAID 2 error correction redundant and unnecessarily complex. Like RAID 3, this level quickly became useless and is now obsolete. There are no commercial applications of RAID 2.[6][7]

RAID 3

A RAID 3 uses byte-level striping with a dedicated parity disk. RAID 3 is very rare in practice. One of the characteristics of RAID 3 is that it generally cannot service multiple requests simultaneously. This happens because any single block of data will, by definition, be spread across all members of the set and will reside in the same location. So, any I/O operation requires activity on every disk and usually requires synchronized spindles.

However, the performance characteristic of RAID 3 is very consistent, unlike that of higher RAID levels. Because the stripe unit is smaller than a sector or OS block, reading and writing access the entire stripe every time. The performance of the array is therefore identical to the performance of one disk in the array, except that the transfer rate is multiplied by the number of data drives (all drives less the dedicated parity drive).

This makes it suitable for applications that demand the highest transfer rates in long sequential reads and writes, for example uncompressed video editing. Applications that make small reads and writes from random disk locations will get the worst performance out of this level.[7]

The requirement that all disks spin synchronously, a.k.a. lockstep, added design considerations to a level that didn't give significant advantages over other RAID levels, so it quickly became useless and is now obsolete.[6] Both RAID 3 and RAID 4 were quickly replaced by RAID 5.[8] RAID 3 was usually implemented in hardware, and the performance issues were addressed by using large disk caches.[7]

RAID 4

A RAID 4 uses block-level striping with a dedicated parity disk. This allows each member of the set to act independently when only a single block is requested.

In the example on the right, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.

RAID 4 is very uncommon, but one enterprise level company that has previously used it is NetApp. The aforementioned performance problems were solved with their proprietary Write Anywhere File Layout (WAFL), an approach to writing data to disk locations that minimizes the conventional parity RAID write penalty. By storing system metadata (inodes, block maps, and inode maps) in the same way application data is stored, WAFL is able to write file system metadata blocks anywhere on the disk. This approach in turn allows multiple writes to be "gathered" and scheduled to the same RAID stripe—eliminating the traditional read-modify-write penalty prevalent in parity-based RAID schemes.[9]

RAID 5

A RAID 5 uses block-level striping with parity data distributed across all member disks. RAID 5 has achieved popularity because of its low cost of redundancy, which can be seen by comparing the number of drives needed to achieve a given capacity. For an array of n drives, with S_{\mathrm{min}} being the size of the smallest disk in the array, other RAID levels that yield redundancy give only a storage capacity of S_{\mathrm{min}} (for RAID 1) or S_{\mathrm{min}} \times (n/2) (for RAID 1+0). In RAID 5, the yield is S_{\mathrm{min}} \times (n - 1). For example, four 1 TB drives can be made into two separate 1 TB redundant arrays under RAID 1 or a single 2 TB array under RAID 1+0, but the same four drives can be used to build a 3 TB array under RAID 5.

RAID 5 may be implemented in a disk controller: some controllers have hardware support for parity calculations (hardware RAID cards with onboard processors), while others use the main system processor (a form of software RAID in vendor drivers for inexpensive controllers). Many operating systems also provide software RAID support independently of the disk controller, such as Windows Dynamic Disks, Linux mdadm, or RAID-Z.

In most implementations, a minimum of three disks is required for a complete RAID 5 configuration. In some implementations a degraded RAID 5 disk set can be made (a three-disk set of which only two are online), while mdadm supports a fully functional (non-degraded) RAID 5 setup with two disks, which functions as a slow RAID 1 but can be expanded with further volumes.
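The capacity comparison above can be made concrete with a small sketch (Python; the helper name is hypothetical, and the formulas are the ones quoted in the text):

```python
def usable_capacity(level: str, n: int, s_min: float) -> float:
    """Usable capacity for n drives whose smallest member has size s_min."""
    return {
        "raid1": s_min,             # one surviving full copy
        "raid10": s_min * (n / 2),  # mirrored pairs, then striped
        "raid5": s_min * (n - 1),   # one drive's worth of parity
        "raid6": s_min * (n - 2),   # two drives' worth of parity
    }[level]

# Four 1 TB drives, as in the example in the text:
assert usable_capacity("raid10", 4, 1.0) == 2.0  # 2 TB
assert usable_capacity("raid5", 4, 1.0) == 3.0   # 3 TB
```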

In the example, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.

Parity handling

A concurrent series of blocks - one on each of the disks in an array - is collectively called a stripe. If another block, or some portion thereof, is written on that same stripe, the parity block, or some portion thereof, is recalculated and rewritten. For small writes, this requires:

  • Read the old data block
  • Read the old parity block
  • Compare the old data block with the write request. For each bit that has flipped (changed from 0 to 1, or from 1 to 0) in the data block, flip the corresponding bit in the parity block
  • Write the new data block
  • Write the new parity block
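The five steps above amount to XOR-ing the parity with the change in the data; a minimal sketch (Python; the helper name is hypothetical):

```python
def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    """New parity for a small write: every bit that flipped between the
    old and new data block is flipped in the parity block too (XOR)."""
    assert len(old_data) == len(old_parity) == len(new_data)
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

# Sanity check against recomputing full-stripe parity (3 data disks):
d0, d1, d2 = b"\x0f", b"\xf0", b"\x55"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))
new_d1 = b"\xaa"
new_parity = raid5_small_write(d1, parity, new_d1)
assert new_parity == bytes(a ^ b ^ c for a, b, c in zip(d0, new_d1, d2))
```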

The disk used for the parity block is staggered from one stripe to the next, hence the term distributed parity blocks. RAID 5 writes are expensive in terms of disk operations and traffic between the disks and the controller.

The parity blocks are not read on data reads, since this would add unnecessary overhead and would diminish performance. The parity blocks are read, however, when a read of blocks in the stripe fails due to failure of any one of the disks; the parity block in the stripe is then used to reconstruct the errant sector, so the CRC error is hidden from the main computer. Likewise, should a disk fail in the array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to reconstruct the data from the failed drive on-the-fly.
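Reconstruction of a failed drive's block is itself a plain XOR over the stripe's survivors; a minimal sketch (Python; the helper name is hypothetical):

```python
def reconstruct_block(surviving: list) -> bytes:
    """Rebuild the block lost with a failed disk by XOR-ing all
    surviving blocks of the stripe (data and parity alike)."""
    out = bytearray(len(surviving[0]))
    for block in surviving:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Stripe: three data blocks plus their parity; pretend disk 1 failed.
d0, d1, d2 = b"\x0f", b"\xf0", b"\x55"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))
assert reconstruct_block([d0, d2, parity]) == d1
```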

Latency

When a disk record is randomly accessed there is a delay as the disk rotates sufficiently for the data to come under the head for processing. This delay is called latency; on average, a single disk must rotate half a revolution, so for a 7200 RPM disk the average latency is 4.2 milliseconds. In RAID 5 arrays, a full-stripe operation must wait for every disk, so latency can become a significant factor: with n randomly oriented disks, the mean latency is \frac{n}{n+1} revolutions and the median latency is 2^{-1/n} revolutions. To mitigate this problem, well-designed RAID systems synchronize the angular orientation of their disks. In this case the random nature of the angular displacements goes away, the average latency returns to 1/2 revolution, and a saving of up to 50% in latency is achieved. Since solid-state drives have no rotating platters, their latency does not follow this model.

Effect of Angular Desynchronization

  Number of disks   Mean latency (rev)   Median latency (rev)
  1                 0.50                 0.50
  2                 0.67 (+33%)          0.71 (+41%)
  3                 0.75 (+50%)          0.79 (+59%)
  4                 0.80 (+60%)          0.84 (+68%)
  5                 0.83 (+67%)          0.87 (+74%)
  6                 0.86 (+71%)          0.89 (+78%)
  7                 0.88 (+75%)          0.91 (+81%)
  8                 0.89 (+78%)          0.92 (+83%)
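The table follows from the order statistics of the rotational wait: the stripe completes only when the slowest of n uniformly random rotations arrives, whose mean is n/(n+1) and whose median m satisfies m^n = 1/2. A quick check (Python):

```python
from math import isclose

def mean_latency(n: int) -> float:
    # Mean of the maximum of n uniform(0, 1) rotational waits.
    return n / (n + 1)

def median_latency(n: int) -> float:
    # Median m solves m**n == 1/2, i.e. m = 2**(-1/n).
    return 0.5 ** (1 / n)

# Reproduce the n = 4 row of the table: 0.80 and 0.84 revolutions.
assert isclose(mean_latency(4), 0.80, abs_tol=0.005)
assert isclose(median_latency(4), 0.84, abs_tol=0.005)
```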

Usable capacity

Parity uses up the capacity of one drive in the array. (This can be seen by comparing it with RAID 4: RAID 5 distributes the parity data across the disks, while RAID 4 centralizes it on one disk, but the amount of parity data is the same.) If the drives vary in capacity, the smallest one sets the limit. Therefore, the usable capacity of a RAID 5 array is (N-1) \cdot S_{\mathrm{min}}, where N is the total number of drives in the array and S_{\mathrm{min}} is the capacity of the smallest drive in the array.

The number of hard disks that can belong to a single array is limited only by the capacity of the storage controller in hardware implementations, or by the OS in software RAID. One caveat is that unlike RAID 1, as the number of disks in an array increases, the probability of data loss due to multiple drive failures also increases. This is because there is a reduced ratio of "losable" drives (the number of drives that can fail before data is lost) to total drives.

RAID 6

RAID 6 extends RAID 5 by adding an additional parity block; thus it uses block-level striping with two parity blocks distributed across all member disks.

Performance (speed)

RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations. Performance varies greatly depending on how RAID 6 is implemented in the manufacturer's storage architecture – in software, firmware or by using firmware and specialized ASICs for intensive parity calculations. It can be as fast as a RAID-5 system with one fewer drive (same number of data drives).[10]

Efficiency (potential waste of storage)

RAID 6 is no less space efficient than RAID 5 with a hot spare drive when used with a small number of drives, but as arrays become bigger and have more drives, the loss in storage capacity becomes less important, although the probability of data loss is greater with larger arrays. RAID 6 provides protection against data loss during an array rebuild, when a second drive is lost, a bad block read is encountered, or when a human operator accidentally removes and replaces the wrong disk drive when attempting to replace a failed drive.

The usable capacity of a RAID 6 array is (N-2) \cdot S_{\mathrm{min}}, where N is the total number of drives in the array and S_{\mathrm{min}} is the capacity of the smallest drive in the array.


According to the Storage Networking Industry Association (SNIA), the definition of RAID 6 is: "Any form of RAID that can continue to execute read and write requests to all of a RAID array's virtual disks in the presence of any two concurrent disk failures. Several methods, including dual check data computations (parity and Reed-Solomon), orthogonal dual parity check data and diagonal parity, have been used to implement RAID Level 6."[11]

Computing parity

Two different syndromes need to be computed in order to allow the loss of any two drives. One of them, P, can be the simple XOR of the data across the stripes, as with RAID 5. A second, independent syndrome is more complicated and requires the assistance of field theory.

To deal with this, the Galois field GF(m) is introduced with m=2^k, where GF(m) \cong F_2[x]/(p(x)) for a suitable irreducible polynomial p(x) of degree k. A chunk of data can be written as d_{k-1}d_{k-2}...d_0 in base 2 where each d_i is either 0 or 1. This is chosen to correspond with the element d_{k-1}x^{k-1} + d_{k-2}x^{k-2} + ... + d_1x + d_0 in the Galois field. Let D_0,...,D_{n-1} \in GF(m) correspond to the stripes of data across hard drives encoded as field elements in this manner (in practice they would probably be broken into byte-sized chunks). If g is some generator of the field and \oplus denotes addition in the field while concatenation denotes multiplication, then \mathbf{P} and \mathbf{Q} may be computed as follows (n denotes the number of data disks):

\mathbf{P} = \bigoplus_i{D_i} = \mathbf{D}_0 \;\oplus\; \mathbf{D}_1 \;\oplus\; \mathbf{D}_2 \;\oplus\; ... \;\oplus\; \mathbf{D}_{n-1}

\mathbf{Q} = \bigoplus_i{g^iD_i} = g^0\mathbf{D}_0 \;\oplus\; g^1\mathbf{D}_1 \;\oplus\; g^2\mathbf{D}_2 \;\oplus\; ... \;\oplus\; g^{n-1}\mathbf{D}_{n-1}

For a computer scientist, a good way to think about this is that \oplus is a bitwise XOR operator and g^i is the action of a linear feedback shift register on a chunk of data. Thus, in the formula above,[12] the calculation of P is just the XOR of each stripe. This is because addition in any characteristic two finite field reduces to the XOR operation. The computation of Q is the XOR of a shifted version of each stripe.

Mathematically, the generator is an element of the field such that g^i is different for each nonnegative i satisfying i < n.

If one data drive is lost, the data can be recomputed from P just as with RAID 5. If two data drives are lost, or a data drive and the drive containing P are lost, the data can be recovered from P and Q, or from just Q, respectively, using a more complex process; the details can be worked out with field theory. Suppose that D_i and D_j are the lost values with i \neq j. Using the other values of D, constants A and B may be found so that D_i \oplus D_j = A and g^iD_i \oplus g^jD_j = B:

A = \bigoplus_{\ell:\;\ell\not=i\;\mathrm{and}\;\ell\not=j}{D_\ell} = \mathbf{P} \;\oplus\; \mathbf{D}_0 \;\oplus\; \mathbf{D}_1 \;\oplus\; \dots \;\oplus\; \mathbf{D}_{i-1} \;\oplus\; \mathbf{D}_{i+1} \;\oplus\; \dots \;\oplus\; \mathbf{D}_{j-1} \;\oplus\; \mathbf{D}_{j+1} \;\oplus\; \dots \;\oplus\; \mathbf{D}_{n-1}

B = \bigoplus_{\ell:\;\ell\not=i\;\mathrm{and}\;\ell\not=j}{g^{\ell}D_\ell} = \mathbf{Q} \;\oplus\; g^0\mathbf{D}_0 \;\oplus\; g^1\mathbf{D}_1 \;\oplus\; \dots \;\oplus\; g^{i-1}\mathbf{D}_{i-1} \;\oplus\; g^{i+1}\mathbf{D}_{i+1} \;\oplus\; \dots \;\oplus\; g^{j-1}\mathbf{D}_{j-1} \;\oplus\; g^{j+1}\mathbf{D}_{j+1} \;\oplus\; \dots \;\oplus\; g^{n-1}\mathbf{D}_{n-1}

Multiplying both sides of the equation for B by g^{-i} and adding it to the equation for A yields (g^{j-i}\oplus1)D_j = g^{-i}B\oplus A and thus a solution for D_j, which may then be used to compute D_i = A \oplus D_j.
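The P/Q computation and the two-drive recovery can be sketched in GF(2^8); the field polynomial 0x11d and generator g = 2 are illustrative choices (any generator works), and the helper names are hypothetical:

```python
POLY = 0x11d  # x^8 + x^4 + x^3 + x^2 + 1, irreducible over GF(2)
G = 2         # generator used for the Q syndrome

def gf_mul(a: int, b: int) -> int:
    """Carry-less 'peasant' multiplication in GF(2^8), reducing mod POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return r

def gf_pow(a: int, n: int) -> int:
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a: int) -> int:
    return gf_pow(a, 254)  # a^255 = 1 for nonzero a, so a^254 = a^-1

def pq_syndromes(data: list) -> tuple:
    """P = XOR of all data bytes; Q = XOR of g^i * D_i."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(gf_pow(G, i), d)
    return p, q

def recover_two(data: list, i: int, j: int, p: int, q: int) -> tuple:
    """Recover lost bytes D_i and D_j; entries data[i], data[j] are ignored."""
    a, b = p, q  # become A = D_i ^ D_j and B = g^i D_i ^ g^j D_j
    for l, d in enumerate(data):
        if l != i and l != j:
            a ^= d
            b ^= gf_mul(gf_pow(G, l), d)
    gi, gj = gf_pow(G, i), gf_pow(G, j)
    # Substituting D_i = A ^ D_j into B = g^i D_i ^ g^j D_j gives
    # D_j = (B ^ g^i A) / (g^i ^ g^j).
    dj = gf_mul(b ^ gf_mul(gi, a), gf_inv(gi ^ gj))
    return a ^ dj, dj

# Four data bytes (one per data disk); lose disks 1 and 3, then recover.
data = [0x12, 0x34, 0x56, 0x78]
p, q = pq_syndromes(data)
d1, d3 = recover_two(data, 1, 3, p, q)
assert (d1, d3) == (0x34, 0x78)
```

In practice these per-byte operations are applied independently to every byte of each stripe, usually via precomputed log/antilog tables rather than bitwise multiplication.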

The computation of Q is CPU intensive compared to the simplicity of P. Thus, a RAID 6 implemented in software will have a more significant effect on system performance, and a hardware solution will be more complex.

Non-standard RAID levels and non-RAID drive architectures

Non-standard RAID levels include:

  • RAID 1.5
  • RAID 7 (a hardware-supported, proprietary RAID developed by Storage Computer Corp. of Nashua, NH)
  • RAID S or parity RAID
  • Matrix RAID
  • RAID-K
  • RAIDn
  • Linux MD RAID 10
  • IBM ServeRAID 1E
  • unRAID
  • ineo Complex RAID
  • Drobo BeyondRAID
  • Synology Hybrid RAID
  • Microsoft Storage Spaces

There are also non-RAID drive architectures, which are referred to by similar acronyms, notably SLED, Just a Bunch of Disks, SPAN/BIG, and MAID.


External links

  • RAID Calculator for Standard RAID Levels and Other RAID Tools
  • IBM summary on RAID levels
  • RAID 5 Parity explanation and checking tool.
  • Dell animations and details on RAID levels 0, 1, 5
  • The Open-E Blog. "How does RAID 5 work? The Shortest and Easiest explanation ever!"