9/7/2010    RAID - Wikipedia, the free encyclopedia

RAID 0 (block-level striping without parity or mirroring; minimum 2 disks, space efficiency 1, no fault tolerance): the risk of failure increases with more disks in the array (at a minimum, catastrophic data loss is twice as likely compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 volume, the data is broken into fragments called blocks. The number of blocks is dictated by the stripe size, which is a configuration parameter of the array. The blocks are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking, so any error is uncorrectable. More disks in the array means higher bandwidth, but greater risk of data loss.

RAID 1 (mirroring without parity or striping; minimum 2 disks, space efficiency 1/n, tolerates n−1 disk failures): data is written identically to multiple disks (a "mirrored set"). Although many implementations create sets of 2 disks, sets may contain 3 or more disks. The array provides fault tolerance from disk errors or failures and continues to operate as long as at least one drive in the mirrored set is functioning. Increased read performance occurs when using a multi-threaded operating system that supports split seeks, as well as a very small performance reduction when writing. Using RAID 1 with a separate controller for each disk is sometimes called duplexing.
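The striping and mirroring schemes described in these entries, together with the XOR parity used by the parity-based levels further down the table, can be sketched in a few lines of Python. This is an illustrative model only: "disks" are lists of byte blocks, and the data, disk counts, and block sizes are chosen arbitrarily for the example rather than taken from any real controller.

```python
def stripe(data: bytes, num_disks: int, block_size: int):
    """RAID 0: split data into blocks and distribute them round-robin."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    disks = [[] for _ in range(num_disks)]
    for i, block in enumerate(blocks):
        disks[i % num_disks].append(block)
    return disks

def unstripe(disks, total_blocks):
    """Reassemble the original data by reading blocks in round-robin order."""
    return b"".join(disks[i % len(disks)][i // len(disks)]
                    for i in range(total_blocks))

def xor_parity(blocks):
    """RAID 3/4/5-style parity block: the bytewise XOR of the data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

data = b"ABCDEFGHIJKL"
disks = stripe(data, num_disks=3, block_size=2)
assert unstripe(disks, total_blocks=6) == data

# A lost block can be rebuilt by XOR-ing the parity block with the surviving
# blocks of the stripe, which is how a single-drive failure is masked.
stripe_blocks = [b"AB", b"CD", b"EF"]
parity = xor_parity(stripe_blocks)
rebuilt = xor_parity([parity, stripe_blocks[0], stripe_blocks[2]])  # disk 1 lost
assert rebuilt == stripe_blocks[1]
```

Mirroring (RAID 1) needs no reconstruction arithmetic at all: each disk holds a full copy, so a surviving copy is simply read directly.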

RAID 2 (bit-level striping with dedicated Hamming-code parity; minimum 3 disks, space efficiency 1 − 1/n · log2(n−1); tolerates one failed disk, with the Hamming code identifying a disk as corrupt when the failure is not found by anything else): all disk spindle rotation is synchronized, and data is striped such that each sequential bit is on a different disk. Hamming-code parity is calculated across corresponding bits on disks and stored on one or more parity disks. Extremely high data transfer rates are possible.

RAID 3 (byte-level striping with dedicated parity; minimum 3 disks, space efficiency 1 − 1/n, tolerates 1 failed disk): all disk spindle rotation is synchronized, and data is striped such that each sequential byte is on a different disk. Parity is calculated across corresponding bytes on disks and stored on a dedicated parity disk. Very high data transfer rates are possible.

RAID 4 (block-level striping with dedicated parity; minimum 3 disks, space efficiency 1 − 1/n, tolerates 1 failed disk): identical to RAID 5, but confines all parity data to a single disk, which can create a performance bottleneck. In this setup, files can be distributed between multiple disks. Each disk operates independently, which allows I/O requests to be performed in parallel, though data transfer speeds can suffer due to the type of parity. Error detection is achieved through dedicated parity, stored on a separate, single disk unit.

RAID 5 (block-level striping with distributed parity; minimum 3 disks, space efficiency 1 − 1/n, tolerates 1 failed disk): distributed parity requires all drives

but one to be present to operate; drive failure requires replacement, but the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. The array will have data loss in the event of a second drive failure and is vulnerable until the data that was on the failed drive is rebuilt onto a replacement drive. A single drive failure in the set will result in reduced performance of the entire set until the failed drive has been replaced and rebuilt.

RAID 6 (block-level striping with double distributed parity; minimum 4 disks, space efficiency 1 − 2/n, tolerates 2 failed disks): provides fault tolerance from two drive failures; the array continues to operate with up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems. This becomes increasingly important as large-capacity drives lengthen the time needed to recover from the failure of a single drive. Single-parity RAID levels are as vulnerable to data loss as a RAID 0 array until the failed drive is replaced and its data rebuilt; the larger the drive, the longer the rebuild will take. Double parity gives time to rebuild the array without the data being at risk if a single additional drive fails before the rebuild is complete.

Nested (hybrid) RAID

Main article: Nested RAID levels

In what was originally termed hybrid RAID,[5] many storage controllers allow RAID levels to be nested. The elements of a RAID may be either individual disks or RAIDs themselves. Nesting more than two deep is unusual.

As there is no basic RAID level numbered larger than 9, nested RAIDs are usually unambiguously described by concatenating the numbers indicating the RAID levels, sometimes with a "+" in between. The order of the digits in a nested RAID designation is the order in which the nested array is built: for RAID 1+0, pairs of drives are first combined into two or more RAID 1 arrays (mirrors), and then the resulting RAID 1 arrays are combined into a RAID 0 array (stripes). It is also possible to combine stripes into mirrors (RAID 0+1). The final step is known as the top array. When the top array is a RAID 0 (such as in RAID 10 and RAID 50) most vendors omit the "+", though RAID 5+0 is clearer.

RAID 0+1: striped sets in a mirrored set (minimum four disks; even number of disks) provides fault tolerance and improved performance but increases complexity.
The key difference from RAID 1+0 is that RAID 0+1 creates a second striped set to mirror a primary striped set. The array continues to operate with one or more drives failed in the same mirror set, but if drives fail on both sides of the mirror, the data on the RAID system is lost.

RAID 1+0: mirrored sets in a striped set (minimum two disks but more commonly four disks to take advantage of speed benefits; even number of disks) provides fault tolerance and improved performance but increases complexity.
The key difference from RAID 0+1 is that RAID 1+0 creates a striped set from a series of mirrored drives. In a failed disk situation, RAID 1+0 performs better because all the remaining disks continue to be used.
The array can sustain multiple drive losses so long as no mirror loses all its drives.

RAID 5+1: mirrored striped set with distributed parity (some manufacturers label this as RAID 53).

Whether an array runs as RAID 0+1 or RAID 1+0 in practice is often determined by the evolution of the storage system. A RAID controller might support upgrading a RAID 1 array to a RAID 1+0 array on the fly, but require a lengthy offline rebuild to upgrade from RAID 1 to RAID 0+1. With nested arrays, sometimes the path of least disruption prevails over achieving the preferred configuration.

New RAID classification

In 1996, the RAID Advisory Board introduced an improved classification of RAID systems.[citation needed] It divides RAID into three types: failure-resistant disk systems (which protect against data loss due to disk failure), failure-tolerant disk systems (which protect against loss of data access due to failure of any single component), and disaster-tolerant disk systems (which consist of two or more independent zones, either of which provides access to stored data).

The original "Berkeley" RAID classifications are still kept as an important historical reference point, and also to recognize that RAID levels 0-6 successfully define all known data mapping and protection schemes for disk. Unfortunately, the original classification caused some confusion due to the assumption that higher RAID levels imply higher redundancy and performance. This confusion was exploited by RAID system manufacturers, and gave birth to products with such names as RAID-7, RAID-10, RAID-30, RAID-S, etc. The new system describes the data availability characteristics of the RAID system rather than the details of its implementation.

The next list provides criteria for all three classes of RAID:

- Failure-resistant disk systems (FRDS) (meets a minimum of criteria 1-6):
1. Protection against data loss and loss of access to data due to disk drive failure
2. Reconstruction of failed drive content to a replacement drive
3. Protection against data loss due to a "write hole"
4. Protection against data loss due to host and host I/O bus failure
5. Protection against data loss due to replaceable unit failure
6. Replaceable unit monitoring and failure indication

- Failure-tolerant disk systems (FTDS) (meets a minimum of criteria 7-15):
7. Disk automatic swap and hot swap
8. Protection against data loss due to cache failure
9. Protection against data loss due to external power failure
10. Protection against data loss due to a temperature out of operating range
11. Replaceable unit and environmental failure warning
12. Protection against loss of access to data due to device channel failure
13. Protection against loss of access to data due to controller module failure
14. Protection against loss of access to data due to cache failure
15. Protection against loss of access to data due to power supply failure

- Disaster-tolerant disk systems (DTDS) (meets a minimum of criteria 16-21):
16. Protection against loss of access to data due to host and host I/O bus failure
17. Protection against loss of access to data due to external power failure
18. Protection against loss of access to data due to component replacement
19. Protection against loss of data and loss of access to data due to multiple disk failure
20. Protection against loss of access to data due to zone failure
21.
Long-distance protection against loss of data due to zone failure

Non-standard levels

Main article: Non-standard RAID levels

Many configurations other than the basic numbered RAID levels are possible, and many companies, organizations, and groups have created their own non-standard configurations, in many cases designed to meet the specialised needs of a small niche group. Most of these non-standard RAID levels are proprietary.

Storage Computer Corporation, now defunct, used to call a cached version of RAID 3 and 4 "RAID 7".

EMC Corporation used to offer RAID S as an alternative to RAID 5 on their Symmetrix systems. Their latest generations of Symmetrix, the DMX and the V-Max series, do not support RAID S (instead they support RAID-1, RAID-5 and RAID-6).

The ZFS filesystem, available in Solaris, OpenSolaris and FreeBSD, offers RAID-Z, which solves RAID 5's write hole problem.

Hewlett-Packard's Advanced Data Guarding (ADG) is a form of RAID 6.

NetApp's Data ONTAP uses RAID-DP (also referred to as "double", "dual", or "diagonal" parity), which is a form of RAID 6; unlike many RAID 6 implementations, however, it does not use distributed parity as in RAID 5. Instead, two unique parity disks with separate parity calculations are used. This is a modification of RAID 4 with an extra parity disk.

Accusys Triple Parity (RAID TP) implements three independent parities by extending RAID 6 algorithms on its FC-SATA and SCSI-SATA RAID controllers to tolerate three-disk failure.

Linux MD RAID10 (RAID10) implements a general RAID driver that defaults to a standard RAID 1 with 2 drives, and a standard RAID 1+0 with four drives, but can have any number of drives, including odd numbers. MD RAID10 can run striped and mirrored, even with only two drives with the f2 layout (mirroring with striped reads, giving the read performance of RAID 0; normal Linux software RAID 1 does not stripe reads, but can read in parallel[6]).[7]

Infrant (now part of Netgear) X-RAID offers dynamic expansion of a RAID 5 volume without having to back up or restore the existing content: add larger drives one at a time, let the array resync, then add the next drive until all drives are installed. The resulting volume capacity is increased without user downtime. (This is also possible in Linux when using the mdadm utility, and has been possible in the EMC Clariion and HP MSA arrays for several years.) The new X-RAID2 found on x86 ReadyNas (that is, ReadyNas with Intel CPUs) offers dynamic expansion of a RAID-5 or RAID-6 volume (note that X-RAID2 dual redundancy is not available on all x86 ReadyNas) without having to back up or restore the existing content.
A major advantage over X-RAID is that with X-RAID2 you do not need to replace all the disks to get extra space; you only need to replace two disks using single redundancy, or four disks using dual redundancy, to get more redundant space.

BeyondRAID, created by Data Robotics and used in the Drobo series of products, implements both mirroring and striping simultaneously or individually, dependent on disk and data context. It offers expandability without reconfiguration, the ability to mix and match drive sizes, and the ability to reorder disks. It supports NTFS, HFS+, FAT32, and EXT3 file systems.[8] It also uses thin provisioning to allow for single volumes up to 16 TB, depending on host operating system support.

Hewlett-Packard's EVA series arrays implement vRAID - vRAID-0, vRAID-1, vRAID-5, and vRAID-6. The EVA allows drives to be placed in groups (called Disk Groups) that form a pool of data blocks on top of which the RAID level is implemented. Any Disk Group may have "virtual disks" or LUNs of any vRAID type, including mixing vRAID types in the same Disk Group - a unique feature. vRAID levels are more closely aligned to nested RAID levels - vRAID-1 is actually a RAID 1+0 (or RAID 10), vRAID-5 is actually a RAID 5+0 (or RAID 50), etc. Also, drives may be added on-the-fly to an existing Disk Group, and the existing virtual disks' data is redistributed evenly over all the drives, thereby allowing dynamic performance and capacity growth.

IBM (among others) has implemented RAID 1E (Level 1 Enhanced). With an even number of disks it is similar to a RAID 10 array, but, unlike a RAID 10 array, it can also be implemented with an odd number of drives. In either case, the total available disk space is n/2. It requires a minimum of three drives.

Hadoop has a RAID system that generates a parity file by XOR-ing a stripe of blocks in a single HDFS file. More details can be found here.[9]

Data backups

A RAID system used as a main drive is not a replacement for backing up data.
In parity configurations, RAID provides backup-like protection against catastrophic data loss caused by physical damage or errors on a single drive. Many other features of backup systems cannot be provided by RAID arrays alone. The most notable

is the ability to restore an earlier version of data, which is needed to protect against software errors causing unwanted data to be written to the disk, and to recover from user error or malicious deletion. RAID can also be overwhelmed by catastrophic failure that exceeds its recovery capacity and, of course, the entire array is at risk of physical damage by fire, natural disaster, or human forces. RAID is also vulnerable to controller failure, since it is not always possible to migrate a RAID to a new controller without data loss.[10]

RAID drives can make excellent backup drives when employed as backup devices to main storage, particularly when located offsite from the main systems. However, the use of RAID as the only storage solution cannot replace backups.

Implementations

The distribution of data across multiple drives can be managed either by dedicated hardware or by software. When done in software, the software may be part of the operating system, or it may be part of the firmware and drivers supplied with the card.

Operating system based ("software RAID")

Software implementations are now provided by many operating systems. A software layer sits above the (generally block-based) disk device drivers and provides an abstraction layer between the logical drives (RAIDs) and physical drives.
The most common levels are RAID 0 (striping across multiple drives for increased space and performance) and RAID 1 (mirroring two drives), followed by RAID 1+0, RAID 0+1, and RAID 5 (data striping with parity).

Apple's Mac OS X Server[11] and Mac OS X[12] support RAID 0, RAID 1 and RAID 1+0.

FreeBSD supports RAID 0, RAID 1, RAID 3, and RAID 5, and all layerings of the above, via GEOM modules[13][14] and ccd,[15] as well as supporting RAID 0, RAID 1, RAID-Z, and RAID-Z2 (similar to RAID-5 and RAID-6 respectively), plus nested combinations of those, via ZFS.

Linux supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6 and all layerings of the above.[16][17]

Microsoft's server operating systems support RAID 0, RAID 1, and RAID 5. Some of the Microsoft desktop operating systems support RAID as well: Windows XP Professional supports RAID level 0, in addition to spanning multiple disks, but only when using dynamic disks and volumes. Windows XP supports RAID 0, 1, and 5 with a simple file patch.[18] RAID functionality in Windows is slower than hardware RAID, but allows a RAID array to be moved to another machine with no compatibility issues.

NetBSD supports RAID 0, RAID 1, RAID 4 and RAID 5 (and any nested combination of those, like 1+0) via its software implementation, named RAIDframe.

OpenBSD aims to support RAID 0, RAID 1, RAID 4 and RAID 5 via its software implementation, softraid.

OpenSolaris and Solaris 10 support RAID 0, RAID 1, RAID 5 (or the similar "RAID-Z" found only on ZFS), and RAID 6 (and any nested combination of those, like 1+0) via ZFS, and now have the ability to boot from a ZFS volume on both x86 and UltraSPARC. Through SVM, Solaris 10 and earlier versions support RAID 1 for the boot filesystem, and add RAID 0 and RAID 5 support (and various nested combinations) for data drives.

Software RAID has advantages and disadvantages compared to hardware RAID.
The software must run on a host server attached to the storage, and the server's processor must dedicate processing time to run the RAID software. The

additional processing capacity required for RAID 0 and RAID 1 is low, but parity-based arrays require more complex data processing during write or integrity-checking operations. As the rate of data processing increases with the number of disks in the array, so does the processing requirement. Furthermore, all the buses between the processor and the disk controller must carry the extra data required by RAID, which may cause congestion.

Over the history of hard disk drives, the increase in speed of commodity CPUs has been consistently greater than the increase in speed of hard disk drive throughput.[19] Thus, over time, for a given number of hard disk drives, the percentage of host CPU time required to saturate them has been dropping. For example, the Linux software md RAID subsystem is capable of calculating parity information at 6 GB/s (100% usage of a single core on a 2.1 GHz Intel "Core2" CPU as of Linux v2.6.26). A three-drive RAID 5 array using hard disks capable of sustaining a write of 100 MB/s will require parity to be calculated at the rate of 200 MB/s. This will require the resources of just over 3% of a single CPU core during write operations (parity does not need to be calculated for read operations on a RAID 5 array, unless a drive has failed).

Software RAID implementations may employ more sophisticated algorithms than hardware RAID implementations (for instance with respect to disk scheduling and command queueing), and thus may be capable of increased performance.

Another concern with operating system-based RAID is the boot process. It can be difficult or impossible to set up the boot process such that it can fail over to another drive if the usual boot drive fails. Such systems can require manual intervention to make the machine bootable again after a failure.
There are exceptions: the LILO bootloader for Linux, the loader for FreeBSD,[20] and some configurations of the GRUB bootloader natively understand RAID-1 and can load a kernel from it. If the BIOS recognizes a broken first disk and refers bootstrapping to the next disk, such a system will come up without intervention, but the BIOS might or might not do that as intended. A hardware RAID controller typically has explicit programming to decide that a disk is broken and fall through to the next disk.

Hardware RAID controllers can also carry battery-powered cache memory. For data safety in modern systems, the user of software RAID might need to turn off the write-back cache on the disk (though some drives have their own battery or capacitors on the write-back cache, a UPS, and/or implement atomicity in various ways, etc.). Turning off the write cache has a performance penalty that can, depending on the workload and how well command queuing in the disk system is supported, be significant. The battery-backed cache on a RAID controller is one solution for a safe write-back cache.

Finally, operating system-based RAID usually uses formats specific to the operating system in question, so it cannot generally be used for partitions that are shared between operating systems as part of a multi-boot setup. However, it allows RAID disks to be moved from one computer to a computer with an operating system or file system of the same type, which can be more difficult when using hardware RAID (for example, when one computer uses a hardware RAID controller from one manufacturer and another computer uses a controller from a different manufacturer, drives typically cannot be interchanged; and if the hardware controller dies before the disks do, data may become unrecoverable unless a hardware controller of the same type is obtained, unlike with firmware-based or software-based RAID).

Most operating system-based implementations allow RAIDs to be created from partitions rather than entire physical drives.
For instance, an administrator could divide an odd number of disks into two partitions per disk, mirrorpartitions across disks and stripe a volume across the mirrored partitions to emulate IBM's RAID 1E configuration.Using partitions in this way also allows mixing reliability levels on the same set of disks. For example, one couldhave a very robust RAID 1 partition for important files, and a less robust RAID 5 or RAID 0 partition for lessimportant data. (Some BIOS-based controllers offer similar features, e.g. Intel Matrix RAID.) Using two partitions

on the same drive in the same RAID is, however, dangerous. Having all partitions of a RAID-1 on the same drive will, obviously, make all the data inaccessible if the single drive fails. Likewise, in a RAID 5 array composed of four drives of 250 + 250 + 250 + 500 GB, with the 500-GB drive split into two 250-GB partitions, a failure of this drive will remove two partitions from the array, causing all of the data held on it to be lost.

Hardware-based

Hardware RAID controllers use different, proprietary disk layouts, so it is not usually possible to span controllers from different manufacturers. They do not require processor resources, the BIOS can boot from them, and tighter integration with the device driver may offer better error handling.

A hardware implementation of RAID requires at least a special-purpose RAID controller. On a desktop system this may be a PCI expansion card, a PCI-e expansion card, or built into the motherboard. Controllers supporting most types of drive may be used - IDE/ATA, SATA, SCSI, SSA, Fibre Channel, sometimes even a combination. The controller and disks may be in a stand-alone disk enclosure rather than inside a computer. The enclosure may be directly attached to a computer, or connected via a SAN. The controller hardware handles the management of the drives, and performs any parity calculations required by the chosen RAID level.

Most hardware implementations provide a read/write cache, which, depending on the I/O workload, will improve performance. In most systems the write cache is non-volatile (i.e.
battery-protected), so pending writes are not lost on a power failure.

Hardware implementations provide guaranteed performance, add no overhead to the local CPU complex, and can support many operating systems, as the controller simply presents a logical disk to the operating system.

Hardware implementations also typically support hot swapping, allowing failed drives to be replaced while the system is running.

However, inexpensive hardware RAID controllers can be slower than software RAID, because the dedicated CPU on the controller card is not as fast as the CPU in the computer/server. More expensive RAID controllers have faster CPUs capable of higher throughput and do not exhibit this slowness.

Firmware/driver-based RAID ("FakeRAID")

Operating system-based RAID doesn't always protect the boot process and is generally impractical on desktop versions of Windows (as described above). Hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early-stage bootup the RAID is implemented by the firmware; when a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded, the drivers take over.

These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit, not the RAID controller itself, thus introducing the aforementioned CPU overhead from which hardware controllers don't suffer. Firmware controllers can often use only certain types of hard drives in their RAID arrays (e.g. SATA for Intel Matrix RAID), as there is neither SCSI nor PATA support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards.
Before theirintroduction, a "RAID controller" implied that the controller did the processing, and the new type has becomeknown by some as "fake RAID" even though the RAID itself is implemented correctly. Adaptec calls them

"HostRAID". Some Linux distributions will refuse to work with "fake RAID".[citation needed]

Network-attached storage

Main article: Network-attached storage

While not directly associated with RAID, network-attached storage (NAS) is an enclosure containing disk drives and the equipment necessary to make them available over a computer network, usually Ethernet. The enclosure is basically a dedicated computer in its own right, designed to operate over the network without screen or keyboard. It contains one or more disk drives; multiple drives may be configured as a RAID.

Hot spares

Both hardware and software RAIDs with redundancy may support the use of hot spare drives: a drive physically installed in the array which is inactive until an active drive fails, at which point the system automatically replaces the failed drive with the spare, rebuilding the array with the spare drive included. This reduces the mean time to recovery (MTTR), though it doesn't eliminate it completely. Subsequent additional failure(s) in the same RAID redundancy group before the array is fully rebuilt can result in loss of the data; rebuilding can take several hours, especially on busy systems.

Rapid replacement of failed drives is important, as the drives of an array will all have had the same amount of use and may tend to fail at about the same time rather than randomly.[citation needed] RAID 6 without a spare uses the same number of drives as RAID 5 with a hot spare and protects data against simultaneous failure of up to two drives, but requires a more advanced RAID controller. Further, a hot spare can be shared by multiple RAID sets.

Reliability terms

Failure rate
Two different kinds of failure rates are applicable to RAID systems. Logical failure is defined as the loss of a single drive, and its rate is equal to the sum of the individual drives' failure rates. System failure is defined as loss of data, and its rate will depend on the type of RAID.
For RAID 0 this is equal to the logical failure rate, as there is no redundancy. For other types of RAID, it will be less than the logical failure rate, potentially approaching zero; its exact value will depend on the type of RAID, the number of drives employed, and the vigilance and alacrity of its human administrators.

Mean time to data loss (MTTDL)
In this context, the average time before a loss of data in a given array.[21] The mean time to data loss of a given RAID may be higher or lower than that of its constituent hard drives, depending upon what type of RAID is employed. The referenced report assumes times to data loss are exponentially distributed, which means 63.2% of all data loss will occur between time 0 and the MTTDL.

Mean time to recovery (MTTR)
In arrays that include redundancy for reliability, this is the time following a failure to restore an array to its normal failure-tolerant mode of operation. This includes time to replace a failed disk mechanism as well as time to rebuild the array (i.e. to replicate data for redundancy).

Unrecoverable bit error rate (UBE)

This is the rate at which a disk drive will be unable to recover data after application of cyclic redundancy check (CRC) codes and multiple retries.

Write cache reliability
Some RAID systems use RAM write cache to increase performance. A power failure can result in data loss unless this sort of disk buffer is supplemented with a battery to ensure that the buffer has enough time to write from RAM back to disk.

Atomic write failure
Also known by various terms such as torn writes, torn pages, incomplete writes, interrupted writes, non-transactional, etc.

Problems with RAID

Correlated failures

The theory behind the erro
