In this mode, Write Back cache is automatically turned on if a BBU (Battery Back-Up module) is present and in optimal state. If the BBU is not present or not in optimal state, the controller automatically switches to Write Through mode. The Adaptive cache option is only applicable to adapter models that support hardware DDR cache and a BBU.
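As a rough illustration, the adaptive policy amounts to a simple selection based on BBU state. The sketch below is a Python model, not MRU firmware; the names bbu_present and bbu_optimal are assumptions for illustration.

def select_write_cache_mode(bbu_present: bool, bbu_optimal: bool) -> str:
    # Write Back is only safe while a healthy BBU can preserve cached
    # data across a power loss; otherwise fall back to Write Through.
    if bbu_present and bbu_optimal:
        return "write-back"
    return "write-through"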
This is a destructive process; it erases the data in the first 64 KB of the virtual disk.
In this mode, the mirrored or parity data is updated to ensure that the data on the virtual disk members is consistent.
Marvell RAID Utility
Cannot be blank.
No initialization will be performed.
In this mode, no extra blocks of data are pre-fetched into cache memory.
Physical Disk
(Redundant Array of Independent Disks). A family of techniques for managing multiple disks to deliver desirable cost, data availability, and performance characteristics to host environments.
Provides increased reading and writing speed by spreading the transfer of data across multiple channels and drives. However, RAID 0 does not provide fault-tolerance, so all of the data is lost if one or more physical disks fail.
Provides increased read performance because data can be requested in parallel. However, write performance is decreased because two writes are required for each write command. Also, RAID 1 has 50% capacity efficiency.
Features automatic fault tolerance and provides increased reading and writing speed by spreading the transfer of data across multiple channels and drives. However, RAID 10 uses only 50 percent of the total physical disk space, so scalability is limited and the cost per usable gigabyte is high.
Allows an odd number of disks to be used. Read requests can be satisfied by data read from either disk or both disks. Data is striped across drives and mirrored. RAID 1E can use an odd number of drives, with a minimum of three drives. RAID 1E data transfer performance is comparable to RAID 10, and RAID 1E can tolerate a single drive failure without data loss.
Requires parity updates, and its data can be read from each disk independently. A dedicated disk is required for hot spares. Data is striped across drives as in RAID 0, but with the addition of interleaved parity. RAID 5 can tolerate a single drive failure without data loss. However, RAID 5 write performance is reduced by the parity calculation; this penalty can be offset with hardware acceleration devices. RAID 5 has a minimum configuration of three drives and a maximum of eight drives.
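RAID 5 parity is conventionally the bytewise XOR of the data blocks in a stripe, which is what allows a single failed drive to be rebuilt. A minimal Python sketch (the block contents are made up for illustration):

from functools import reduce

def xor_blocks(blocks):
    # Bytewise XOR of equal-sized blocks; the same operation generates
    # parity and reconstructs a single missing block.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]  # three data blocks in one stripe
parity = xor_blocks(data)                       # written to the parity drive

# If one data block is lost, XOR of the survivors and the parity recovers it.
assert xor_blocks([data[0], data[2], parity]) == data[1]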
Data and parity are striped across all RAID 5 arrays. Read requests can occur simultaneously on every drive in an array. RAID 50 consists of a minimum of two RAID 5 sets across which data is striped. RAID 50 can tolerate a single drive failure without data loss. The minimum RAID 50 configuration is six drives and the maximum is 32 drives.
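The capacity figures quoted in the entries above (50 percent efficiency for the mirrored levels, one drive's worth of parity per RAID 5 set) can be summarized in a small helper. This is a generic sketch assuming equal-sized drives, not an MRU function; the RAID 50 case assumes exactly two RAID 5 sets.

def usable_capacity(level: str, drives: int, drive_size_gb: float) -> float:
    # Usable space for equal-sized drives under the RAID levels above.
    if level == "0":
        return drives * drive_size_gb           # striping only, no redundancy
    if level in ("1", "1E", "10"):
        return drives * drive_size_gb / 2       # mirroring: 50% efficiency
    if level == "5":
        return (drives - 1) * drive_size_gb     # one drive's worth of parity
    if level == "50":
        return (drives - 2) * drive_size_gb     # assumes two RAID 5 sets
    raise ValueError("unsupported level: " + level)

print(usable_capacity("5", 4, 1000.0))  # 3000.0 GB usable from four 1 TB drives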
In this mode, the adapter reads extra blocks from the hard drive into cache memory, assuming the data will be required by the application's next read command. For most sequential operations, enabling Read Ahead cache improves read performance. For random operations, enabling Read Ahead cache may slightly degrade read performance, since the extra read operations become unnecessary overhead.
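A toy model of the prefetch decision described above; the prefetch depth of one extra block is an assumption, since real controllers tune this internally.

def blocks_to_fetch(requested: int, read_ahead: bool, depth: int = 1) -> list:
    # With Read Ahead on, the controller also pulls the next block(s),
    # betting that the workload is sequential; with it off, only the
    # requested block is read.
    if read_ahead:
        return list(range(requested, requested + 1 + depth))
    return [requested]

print(blocks_to_fetch(100, read_ahead=True))   # [100, 101]: next block cached
print(blocks_to_fetch(100, read_ahead=False))  # [100]: no speculative read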
Two kinds are available: Read Ahead and No Read Ahead.
Available options are 16 KB, 32 KB, 64 KB (default), and 128 KB. For RAID 5 / 50, the stripe size is limited to 64 KB. For most applications, the 64 KB stripe size provides the best performance.
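Stripe size determines how a virtual-disk offset maps onto member drives. Below is a minimal sketch of that mapping for a plain striped (RAID 0 style) layout using the 64 KB default; it illustrates the concept, not the controller's exact on-disk layout.

STRIPE_SIZE = 64 * 1024  # default stripe size, in bytes

def locate(offset: int, num_drives: int):
    # Map a virtual-disk byte offset to (drive index, byte offset on drive).
    stripe_index = offset // STRIPE_SIZE   # which stripe unit overall
    drive = stripe_index % num_drives      # units rotate round-robin across drives
    row = stripe_index // num_drives       # stripe row on that drive
    return drive, row * STRIPE_SIZE + offset % STRIPE_SIZE

print(locate(200 * 1024, 4))  # fourth stripe unit -> (3, 8192): drive 3, 8 KB in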
In this mode, the controller write cache is enabled to improve write performance. Write data is temporarily stored in cache memory and flushed to the hard disk at an appropriate time. Because of the delay in writing the data to disk, there is a risk to data integrity if power is lost or the system hangs before the data is written to disk. A BBU (Battery Back-Up module) is recommended if Write Back mode is used.
In this mode, all write operations go to the hard disk before a completion status is returned to the OS.
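The practical difference between the two write policies is when completion is reported relative to the media write. The class below is a schematic Python model of that difference, not controller firmware.

class WriteCache:
    def __init__(self, write_back: bool):
        self.write_back = write_back
        self.dirty = []                    # cached blocks not yet on disk

    def write(self, block) -> str:
        if self.write_back:
            self.dirty.append(block)       # data exists only in cache here; this
            return "ok"                    # is the window a BBU protects
        self.write_to_disk(block)          # media write completes before the ack
        return "ok"

    def flush(self):
        # The controller flushes dirty data "at an appropriate time".
        while self.dirty:
            self.write_to_disk(self.dirty.pop(0))

    def write_to_disk(self, block):
        pass                               # stand-in for the actual media write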