Buffalo Forums

Products => Storage => Topic started by: mattew12 on October 05, 2009, 03:03:11 PM

Title: Questions about RAID level and volume migration in an Areca-attached StoneFly SAN
Post by: mattew12 on October 05, 2009, 03:03:11 PM
   Hi,
The user manual and guides recommend weekly parity checks. How do we schedule unattended parity checks of the RAID sets? There is no schedule option in the UI; the 3ware 9xxxx series has a weekly/monthly task panel for this.

RAID-10 is described as two RAID-0 arrays that are mirrored. Is that the actual implementation? Common practice is the opposite: multiple RAID-1 mirrors striped together (RAID-1+0), and that is the layout recommended in many articles.
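The distinction matters for fault tolerance, not just naming. As a quick illustration (my own sketch, not vendor code), here is a small Python script that enumerates every two-disk failure pair on a hypothetical 4-disk array and counts how many pairs each layout survives:

```python
from itertools import combinations

DISKS = range(4)
MIRRORS_10 = [{0, 1}, {2, 3}]  # RAID-1+0: mirrored pairs, striped together
STRIPES_01 = [{0, 1}, {2, 3}]  # RAID-0+1: striped pairs, mirrored together

def survives_raid10(failed):
    # RAID-1+0 dies only when both members of one mirror fail
    return all(not mirror <= failed for mirror in MIRRORS_10)

def survives_raid01(failed):
    # RAID-0+1 needs at least one stripe set left fully intact
    return any(stripe.isdisjoint(failed) for stripe in STRIPES_01)

pairs = [set(p) for p in combinations(DISKS, 2)]
print("RAID-1+0 survives", sum(map(survives_raid10, pairs)), "of", len(pairs))
print("RAID-0+1 survives", sum(map(survives_raid01, pairs)), "of", len(pairs))
```

Striped mirrors survive 4 of the 6 possible two-disk failures; mirrored stripes survive only 2, which is why the articles recommend RAID-1+0.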

Volume migration to recover from a failed controller is not described in the user guide.

The default drive cache attribute is write-back. That gives big out-of-the-box performance numbers but is less safe than write-through.

The manual claims sequential read-ahead gives the best performance. That conflicts with Infortrend's recommendation.

The manual claims its RAID-5 and RAID-6 have similar performance, yet the diagram shows P+Q parity encoding. So unless there is sufficient cache RAM to buffer multiple tracks, P+Q RAID-6 incurs two parity writes for each data write, versus one for RAID-5.
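For the small-write (read-modify-write) case the I/O arithmetic is simple enough to sketch in Python (my own illustration, not from the manual): updating one data block means reading the old data and each old parity block, then writing them all back.

```python
def small_write_ios(parity_disks):
    """Disk I/Os for a read-modify-write of one data block:
    read old data + each old parity, then write them all back."""
    reads = 1 + parity_disks
    writes = 1 + parity_disks
    return reads, writes

print("RAID-5:", small_write_ios(1))  # (2, 2) -> one parity write per data write
print("RAID-6:", small_write_ios(2))  # (3, 3) -> two parity writes per data write
```

Large cache can hide this by coalescing full-stripe writes, which is presumably what the similar-performance claim depends on.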

The manual claims three days of battery hold-up for the cache, but the table only shows the battery holding for 52 hours with a 128 MB module. What happens when you use a 512 MB module?
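If hold-up time simply scaled inversely with module size (a rough assumption of mine that ignores any fixed controller draw), the 52-hour figure would extrapolate like this:

```python
HOLDUP_128M_HOURS = 52  # figure from the manual's table

def estimated_holdup(module_mb, ref_mb=128, ref_hours=HOLDUP_128M_HOURS):
    # Rough model: battery drain proportional to DRAM size only
    return ref_hours * ref_mb / module_mb

print(estimated_holdup(512), "hours")  # 13.0 hours under this model
```

Even under this optimistic model a 512 MB module would be nowhere near the claimed three days, so it would be good to see the vendor's actual numbers.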

Thanks in advance.

Kind regards,

Matt
