Buffalo Forums
Products => Storage => : zyxtie April 09, 2018, 04:39:57 AM
-
Hi,
I'm currently running an LS441D with its original WD Blue 3 TB disks in a RAID 10 setup, but I want to replace these with WD Purple 4 TB disks and start from scratch (i.e. factory-reset the NAS and fully reconfigure all settings; yes, my data is backed up). This should be a simple operation, but since I had a bit of a meltdown on this NAS when I tried to replace a disk that had started to show errors, I'm somewhat hesitant about ordering the new disks.
My question is: can I simply do a factory reset of the NAS, replace the WD Blue 3 TB disks with WD Purple 4 TB disks, reconfigure the RAID, restore my data from backup, etc.? IMO it should be as simple as that, but I want to be certain I'm not missing anything.
Any ideas, suggestions?
-
Two options:
1. Replace disk 1 with a higher-capacity disk -> rebuild the RAID. Once done, replace disk 2 -> rebuild the RAID -> do the same with each disk in turn (each rebuild takes around a day!). Once all four disks are done -> delete the RAID and create a new RAID; only once this is done will you see the higher capacity. Total time to do this: about 4 days (no, I'm not joking!).
2. Replace all the disks at once -> get the TFTP file (Buffalo won't supply this directly) -> use this to put the unit into EM mode -> reinstall the firmware -> done. Total time: about 30 minutes.
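For what it's worth, if you do the swaps in option 1 by hand over SSH rather than letting the web UI drive them, each swap boils down to roughly the following. This is a dry-run sketch that only prints the commands; the device and array names (/dev/md11, /dev/sda6) are examples, so check your own layout in /proc/mdstat first:

```shell
# Dry-run sketch of replacing one member of a RAID1 leg. Names are examples;
# confirm yours with 'cat /proc/mdstat' before running anything for real.
run() { echo "+ $*"; }                  # swap 'echo' for real execution when ready

run mdadm /dev/md11 --fail /dev/sda6    # mark the outgoing disk as failed
run mdadm /dev/md11 --remove /dev/sda6  # detach it from the array
# power down, swap the physical disk, boot, let the firmware partition it, then:
run mdadm /dev/md11 --add /dev/sda6     # re-add; the ~1-day rebuild starts here
run cat /proc/mdstat                    # rebuild progress shows as a [=>...] line
```

The dry-run wrapper is just a safety net so nothing destructive happens until you've verified every device name.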
-
Thanks for replying.
Option (1), though long, seems to be the easiest option, so I'll try that first.
But although mdadm (acp_commander => ssh) tells me my RAID volume is healthy, the web interface no longer "sees" my RAID array (i.e. it says "No RAID array is configured") after my disk-replacement actions. I hope that by doing a factory reset first, the web interface will see the RAID again. I also got an informational alert that says "Checking system area", which it has been showing since last Friday afternoon.
[root@LS441D1C9 ~]# mdadm -D /dev/md10
/dev/md10:
        Version : 1.2
  Creation Time : Fri Feb 16 18:01:16 2018
     Raid Level : raid0
     Array Size : 5829487616 (5559.43 GiB 5969.40 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Feb 16 18:01:16 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : LS441D1C9:10  (local to host LS441D1C9)
           UUID : ec7f48d5:7fedc12b:be5c7b3c:c596ea88
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       9       11        0      active sync   /dev/md11
       1       9       12        1      active sync   /dev/md12
[root@LS441D1C9 ~]# mdadm -D /dev/md11
/dev/md11:
        Version : 1.2
  Creation Time : Fri Feb 16 18:01:14 2018
     Raid Level : raid1
     Array Size : 2914744128 (2779.72 GiB 2984.70 GB)
  Used Dev Size : 2914744128 (2779.72 GiB 2984.70 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Apr  9 13:21:37 2018
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : LS441D1C9:11  (local to host LS441D1C9)
           UUID : c78d6705:415a366a:35163a3d:ed81a757
         Events : 16720

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8       22        1      active sync   /dev/sdb6
[root@LS441D1C9 ~]# mdadm -D /dev/md12
/dev/md12:
        Version : 1.2
  Creation Time : Fri Feb 16 18:01:15 2018
     Raid Level : raid1
     Array Size : 2914744128 (2779.72 GiB 2984.70 GB)
  Used Dev Size : 2914744128 (2779.72 GiB 2984.70 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Apr  9 13:21:43 2018
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : LS441D1C9:12  (local to host LS441D1C9)
           UUID : f5467ed0:561d6e32:fc38eb3d:9317f11b
         Events : 17106

    Number   Major   Minor   RaidDevice State
       0       8       38        0      active sync   /dev/sdc6
       1       8       54        1      active sync   /dev/sdd6
[root@LS441D1C9 ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md10 : active raid0 md11[0] md12[1]
      5829487616 blocks super 1.2 512k chunks

md11 : active raid1 sda6[0] sdb6[1]
      2914744128 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md12 : active raid1 sdc6[0] sdd6[1]
      2914744128 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md0 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      999872 blocks [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active raid1 sda2[0] sdd2[4] sdc2[2] sdb2[1]
      4995008 blocks super 1.2 [4/4] [UUUU]

md2 : active raid1 sda5[0] sdd5[4] sdc5[2] sdb5[1]
      999424 blocks super 1.2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
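The pasted /proc/mdstat can also be sanity-checked mechanically: in the [UU] status field, an underscore marks a missing member. A small sketch run against an excerpt of the output above:

```shell
# Check md status lines for degraded members: a '_' inside the [UU...] field
# means a slot is missing. The input is an excerpt of the /proc/mdstat paste.
cat > /tmp/mdstat.txt <<'EOF'
md11 : active raid1 sda6[0] sdb6[1]
      2914744128 blocks super 1.2 [2/2] [UU]
md12 : active raid1 sdc6[0] sdd6[1]
      2914744128 blocks super 1.2 [2/2] [UU]
EOF
if grep -q '\[U*_[U_]*\]' /tmp/mdstat.txt; then
    echo "DEGRADED"
else
    echo "all members in sync"
fi
```

By that check the arrays above are all in sync, which matches the mdadm -D output; the mismatch is purely in what the web interface believes.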
So I'm a bit worried that I might have to resort to option (2) if (1) fails. Can you tell me how I can get the TFTP file (should I PM you for it?) and what the procedure is for getting the NAS into EM mode? There's a lot of info on how to do this (e.g. http://buffalo.nas-central.org/wiki/EM_Mode), and none of it is very straightforward, I'm afraid.
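From what I gather from that wiki page, the rough shape of the TFTP recovery would be something like the dry-run sketch below. I'm not certain the filenames (uImage.buffalo, initrd.buffalo), the expected server address (192.168.11.1), or the boot trigger apply to my exact model, so treat all of those as assumptions to verify against the recovery package and the wiki:

```shell
# Dry-run sketch of LinkStation TFTP recovery; only prints the commands.
# ASSUMPTIONS (unverified for the LS441D): the NAS requests its boot files from
# a TFTP server at 192.168.11.1, and the files are named uImage.buffalo and
# initrd.buffalo. The EM-mode boot trigger varies by model -- see the wiki.
run() { echo "+ $*"; }                          # swap 'echo' for real execution

run mkdir -p /srv/tftp                          # directory the TFTP daemon serves
run cp uImage.buffalo initrd.buffalo /srv/tftp  # boot files from the recovery package
run ip addr add 192.168.11.1/24 dev eth0        # address the NAS is said to look for
run in.tftpd --listen --secure /srv/tftp        # any TFTP server should do
# Then boot the NAS so it attempts a TFTP boot, and once it's up in EM mode,
# reinstall the firmware with Buffalo's updater.
```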