Hi,
I've owned this NAS for some years and I'm happy with it.
Some months ago, I lost one of the HDDs. I replaced it but had to rebuild the array from scratch (bad point); in the end everything was OK.
A few weeks ago I saturated the filesystem and had to do some cleaning.
A few days ago I woke up to a blinking NAS. After a quick check, I found that the two HDDs were still mounted but the data remained unavailable.
My last backup is 7 months old, and I would like to restore some recent pictures of my young daughter (babies are babies only for a moment...).
I mounted the HDDs in a Linux (Ubuntu) box and tried to access the data. The two HDDs show the same behaviour.
(The RAID arrays were auto-assembled by Ubuntu.)
lsblk :
sdb 8:16 0 931,5G 0 disk
├─sdb1 8:17 0 977M 0 part
│ └─md0 9:0 0 977M 0 raid1
├─sdb2 8:18 0 4,8G 0 part
│ └─md1 9:1 0 4,8G 0 raid1
├─sdb3 8:19 0 1M 0 part
├─sdb4 8:20 0 1M 0 part
├─sdb5 8:21 0 977M 0 part
│ └─md10 9:10 0 977M 0 raid1
└─sdb6 8:22 0 917,2G 0 part
└─md2 9:2 0 917,2G 0 raid1
blkid :
/dev/sdb1: UUID="6fe58b05-1c5b-4175-0637-9411a52746e2" TYPE="linux_raid_member"
/dev/sdb2: UUID="2a4461da-fe42-3263-267f-49d472a009ae" UUID_SUB="37b6ee71-50eb-04cc-e3b2-9cce05572284" LABEL="LS-WXL-EM225:1" TYPE="linux_raid_member"
/dev/sdb5: UUID="4ffcc174-e989-4d4f-7744-82a4cb235e43" UUID_SUB="55d02a4c-71e8-2a65-8b4b-158a16fbf92f" LABEL="LS-WXL-EM225:10" TYPE="linux_raid_member"
/dev/sdb6: UUID="8158dd32-9744-deeb-e634-df26b6173bf8" UUID_SUB="477c14b5-87df-0598-ec60-b33e0c2b6744" LABEL="LS-WXL225:2" TYPE="linux_raid_member"
/dev/md0: UUID="8f419901-c122-4607-870c-2faa91fa4c9e" SEC_TYPE="ext2" TYPE="ext3"
/dev/md1: UUID="973b9114-d397-42bc-8546-b9d63aa3643e" TYPE="ext3"
/dev/md10: TYPE="swap"
/dev/md2: UUID="a28edc0e-a0d2-43e1-993d-2f0093bceaf2" TYPE="xfs"
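In case it helps, I can also post the state of the arrays themselves; I would run something like this (mdadm is available since Ubuntu assembled the arrays — /dev/md2 and /dev/sdb6 are from my setup above):

```shell
# Overall state of all auto-assembled md arrays
cat /proc/mdstat

# Detailed state of the big data array (degraded? resyncing?)
sudo mdadm --detail /dev/md2

# On-disk RAID superblock of the member partition
sudo mdadm --examine /dev/sdb6
```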
I can mount the ext3 system partitions (md0, md1).
But if I try to mount the big XFS partition (md2), the mount process hangs.
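I was also thinking of trying a read-only mount that skips the log replay; I'm not 100% sure about the options, this is just what I found in the XFS man pages:

```shell
sudo mkdir -p /mnt/nas
# ro + norecovery: mount read-only WITHOUT replaying the XFS log.
# Files touched by the unreplayed log may appear missing or stale,
# but nothing on disk is modified.
sudo mount -t xfs -o ro,norecovery /dev/md2 /mnt/nas
```

If that mounts, I could at least copy the pictures off before attempting any repair.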
If I run "xfs_check /dev/md2", the result is:
"ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed. Mount the filesystem to replay the log, and unmount it before
re-running xfs_check. If you are unable to mount the filesystem, then use
the xfs_repair -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this."
If I try xfs_repair, I get:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
+ the same message as from xfs_check...
If I try xfs_repair -L, the process hangs...
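Next time it hangs I will watch the kernel log from another terminal, in case it prints something useful (I/O errors on sdb, for instance):

```shell
# Follow kernel messages live while the repair/mount is hanging
sudo dmesg --follow
# or equivalently, on systemd-based Ubuntu:
sudo journalctl -k -f
```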
Is there another, maybe lower-level, way to access my lost data?
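In any case, before I retry anything destructive like xfs_repair -L, my plan is to take a raw image of the array onto a spare disk and work on the copy (the paths here are just examples; ddrescue is in the Ubuntu "gddrescue" package):

```shell
# Copy everything readable from the array to an image file;
# the map file records progress and any bad sectors skipped
sudo ddrescue /dev/md2 /mnt/spare/md2.img /mnt/spare/md2.map

# Then work on the copy instead of the real array, e.g.:
# sudo losetup -f --show /mnt/spare/md2.img   # prints a loop device
# ...and run xfs_repair on that loop device
```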
Do you think a NAS malfunction could have triggered this situation?
Thanks for your advice,
Franck