Buffalo Forums

Products => Storage => Topic started by: bmg444 on October 22, 2012, 01:50:51 AM

Title: Buffalo LS-W1.0TGL/R1
Post by: bmg444 on October 22, 2012, 01:50:51 AM

How do I repair a bad, missing, or corrupt XFS superblock on a Buffalo LS-W1.0TGL/R1 RAID0?

 

[The device is not visible over the network, and there is no serial port to SSH into, even if Buffalo allowed it, which I doubt they would. TFTP was only available on other models that had the serial port. The device runs, apparently normally, but does nothing useful at all. NAS Navigator can't see it even with a direct, known-good Ethernet cable. Therefore, the firmware revision is unknown.]

 

 

Backstory: 

 

A user has lost access to data stored on a Buffalo LS-W1.0TGL/R1 NAS. The unit has the default filesystem setup, using the entire available drive space for storage as RAID0.

 

The drives contain the primary and only copies of digital photos of her children since their birth, and other sentimental items. She thought that by saving it on the device she was performing a "backup". This is the only copy of the data in existence, if the data still resides on the drives. Buffalo phone support has not been helpful thus far, according to the user. Therefore, I have ended up with this project.

 

The partition (table listing is below) uses an XFS file system. 

 

This article recounts practically the same problem we are dealing with. The XFS component has made the project particularly enjoyable for the user and myself! :-)

http://forums.popphoto.com/showthread.php?614198-Beware-of-Buffalo-drives

 

 

I have been using Ubuntu 12.04 Secure Remix installed on a computer, and also Ubuntu Rescue Remix as a LiveCD.

 

ATTEMPTS THUS FAR: 

 

After plugging the drives into Windows 7 and Mac OS, just to see if I would get lucky, I ran Matrox Kryptonite, which balked since its XFS support is weak. [XFS and the Mac's BSD Unix just made my eyes bleed.]

 

So, I rolled up my sleeves and used my trusty Ubuntu 12 laptop.  It's an HP laptop, so it keeps the room nice and toasty! ;-)

 

ddrescue under Ubuntu and related tools have been used. I rigged up direct SATA-to-USB cabling to one of the two drives inside the NAS and connected via USB.

 

sudo ddrescue -r 3 -C /dev/sdx /media/rescuedata/image1.img /media/rescuedata/logfile

 

The results were not useful. I tried running ddrescue several ways: one drive at a time, rescuing files individually from each image, and also appending one drive's image to the other's (-C). All of the images generated by ddrescue were unmountable. I tried to run mmls imagefile -b, but the mount command would not run. I tried

sudo mount -o loop,offset=16384 /media/rescuedata/image1.img /media/rescuedata2

<and>

sudo mount -t xfs -o loop,offset=16384 /media/rescuedata/image1.img /media/rescuedata2

 

The problem is that mmls appears to only support: dos, mac, bsd, sun, gpt.

mmls -t gpt -i raw /media/rescuedata/rescue1.img 

yields: 

"Invalid Magic Value", and the rest of the message varies per partition type (bsd/gpt/dos/sun, etc.),

so I can't really see the partition offset, and therefore can't really mount the image.
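For what it's worth, if the image is a whole-disk copy and the partition table shown later in this post is intact, the mount offset doesn't need to be guessed: it is the start sector of the XFS data partition times the sector size. A quick sanity check in Python (the start sector 14024808 for /dev/sdc6 and the 512-byte sector size are taken from the fdisk listing below; the interpretation that this is the partition to mount is my assumption):

```python
SECTOR_SIZE = 512        # logical sector size reported by fdisk
START_SECTOR = 14024808  # start of /dev/sdc6, the large XFS data partition

# Byte offset to pass to mount's offset= option for a whole-disk image
offset = START_SECTOR * SECTOR_SIZE
print(offset)  # 7180701696
```

That would make the mount attempt something like `sudo mount -t xfs -o loop,ro,offset=7180701696 /media/rescuedata/image1.img /media/rescuedata2` (read-only added for safety). Note that the offset=16384 used above does not correspond to any partition start in that table.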

 

I am running Foremost right now, and that is going to take a while, so I thought I would ask the forums for their thoughts. I will update this post if Foremost does the job; otherwise, assume this is an open issue.

 

 

The xfs_repair/xfs_check utilities yielded no joy.

The output of xfs_check is:

xfs_check /dev/sdx

xfs_check: unexpected XFS SB magic number 0x4449534b

xfs_check: WARNING filesystem V1 dirs, limited functionality provided.

xfs_check: read failed: Invalid argument

xfs_check: data size check failed

cache_node_purge:refcount was 1, not zero (node=0xa0a4900)

xfs_check: cannot read root inode (22) 

bad superblock magic number 4449534b, giving up
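One possibly useful observation about that output: the reported magic 0x4449534b is ASCII for "DISK", while a genuine XFS superblock begins with the magic "XFSB" (0x58465342). A quick decode in Python (the first hex value is from the xfs_check output, the second is the standard XFS superblock magic; the inference that the tool is reading something other than the XFS partition, e.g. the raw whole-disk device, is my guess):

```python
import struct

reported = 0x4449534B   # magic number xfs_check complained about
xfs_magic = 0x58465342  # expected XFS superblock magic

# Decode both as big-endian 4-byte ASCII strings
print(struct.pack(">I", reported))   # b'DISK'
print(struct.pack(">I", xfs_magic))  # b'XFSB'
```

Finding "DISK" where "XFSB" should be suggests xfs_check was pointed at the start of the whole disk (or at some vendor metadata), not at the XFS data partition; running it against /dev/sdx6, or against the image at the partition's byte offset, might behave differently.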

 

And when I try xfs_repair I get: 

xfs_repair -n /dev/sdx (runs for hours)

Phase 1 - find and verify superblock...

bad primary superblock - bad magic number !!!

attempting to find secondary superblock...

...found candidate secondary superblock... error reading superblock 115 -- seek to offset  493913702400 failed unable to verify superblock, continuing...

[or on another pass...]

...found candidate secondary superblock... error reading superblock 117 -- seek to offset 502503505920 failed unable to verify superblock, continuing...

[etc.; the above appears a few times, until finally]

...Sorry, could not find valid secondary superblock

Exiting now.

 

 

 

Here is the fdisk output for one of the two drives (both are identical):

 

Disk /dev/sdc: 500.1 GB, 500107862016 bytes

255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0xf1bab5ab

 

  Device Boot      Start         End      Blocks   Id  System

/dev/sdc1              63     2008124     1004031   83  Linux

/dev/sdc2         2008125    12016619     5004247+  83  Linux

/dev/sdc4        12016620   976768064   482375722+   5  Extended

/dev/sdc5        12016683    14024744     1004031   82  Linux swap / Solaris

/dev/sdc6        14024808   974984849   480480021   83  Linux


TestDisk 6.13, Data Recovery Utility, November 2011

Christophe GRENIER <grenier@cgsecurity.org>

http://www.cgsecurity.org

 

Disk /dev/sdc - 500 GB / 465 GiB - CHS 60801 255 63

 

The harddisk (500 GB / 465 GiB) seems too small! (< 1261 GB / 1175 GiB)

Check the harddisk size: HD jumpers settings, BIOS detection...

 

The following partitions can't be recovered:

    Partition               Start        End    Size in sectors

  Linux                  873   1  1 120506 248 38 1921919744

>  Linux                33769   1  1 153402 248 38 1921919744

[ Continue ]

XFS 6.2+ - bitmap version, 984 GB / 916 GiB

 

 

[Next, I will be assembling a mock-up RAID1 or RAID0 array with USB cables after I return from the Holy GeekStube, MicroCenter. I thought I would try UFS Explorer on various OS platforms to see if it makes a difference.]

 

[I have a couple of other ideas, but it is becoming a question of time invested at this point. This is a reasonably slow process, and some of it is necessarily trial and error.]

 

Bryan Grant, Atlanta, GA

 

PS: 

Why does Buffalo use XFS on consumer devices? Most home users never think to buy and use a UPS, so why set them up for failure and data loss by selecting XFS? I'm just saying:

"Due to balance of speed and safety, there is the possibility to lose data on abrupt power loss (outdated journal metadata), but not filesystem consistence"

 

 

 

Title: Re: Buffalo LS-W1.0TGL/R1
Post by: davo on October 23, 2012, 06:25:31 AM

The only option is to mount the HDDs and use UFS Explorer (or other data recovery software).

 

However, the fact that it was RAID1 will make the recovery process a lot more complicated.

Title: Buffalo LS-W1.0TGL/R1 - Next step mount the RAID0 in linux using mdadm
Post by: bmg444 on October 26, 2012, 01:30:37 PM

Thanks for that suggestion. I had already been running UFS Explorer for a few days at that point.

One correction: the drives are now confirmed to be set up as RAID0, at least for the user data areas. (The setup is a bit complex: RAID1 for the system files, RAID0 for the user data space. But mounting the user data space should be plain vanilla.)

 

The software costs €100 (~$130) for a single-user personal license. It runs, but as its crippleware limitation it will not let you view any file over 64 kilobytes.

 

I will be using this process as a next step:
http://forums.buffalotech.com/t5/Storage/LS-WVL-RAID-Mode-Not-Configured/td-p/74706
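The linked process comes down to assembling the two data partitions into an md array and mounting it read-only. A minimal sketch, assuming the user-data partitions show up as /dev/sdc6 and /dev/sdd6 (matching partition 6 in the fdisk listing above) and that Buffalo used a plain 2-disk RAID0; the device names, array name, and 64k chunk size are all assumptions to verify against the actual hardware:

```shell
# First, check whether any md superblocks survived on the data partitions
sudo mdadm --examine /dev/sdc6 /dev/sdd6

# If metadata survives, assemble the array read-only and mount it
sudo mdadm --assemble --readonly /dev/md0 /dev/sdc6 /dev/sdd6

# If there is no md metadata, --build recreates the striping without
# writing any superblock (64k chunk is a guess at Buffalo's default):
# sudo mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sdc6 /dev/sdd6

sudo mkdir -p /mnt/nasdata
sudo mount -t xfs -o ro /dev/md0 /mnt/nasdata
```

With --build, drive order matters for RAID0 striping; if the mount fails with a superblock error, it may be worth retrying with the two partitions swapped.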

 

IMHO: if it fails, I will break down and hand UFS Explorer the money, since it does seem to work. Besides, since a lot of people have Buffalo drives, and they all seem to use XFS, I may just set up shop as a data recovery specialist fixing the Buffalo problems that their call center will not help with.

 

;-)

 

 

 
