Author Topic: No partition table on second disk from NAS  (Read 2683 times)

zzrg

  • Calf
  • *
  • Posts: 3
No partition table on second disk from NAS
« on: April 16, 2020, 11:51:17 AM »
I had 2 x 2TB HDDs in a Buffalo CloudStation Pro Duo NAS. Several years ago, I decided that I no longer needed the NAS and wanted to put the disks in an enclosure to connect them via USB. Foolishly, I clicked "Remove disk" in the web administration panel, thinking this would somehow allow me to mount the disks when attached via USB. It turned out that this was not the case (at least for the second disk). I was also unable to remount the disks after putting them back in the NAS.

I no longer have the NAS, but I still have the HDDs (cloudshare1 and cloudshare2), and now I finally have the time to try to recover the data.

This NAS supports JBOD, RAID0, and RAID1. Unfortunately, I cannot remember whether I had the disks in a RAID0 array or in JBOD. However, I am sure it was not RAID1: I used the full 4TB of disk space, and the NAS mounted the disks at two separate locations, cloudshare1 and cloudshare2.

As already indicated, I attached both HDDs via USB: one disk (cloudshare1) can be mounted, but the other (cloudshare2) cannot (manual recovery via, e.g., UFS Explorer works to some extent, though).

Strangely, cloudshare1 is mounted as a RAID1 array. Output of mdadm for cloudshare1:

Code: [Select]
➜  ~ sudo mdadm --examine /dev/sdb6                       
/dev/sdb6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : fffdc80a:25d33b5f:0be1a38d:204d070d
           Name : UNINSPECT-EM888:21
  Creation Time : Tue Dec 20 05:41:32 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3876622952 (1848.52 GiB 1984.83 GB)
     Array Size : 1938311340 (1848.52 GiB 1984.83 GB)
  Used Dev Size : 3876622680 (1848.52 GiB 1984.83 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=272 sectors
          State : clean
    Device UUID : bd313c92:f0e29bd4:de01bfb1:b9fbbbe2

    Update Time : Sun Apr  5 13:42:41 2020
       Checksum : 3dd971e4 - correct
         Events : 317549


   Device Role : Active device 0
   Array State : A. ('A' == active, '.' == missing, 'R' == replacing)

Here is the /etc/mdadm.conf from the NAS' OS partition:
Code: [Select]
ARRAY /dev/md127 UUID=3fe75e08:516527a9:c6b301b5:60cfb4cb
ARRAY /dev/md/1 metadata=1.2 UUID=3fff9623:1e04746a:85c2835f:7c463bdb name=UNINSPECT-EM888:1
ARRAY /dev/md/10 metadata=1.2 UUID=bec2d33d:321320a4:f2ecd9b0:9b68692e name=UNINSPECT-EM888:10
ARRAY /dev/md/21 metadata=1.2 UUID=fffdc80a:25d33b5f:0be1a38d:204d070d name=UNINSPECT-EM888:21

Output of fdisk -l when cloudshare1 is attached via USB:

Code: [Select]
Disk /dev/sdb: 1,84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: 3.0             
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E886515F-4153-44FD-9842-7C9A80F94BC3

Device        Start        End    Sectors  Size Type
/dev/sdb1      2048    2002943    2000896  977M Microsoft basic data
/dev/sdb2   2002944   12003327   10000384  4,8G Microsoft basic data
/dev/sdb3  12003328   12005375       2048    1M Microsoft basic data
/dev/sdb4  12005376   12007423       2048    1M Microsoft basic data
/dev/sdb5  12007424   14008319    2000896  977M Microsoft basic data
/dev/sdb6  14008320 3890633319 3876625000  1,8T Microsoft basic data


Disk /dev/md127: 976,96 MiB, 1024393216 bytes, 2000768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md126: 4,79 GiB, 5119135744 bytes, 9998312 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md125: 976,101 MiB, 1024446464 bytes, 2000872 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md124: 1,82 TiB, 1984830812160 bytes, 3876622680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Output of fdisk -l when cloudshare2 is attached via USB:

Code: [Select]
Disk /dev/sdb: 1,84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: 3.0             
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Running mdadm on cloudshare2 gives me this error:
Code: [Select]
➜  ~ sudo mdadm --examine /dev/sdb
mdadm: No md superblock detected on /dev/sdb.

There appears to be no partition table on this disk and I cannot mount it. Note that I did not format the disk; I am just not sure what exactly the "Remove disk" action in the web panel did.

I am completely lost. How can I recover data from cloudshare2?

1000001101000

  • Debian Wizard
  • Big Bull
  • *****
  • Posts: 1128
  • There's no problem so bad you cannot make it worse
Re: No partition table on second disk from NAS
« Reply #1 on: April 16, 2020, 12:43:46 PM »
I'll start with the usual disclaimer: if there is data you care about on this disk, consider hiring a professional service to attempt recovery.

That said.

I'm also not familiar with what ejecting the disk in the web UI does. Maybe someone here does; otherwise, you could probably poke around in the firmware and find the script it runs. Judging by the fact that the partitions are gone, it seems like it wipes the partition table; hopefully that's all it does. If it wipes the mdadm superblocks too, you'd have to research what can be done about that. I've never done it myself, but I've seen articles where people created a new superblock without modifying the rest of the disk in recovery scenarios. I'm guessing it wouldn't have wiped the data, because that would take hours.

If it were me, I would probably try something like this (rough commands for the first few steps are sketched after the list):

1. make an image of the drive to protect yourself against making things worse by accident (if you have that kind of space available)
2. use gdisk to copy the partition table from drive1 to a file, then restore it on drive2. If these disks are identical, the firmware would have created the same partition table on both, so this should work. If the disks aren't the same, you'd need to look at utilities for rebuilding partition tables.
3. use mdadm --examine to see if it finds the superblock from the array on partition 6; if so, you can try to start/mount it.
4. if not, look into creating a metadata-only superblock with the same parameters as the one on disk1 and try examine/start/mount again.
5. if that fails, start looking for data recovery utilities for XFS volumes.
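
Very roughly, and purely as a sketch (I'm guessing at device names: each disk shows up as /dev/sdb when attached on its own, and /dev/md42 is just an arbitrary name for the assembled array), steps 1-3 might look something like this:

Code: [Select]
# 1. Image the problem disk first. GNU ddrescue keeps going past read errors
#    and writes a map file so it can be resumed (plain dd also works if the drive is healthy).
sudo ddrescue -d /dev/sdb cloudshare2.img cloudshare2.map

# 2. With cloudshare1 attached, save its GPT to a file...
sudo sgdisk --backup=cloudshare1-table.bin /dev/sdb
#    ...then swap in cloudshare2 and write that table onto it.
#    This only rewrites the partition table, not the data area.
sudo sgdisk --load-backup=cloudshare1-table.bin /dev/sdb
#    (If both disks will ever be connected at the same time, run "sgdisk -G /dev/sdb"
#    afterwards so the two disks don't end up with identical GUIDs.)
sudo partprobe /dev/sdb
lsblk /dev/sdb

# 3. Check for the md superblock on the big data partition and, if it's there,
#    assemble the array read-only and try to mount it.
sudo mdadm --examine /dev/sdb6
sudo mdadm --assemble --readonly /dev/md42 /dev/sdb6   # add --run if it complains about missing members
sudo mount -o ro /dev/md42 /mnt

No guarantees any of that matches your setup exactly, so double-check the device names before running anything.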




zzrg

  • Calf
  • *
  • Posts: 3
Re: No partition table on second disk from NAS
« Reply #2 on: April 16, 2020, 02:46:03 PM »
Thank you so much! I will attempt to do the 5 steps you recommended.

Judging by the fact that the partitions are gone, it seems like it wipes the partition table; hopefully that's all it does.

Only the partitions on disk 2 are gone; I can mount disk 1 without a problem. I am not knowledgeable about file systems etc., but is it possible that disk 1 also includes the partitions of disk 2 in a JBOD setup, or maybe even LVM? Then again, I guess the "Remove disk" action in the web interface would then also have removed the partition table on disk 1.
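
Would something like the following tell me whether LVM or some other signature is involved on disk 1? (I am just guessing at the commands; disk 1 is attached as /dev/sdb here.)

Code: [Select]
# Show filesystem/RAID/LVM signatures on the disk and its partitions
sudo blkid /dev/sdb /dev/sdb?
# Show any signatures on the data partition without touching it
sudo wipefs --no-act /dev/sdb6
# List LVM physical volumes, if any are visible
sudo pvs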

Are you aware of any other config files in the NAS OS that might reveal whether the disks were in RAID0 or JBOD mode? There is no "level=" value in the mdadm.conf file.

1000001101000

  • Debian Wizard
  • Big Bull
  • *****
  • Posts: 1128
  • There's no problem so bad you cannot make it worse
Re: No partition table on second disk from NAS
« Reply #3 on: April 16, 2020, 06:04:55 PM »
Did you just “eject” disk2 or both?

Each disk would have its own partition table in all of these configurations. These NAS units don't use LVM (and LVM still sits on top of regular partitions anyway). JBOD refers to both "linear" arrays and non-RAID configs; either way, both disks would have a partition table.

You can see from the mdadm output that disk1 was configured as RAID1. It could be that both drives had separate single-disk RAID1 arrays (I guess).
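
Once you can read a superblock on the data partition of either disk, something like this should settle the RAID0/linear/RAID1 question (device name is just an example):

Code: [Select]
sudo mdadm --examine /dev/sdb6 | grep -E 'Raid Level|Raid Devices|Array State'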

In any case, the method I outlined earlier should allow you to restore the partition table and examine the mdadm superblock if it still exists (though I'd recommend imaging the drive first).

zzrg

  • Calf
  • *
  • Posts: 3
Re: No partition table on second disk from NAS
« Reply #4 on: April 17, 2020, 01:59:12 AM »
Did you just “eject” disk2 or both?

As far as I can remember, I "removed" both disks (I don't know whether that actually means eject). I will try to find out exactly what it does by looking into the sources, if I can find them.

Each disk would have its own partition table in all of these configurations. These NAS units don't use LVM (and LVM still sits on top of regular partitions anyway). JBOD refers to both "linear" arrays and non-RAID configs; either way, both disks would have a partition table.

You can see from the mdadm output that disk1 was configured as RAID1. It could be that both drives had separate single-disk RAID1 arrays (I guess).

Alright, that would make more sense. It is still strange that mdadm shows disk 1 configured as RAID1; I am sure RAID1 was not the setup I was running on these disks.

In any case, the method I outlined earlier should allow you to restore the partition table and examine the mdadm superblock if it still exists (though I'd recommend imaging the drive first).

Thanks, I will do this and then send an update.

1000001101000

  • Debian Wizard
  • Big Bull
  • *****
  • Posts: 1128
  • There's no problem so bad you cannot make it worse
Re: No partition table on second disk from NAS
« Reply #5 on: April 17, 2020, 09:27:10 AM »
There's one other thing to consider. You should probably start by looking at the SMART data for the drive and checking for pending/reallocated sectors. If the partition table disappeared because the drive is starting to fail, that may be cause for greater concern.
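
For example, something like this (assuming the drive is /dev/sdb and the smartmontools package is installed; some USB bridges need "-d sat" added before SMART data shows up):

Code: [Select]
# Overall health self-assessment
sudo smartctl -H /dev/sdb
# Reallocated / pending sector counts from the attribute table
sudo smartctl -A /dev/sdb | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'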

It wouldn't really change the approach much; it just makes copying the drive first more important, in case it's actually about to fail.