LS_QVL quad pro, swapping drive packs between units what is the result?

Started by hca, February 02, 2021, 09:53:33 AM

hca

I have 2 units, purchased at the same time in 2013, each with four 2 TB drives. Both started showing drive errors a few years ago and have been in the cupboard until now.

One has 3 good drives installed and is limping along in RAID 5, which is a different issue; the other just flashes red 7 times, with or without disks in. I suspect the frame firmware.

All 4 drives from the red-flash unit run up clean and have intact file systems, but I know there were some errors on them before.

All 4 drives report the same in the Debian box with fdisk -l, except for the disk identifier. But that's not to say they are without errors.

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EARX-00P
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0F3C1BD4-3189-42F0-BD5F-DF919F6B1192

Device        Start        End    Sectors  Size Type
/dev/sdb1      2048    2002943    2000896  977M Microsoft basic data
/dev/sdb2   2002944   12003327   10000384  4.8G Microsoft basic data
/dev/sdb3  12003328   12005375       2048    1M Microsoft basic data
/dev/sdb4  12005376   12007423       2048    1M Microsoft basic data
/dev/sdb5  12007424   14008319    2000896  977M Microsoft basic data
/dev/sdb6  14008320 3890633319 3876625000  1.8T Microsoft basic data
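A quick way to confirm all four drives really do share this layout is to strip the per-disk device column from the `fdisk -l` partition rows and diff what's left. This is just a sketch; the filter is my own and the device names are examples:

```shell
# Print a disk's partition rows without the /dev/sdXN column, so the
# layouts of two different disks can be diffed directly.
strip_dev() {
  awk '/^\/dev\// { sub(/^[^ ]+[ ]+/, ""); print }'
}

# Usage on real hardware (device names are examples, run as root):
#   diff <(fdisk -l /dev/sdb | strip_dev) <(fdisk -l /dev/sdc | strip_dev)
```

No output from the diff means the partition tables match apart from the identifiers.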


I put the 4-disk array in the good frame unit. This is the result; maybe the array is tied to the serial number of the unit, and that is causing the below?

Blue flashing LED after the start press, then:
4 green steady
4 green flash
4 green steady plus the yellow flash; ping replies start on the array's IP from the old unit.

The yellow flash count of 12, I read, is the error for "can't mount the array".

While it responds to ping, it refuses ssh connections, gives no web response, and is not found by NAS Navigator.

Nothing changes in 30 minutes.
Pressing the power button causes a few flutters on all 4 green LEDs, and the same on release, but no power-down occurs; the yellow 13-flash continues.


Is this the response I should expect from putting the disks in another known-good unit? I don't intend to try the 3 disks from the good unit in the red flasher.

If I flash new firmware into the red-flashing unit's TQFP flash chip, is it reasonable to expect it to at least try to run up the array, provided the hardware is all OK?


1000001101000

There are a couple of things that may be relevant:

1. The firmware is stored on the drives. You may have different versions of the boot loader on the devices, but this typically doesn't affect anything.
2. It is possible the riser board or main board has some sort of problem, but this is pretty rare, much rarer than drive problems.

From your description I'd suspect one or more drives is failing, which sometimes causes the unit to error out in the bootloader, resulting in the 7 red lights. If the drive(s) completely fail it should be able to boot without them, but sometimes these units will error out if a drive initially works but starts throwing errors later in the drive-detect process.

If you don't need data off them, I'd recommend blanking them and looking for reallocated sectors in the SMART data for each. Usually that's a pretty good indicator that a drive is failing.
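That check only takes a minute per drive with smartmontools installed. A sketch, with the device name as an example and the filter my own:

```shell
# Pull the raw reallocated / pending sector counts out of a drive's
# SMART attribute table. Expects `smartctl -A` output on stdin.
smart_sectors() {
  awk '/Reallocated_Sector_Ct|Current_Pending_Sector/ { print $2, $NF }'
}

# Usage on a real drive (device name is an example, run as root):
#   smartctl -A /dev/sdb | smart_sectors
# A nonzero raw value on either attribute suggests the drive is failing.
```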

Here's a thread on restoring the stock firmware starting with blank drives:
http://forums.buffalotech.com/index.php?topic=30419.0

You can also install Debian instead of the stock firmware if that interests you:
https://github.com/1000001101000/Debian_on_Buffalo

hca



OK, thanks for the reply. With that in mind I have done further investigation these last couple of days. Bit of a close-out.

On the rear label, most obvious near the MAC, it states "LS-QVL Series"; lower down by the serial, in small print, it states LS-QV8.0TL/R5-AP. Also manufactured Jul 2012.

Running in RAID 5, so there is no data recovery option from only 2 drives.

SSH access: I log in as admin and the session terminates as soon as the correct password is validated; probably an old SSH library in the firmware that is no longer accepted.
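Running the client with `ssh -v` shows exactly where the session drops. If it dies during key exchange rather than after authentication, re-enabling the legacy algorithms per-host sometimes gets a session with old firmware; a drop after a successful password, though, points at the NAS side instead. A sketch, with the host alias and address hypothetical:

```
# ~/.ssh/config entry (host name and address are hypothetical)
Host ls-qvl
    HostName 192.168.11.150
    User admin
    KexAlgorithms +diffie-hellman-group1-sha1
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
```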

On start it flashes blue, then goes to 7 red flashes. Network lights are active. Shutdown is not working, but it responds to the function button by changing from red flash back to blue. The drives all power up and seek, then go quiet. The fan starts and runs.
Opening it up reveals no damage or obvious fault; the extender board is seated OK. All chips are warm but nothing is running hot.

Given the above I suspect the boot firmware is OK and the processor comes up fine, and the failure comes when it looks for the drive boot tracks. The 7-red flash may mean a hardware issue with the SATA bridge.
So if I were to bother fixing it, I would look at replacing the SATA bridge chip first. Better to just load up a Linux machine with drives and serve them with NFS and Samba, in either RAID or a mirror (RAID 1): normal drive setups that are a damn sight easier to deal with than the Buffalo setup.
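For the record, that replacement setup is only a few commands plus two config stanzas. A sketch only: the device names, share path, and subnet below are all assumptions to adjust for your network:

```
# Create the mirror and file system (destroys data on both disks):
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
#   mkfs.ext4 /dev/md0 && mkdir -p /srv/share && mount /dev/md0 /srv/share

# /etc/exports (NFS):
/srv/share  192.168.1.0/24(rw,sync,no_subtree_check)

# /etc/samba/smb.conf (Samba):
[share]
   path = /srv/share
   read only = no
```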



For what it's worth, some observations on the drives from testing that may interest someone else down the track.

Putting them in a Debian PC, all 4 have the same partition data as read with fdisk.

In another identical good unit I tried each disk one by one, in any slot. Pertaining to the OS/firmware partitions, not the data partitions, 2 scenarios occur.

A drive will boot with flashing blue, go green, and answer ping after a minute or so; after another couple of minutes it will have ssh and web access,
NAS Navigator access, normal green light activity, and a red flash for error 16. Of course no RAID array is available.
Empty drive bays show red. Any drives that come up like this will all come up together, and shut down on a button press.
These drives all come up on their original IP. I deem such a drive good on the firmware front.

Otherwise it will only get as far as steady green with ping; that's it. The shutdown button does nothing, so you must pull the power plug.
Putting good drives in along with one like this does not help; the process stops at the same point. In my case that was 2 of the 4 drives.

Next I took two of these sick drives and put them in a Win7 box, to be told all the partitions needed formatting.
Disk Management in Administrative Tools showed all the partitions, as did Debian.

I deleted them, made a single partition with a drive letter and a quick format, and Windows is happy with it.
After 7 hours of disk check, both came up without any reported errors and work fine. A good idea to use a better test than the MS one to be sure of this.
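One option for that deeper test is the drive's own SMART long self-test, plus a full read scan with badblocks. A sketch; smartmontools is assumed, the device name is an example, and the log filter is my own:

```shell
# Report the outcome of the most recent SMART self-test. Expects
# `smartctl -l selftest` output on stdin.
last_selftest() {
  awk -F'  +' '/^# 1/ { print $3 }'
}

# Usage on a real drive (device name is an example, run as root):
#   smartctl -t long /dev/sdb        # start the test (hours on a 2 TB drive)
#   smartctl -l selftest /dev/sdb | last_selftest
# For a full surface read as well:
#   badblocks -sv /dev/sdb           # non-destructive read-only scan
```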
