Author Topic: How to enable SMBv2 on Linkstation LS-WXL systems so it works with modern OSes  (Read 126716 times)

LesG

  • Calf
  • *
  • Posts: 5
Quote from: philadopolis
I had to do this a couple of years ago. Tonight I saw that a new firmware (1.75) was out. Against my better judgment, I installed it and promptly lost access to all of my files again, so I had to stumble my way through this solution again with the help of the Internet Archive (https://web.archive.org/web/20190718105044/http://nerdkey.co.uk:80/guides/enable-ssh-linkstation-stock-firmware/):


Thanks for that link, philadopolis. My Linkstation is on an older version of the firmware, and I have been ignoring the new-firmware messages purely because (like you) it was a while ago that I 'cracked' the LS-WXL. The 'how-to' instructions were on another (now gone) forum, so it's a relief to me that you have found that.

I may now upgrade it, but I'm tempted to put Debian on instead.
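
For anyone else landing here because of the thread title: once SSH access is enabled via that guide, the usual fix is a one-line Samba change. A rough sketch from memory (the config path and init script name on the stock firmware are assumptions, so check your own box):

Code:
# over SSH on the Linkstation; paths and service names may differ by firmware
vi /etc/samba/smb.conf        # add "max protocol = SMB2" under [global]
/etc/init.d/smb restart       # restart Samba so the new protocol ceiling applies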
« Last Edit: January 23, 2021, 01:44:07 am by LesG »

LesG

  • Calf
  • *
  • Posts: 5
I said...

Quote
I may now upgrade it, but I'm tempted to put Debian on instead.

I installed Debian successfully... at first... then things started going downhill. The default RAID0 partition (md2) is XFS and wouldn't mount; I was getting V1 inode errors. I was unable to get around that, and xfs_repair reported no superblock. I think the partition's version of XFS was too old for Debian's xfsprogs utilities. Then the /boot partition wouldn't mount, and the fstab became unreadable (as if it had become a binary file), yet it still booted??? It seemed as if I was running into one problem after another, so I restored everything back to stock (latest firmware).
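
For the record, this is roughly what I was trying, with xfs_repair in no-modify mode first so nothing gets written (device names are from my box and may differ on yours):

Code:
mdadm --assemble --scan          # bring up the md arrays from their superblocks
xfs_repair -n /dev/md2           # dry run: report problems, change nothing
mount -o ro /dev/md2 /mnt        # last resort: attempt a read-only mount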

Maybe I'll try again one day.

1000001101000

  • Debian Wizard
  • Big Bull
  • *****
  • Posts: 915
  • There's no problem so bad you cannot make it worse
That sounds... weird, to say the least.

There shouldn't be any XFS compatibility issues to worry about; I recently mounted an XFS volume from ~2006 on a modern system without issue. My first guess would be that you have a drive (or drives) starting to fail: some of what you describe would make sense if a bunch of read failures were happening.
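
If you want to rule that out, something like this (assuming smartmontools is installed and the first disk shows up as /dev/sda) will surface the usual failure indicators:

Code:
smartctl -H /dev/sda                                          # overall SMART health verdict
smartctl -A /dev/sda | grep -iE 'realloc|pending|uncorrect'   # bad-sector counters
dmesg | grep -iE 'ata[0-9]|I/O error'                         # kernel-level read errors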

LesG

  • Calf
  • *
  • Posts: 5
I've totally rebuilt the NAS now, to stock 1.75. That didn't go without problems either, so perhaps there was some underlying disk problem. The disks have been erased and reformatted and the stock firmware reflashed (from emergency mode), so hopefully I now have a clean system to work with.
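
For anyone repeating this, the wipe went roughly like the sketch below. It's destructive, and the partition names are assumptions for a two-bay LS-WXL, so double-check your own layout before running anything:

Code:
mdadm --stop /dev/md2            # stop the old array before touching its members
wipefs -a /dev/sda6 /dev/sdb6    # clear leftover filesystem/RAID signatures
# then reflash the stock firmware from emergency (TFTP) mode with LSUpdater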

I will try updating it to Debian again because I think there are advantages to it. I use a modified version of the 'custom sleep' program that worked with the Buffalo 'auto sleep' functionality; I suspect that program will need some modification.

In retrospect, and tbh, I think I missed a step from your 'post-installation' instructions (I was using Novaspirit Tech's YouTube video as my primary source, and he whizzed through the process). As a result, I trashed my md2 partition, and my attempts to recover it only made matters worse. I don't know RAID and should have been more circumspect in my actions. But that's how I learn: leaping in, trashing stuff and recovering from it! Not recommended practice, but it adds to the thrills :)

Edit: having successfully installed Debian and got the NAS up, restored and running, I set about modifying my 'custom sleep' programs. I quickly realised there was more work in that than I wanted, so I've reverted back to the stock firmware. In short, the programs need to be able to find the status of the auto switch, and that was a feature of the stock firmware. I looked at phy_restart.sh (from the device-specific notes) but I was unable to work out whether it would tell me what I need to know and, if so, how.
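
In case anyone picks this up later: what I was groping for is a way to read the front switch from userspace. Below is a purely hypothetical sketch using the legacy sysfs GPIO interface; the pin number is a placeholder, not the LS-WXL's real one, which I never identified:

Code:
echo 42 > /sys/class/gpio/export       # expose the (assumed) switch GPIO line
cat /sys/class/gpio/gpio42/value       # prints 0 or 1 for the switch position
echo 42 > /sys/class/gpio/unexport     # tidy up afterwards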

It's been a fun exercise and has kept me amused in these 'lockdown' times!
« Last Edit: January 29, 2021, 10:59:44 am by LesG »

1000001101000

  • Debian Wizard
  • Big Bull
  • *****
  • Posts: 915
  • There's no problem so bad you cannot make it worse