
Maximum drive size for Terastation 3400

Started by Kane88, January 29, 2020, 06:37:18 PM


Kane88

Ok, I get that part.  Right now I am stuck at getting the boot files into debian on the WS-WVHL unit.

The only way I know to do that is to copy them via USB from Windows to Linux (Samba doesn't appear to be running on the WS-WVHL yet, as far as I can tell).
It doesn't like NTFS; I'm going to try FAT32 next.

Also, is there something I can set to where I don't have to keep entering the sudo password in debian?  Or is that just how debian works?
Thanks.

See below:

kane88@debian:~$ mkdir /mnt/usbdisk1
mkdir: cannot create directory '/mnt/usbdisk1': Permission denied
kane88@debian:~$ sudo mkdir /mnt/usbdisk1
[sudo] password for kane88:
kane88@debian:~$ mount /dev/sdc1 /mnt/usbdisk1
mount: only root can do that
kane88@debian:~$ sudo mount /dev/sdc1 /mnt/usbdisk1
mount: /mnt/usbdisk1: unknown filesystem type 'ntfs'.
kane88@debian:~$


Also, I can't get the USB drive unmounted or ejected either

kane88@debian:~$ sudo umonunt /dev/sdc
sudo: umonunt: command not found

kane88@debian:~$ sudo eject /dev/sdc
sudo: eject: command not found

1000001101000

Personally, I would copy them from github using git or wget. You could also sftp them from wherever you have them.

I believe you need an additional package to mount NTFS.

Looks like you misspelled "umount".

You can switch to a root shell via:
sudo su -
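A minimal sketch of the NTFS route, assuming standard Debian packaging (the package and device names here are illustrative):

```shell
# ntfs-3g provides NTFS read/write support; FAT32 needs no extra package,
# which is why reformatting the stick also works:
#
#   sudo apt install ntfs-3g
#   sudo mount -t ntfs-3g /dev/sdc1 /mnt/usbdisk1
```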

Kane88

#18
Quote from: 1000001101000 on February 29, 2020, 11:13:40 AM
Personally, I would copy them from github using git or wget. You could also sftp them from wherever you have them.

I believe you need an additional package to mount NTFS.

Looks like you misspelled "umount".

You can switch to a root shell via:
sudo su -

You're right.  I did misspell umount.  It worked the next time.  Is that all that's needed to properly unmount and eject drives in Debian?

I have no idea how to copy the files over using the methods you mentioned, so I reformatted the flash drive as FAT32 and managed to get somewhere that way.  The files are on the Debian system now.

acp_commander sees the TS3400

root@debian:/mnt/usbdisk1/buffalo/acp-commander-master# java -jar acp_commander.jar -f
TS3400DAEA      192.168.x.x   TS3400D(YOUZEI)         ID=003230       mac: xx  FW=  1.920
Found 1 device(s).

I can run this
root@debian:/mnt/usbdisk1/buffalo/acp-commander-master# java -jar acp_commander.jar -t 192.168.x.x -pw **** -o
admin password:
Reset root pwd...
start telnetd...

You can now telnet to your box as user 'root' providing no / an empty password. Please change your root password to something secure.


But I cannot telnet into the machine

root@debian:/mnt/usbdisk1/buffalo/acp-commander-master# telnet 192.168.x.x
Trying 192.168.x.x...
telnet: Unable to connect to remote host: Connection refused
root@debian:/mnt/usbdisk1/buffalo/acp-commander-master#


Thoughts?  Do I need to use an older firmware?  I can go back as far as firmware version 1.84 if need be.  I don't have anything older than that.



Kane88

Update: running this against the 1.92 firmware corrupted the admin password, and I couldn't access the console again.
java -jar acp_commander.jar -t <device ip address> -pw <your admin password> -o

Something was definitely changed in the newer firmware.

So I reverted back to v1.84, I can telnet into it now.

anyway- onto the rest...

1000001101000

If you ended up in EM mode the password becomes "password", I believe that's default for fresh installs too. That would be my best guess for why it would stop working.

Kane88

It didn't end up in EM mode.  I just couldn't log back in to the admin console.  I'll try "password" next time if that happens.

Well, I gave it a shot, and I'm getting the same problem.  The RAID 0 partition won't format at 24TB.  I guess LPAE is not enabled within the Debian installer. :(

Getting this error:
Failed to create a file system
The ext4 file system creation in partition #1 of RAID0 device #10 failed.

There's no option in the installer to switch it to XFS.
The choices are Ext2, Ext3, Ext4, btrfs, JFS, FAT16, FAT32, swap, physical volume for encryption, and physical volume for LVM.

Under Ext4, there is a "typical usage" setting with choices for standard, news, largefile, and largefile4.

1000001101000

Leave the big array blank during the install and try within Debian afterwards

1000001101000

Specifically, this is because the installer runs the regular armmp kernel; the lpae kernel is installed at the end of the process and is only available once you boot into the installed system.

Reading a little more about this it seems like the needed options should be defaults now for the version of mkfs used by buster (extents and 64bit). I don't think you should have to do anything special when creating the filesystem, if it gives an error similar to this you may have to pass those options manually:
Quote: mke2fs: Size of device (0x123456789 blocks) is too big to be expressed in 32 bits using a blocksize of 4096


Kane88

Ok, I reinstalled and left the big array blank this time.
I noticed some sort of LPAE package loading during the install.  Hopefully that has what it needs.

As for the large /mnt/array partition, can you help me get it formatted?
The installer had it listed as xfs and unformatted.  Shouldn't it be Ext4 with journaling?  How do I make that change?

Also:
I was going over your post installation steps page

https://github.com/1000001101000/Debian_on_Buffalo/wiki/Post-Installation-Options

Step 3 in 'Setup your existing raid arrays for use on the new system' is missing something, and I have no clue how to edit the mdadm.conf file.

3. Edit /etc/mdadm/mdadm.conf, make sure there is a line for each array listed by the command from the previous step.
4. If you had to make changes, generate a new initramfs and reboot.

   update-initramfs -u
   reboot


-----

One other thing I noticed was that the hostname can't be changed in the installer.
I just changed it using webmin.  Is that the only place it needs to be modified, or are there other places I need to change it?

Thanks.


1000001101000

you can see which kernel you are running with
uname -a

you can see a list of mdadm arrays via:
mdadm —detail —scan
Assuming they're running, you can see how they relate to the disks:
lsblk

if an array isn't listed in /etc/mdadm/mdadm.conf it will get a name like md127. If you add the line from the scan above you can set the desired name.
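A minimal sketch of that edit, assuming you want every detected array recorded (review the file afterwards before keeping the result):

```shell
# Append ARRAY lines for the detected arrays, then rebuild the initramfs:
#
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#   update-initramfs -u
```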

To format run something similar to:
mkfs.ext4 /dev/md127

Then mount:
mkdir /mnt/array
mount /dev/md127 /mnt/array


I'd recommend waiting to update fstab until after you're done experimenting. Once you add something to fstab it will mount at boot but the boot process will hang forever if the mount fails.
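For when that time comes, the usual guard against that hang is the nofail mount option, which lets the boot continue even if the mount fails. A sketch of such an fstab line (device and mount point assumed from above):

```
/dev/md127  /mnt/array  ext4  defaults,nofail  0  2
```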

As for the hostname... the installer needs the hostname set before it starts the network, so it can't be set by the ssh install process. You can change it by editing /etc/hostname or with the hostname command (I assume webmin does that for you).

1000001101000

You can actually test the 16TB limit before formatting. All you need to do is try to write something after the 16TB mark.

I think this should be a good test (Only do this before formatting)
dd if=/dev/zero of=/dev/md127 bs=1G skip=17000 count=1

Kane88

#27
Thanks.  I checked the hostname after I did this, and it has updated correctly.

I'm having some problems with mdadm though, here are the results of the commands you mentioned:

root@debian:/# uname -a
Linux debian 4.19.0-8-armmp-lpae #1 SMP Debian 4.19.98-1 (2020-01-26) armv7l GNU/Linux
root@debian:/#


root@debian:~# mdadm —detail —scan
mdadm: An option must be given to set the mode before a second device
       (—scan) is listed
root@debian:~#


lsblk has this for all 4 drives

└─sda6      8:6    0  5.4T  0 part
  └─md10    9:10   0 21.7T  0 raid0


your test seems to have been a success!
root@debian:/# dd if=/dev/zero of=/dev/md10 bs=1G skip=17000 count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 33.9754 s, 31.6 MB/s
root@debian:/#
root@debian:/#


here is the current mdadm.conf
I have not made any changes in it
And I haven't formatted the array yet, or tried to mount it either.


root@debian:/# vi /etc/mdadm/mdadm.conf

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 UUID=78102440:dba0a338:e97df895:e01ccbc7
ARRAY /dev/md/1  metadata=1.2 UUID=df18a690:6dbd0eb2:895e89c9:373dd0f6 name=TS3400D-EMAEA:1
ARRAY /dev/md/2  metadata=1.2 UUID=31c82c2d:1ca7bbf8:be5d548e:e2714b1e name=TS3400D-EMAEA:2
ARRAY /dev/md/10  metadata=1.2 UUID=c6dae5c0:39b83377:dcba7d07:2967aadb name=TS3400D-EMAEA:10

# This configuration was auto-generated on Sun, 01 Mar 2020 00:01:13 -0600 by mkconf

1000001101000

Looks like the installer handled your mdadm.conf just fine.

The reason the command didn't work was autocorrect changing "dash dash" to some special character.

It sure looks like the write worked.

If my math is correct, that means you successfully wrote 1GB of zeros somewhere around 16.6TiB into the array. Try increasing "skip" to around 20TB, then again to around 30TB, to confirm that it fails as expected.
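A quick way to see that substitution (a sketch; the second string below contains a literal em dash, not two hyphens):

```shell
plain='--detail'         # two ASCII hyphens, what mdadm expects
fancy='—detail'          # what autocorrect produced (em dash, U+2014)
[ "$plain" != "$fancy" ] && echo "not the same argument"
printf '%s' '—' | wc -c  # the em dash alone is 3 bytes in UTF-8
```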

Kane88

Quote from: 1000001101000 on March 01, 2020, 01:05:55 PM
Looks like the installer handled your mdadm.conf just fine.

The reason the command didn't work was autocorrect changing "dash dash" to some special character.

It sure looks like the write worked.

If my math is correct, that means you successfully wrote 1GB of zeros somewhere around 16.6TiB into the array. Try increasing "skip" to around 20TB, then again to around 30TB, to confirm that it fails as expected.


Two things:

I typed out the mdadm command, and I'm getting a different error now.

root@debian:/# mdadm -detail -scan
mdadm: -d does not set the mode, and so cannot be the first option.
root@debian:/#



And I don't think we have the right test point for the dd.  How do we calculate the starting place for above 16TB?
I tried 17000 on the TS3400 and on the TS-WVHL too, and both are able to write it.

Then I went to skip=40000 on the TS3400 and it ran fine.  Now I am just adding 0's and it keeps going with no errors.

I am up to skip=4000000000 now

root@debian:/home/kane88# dd if=/dev/zero of=/dev/md10 bs=1G skip=4000000000 count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 55.9858 s, 19.2 MB/s


root@debian:/home/kane88# dd if=/dev/zero of=/dev/md10 bs=1G skip=40000000000 count=1
dd: /dev/zero: cannot skip: Value too large for defined data type
0+0 records in
0+0 records out
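That "cannot skip" message hints at why every value "worked": dd's skip= operand skips over the input file (here /dev/zero, which can be skipped almost without limit), so each run actually wrote at the start of the array; seek= is the operand that offsets the output. A hedged sketch of the corrected test, with an arithmetic check that the offset clears the 16TiB limit (device name assumed from the posts above):

```shell
# Destructive - only run against an unformatted array:
#
#   dd if=/dev/zero of=/dev/md10 bs=1G seek=17000 count=1
#
# Sanity-check that a 17000 GiB offset really clears the 32-bit ext4 limit
# of 2^32 blocks x 4096 bytes (16 TiB):
offset=$((17000 * 1024 * 1024 * 1024))
limit=$((4096 * 4294967296))
[ "$offset" -gt "$limit" ] && echo "offset is past the 16TiB mark"
```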
