Does anyone know what the maximum drive size is for the TeraStation 3400 series?
According to this post, there is supposedly no 16TB size limit for TeraStations:
http://forums.buffalotech.com/index.php?topic=24730.0
And I'm not able to find any other info on this.
Recently, I was not able to get four 8TB disks working for 32TB total space on the TS3400 series.
With the 3400, the array build fails: the LED display goes red and shows "RAID build failed."
The array can be formatted from the console, and the LED display will go back to normal functions, but the disks cannot be checked afterward.
Also: the array can be deleted, but a new one cannot be created.
The TS3400 does work just fine with 4TB disks.
The 8TB drives work just fine in a 5400 series unit, but that does not answer the question for the 3400 series.
I contacted support and was given the response that the 3400 series was shipped with 16TB drives.
Thanks.
The TS5400 is a 64-bit Intel-based device.
The TS3400 is a 32-bit ARMv7 device.
The 16TB volume limit would apply to the 3400, though 2x 16TB volumes should work.
Actually, thinking about it... this device has a CPU which supports LPAE, which I would have thought would avoid that limit. I don't know if Buffalo's kernel uses LPAE, but my Debian installer will install the LPAE version of the Debian kernel. You could try that and see if that solves the problem.
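If you want to check both sides of that from a shell on the device, something like this should work (on ARM, the Features line in /proc/cpuinfo lists lpae when the CPU supports it, and Debian's LPAE kernels carry an -lpae suffix):

grep -i lpae /proc/cpuinfo    # CPU advertises LPAE if "lpae" shows up in the Features line
uname -r                      # a Debian LPAE kernel looks like 4.19.0-x-armmp-lpae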
Quote from: 1000001101000 on January 29, 2020, 07:07:43 PM
The TS5400 is a 64-bit Intel-based device.
The TS3400 is a 32-bit ARMv7 device.
The 16TB volume limit would apply to the 3400, though 2x 16TB volumes should work.
That makes sense. I will try that later when I have some more drives.
Quote from: 1000001101000 on January 29, 2020, 09:02:46 PM
Actually, thinking about it... this device has a CPU which supports LPAE, which I would have thought would avoid that limit. I don't know if Buffalo's kernel uses LPAE, but my Debian installer will install the LPAE version of the Debian kernel. You could try that and see if that solves the problem.
I'll try out Debian at some point.
I'm still in the process of upgrading some storage, which will free up some stuff for testing. Once that is done, then on to getting some sort of test rig going for Debian.
Maybe we need a general Debian thread.
I'm starting to think going that route might be a good thing with these older units for many reasons, not least keeping up with the latest SMB and security updates.
Is it possible to construct some sort of custom TFTP boot, so that the non-Intel Buffalo devices could boot via TFTP and then run the Debian installer from a USB flash drive, as the Intel units do? That might make things easier. Honestly, I'm lost on the whole wget thing in trying to get one of these TS-XEs going. I have to take some more time on that...
I have some XEs that have no caddies. :( I found some long-head screws with which I can finally physically mount some drives and start getting a TS-XE test box going... the drives actually slid right in, I was surprised.
Anyone have a 3D printer? :D
Quote: Is it possible to construct some sort of custom TFTP boot, so that the non-Intel Buffalo devices could boot via TFTP and then run the Debian installer from a USB flash drive, as the Intel units do? That might make things easier. Honestly, I'm lost on the whole wget thing in trying to get one of these TS-XEs going. I have to take some more time on that...
The installer is contained in the custom uImage.buffalo and initrd.buffalo files. You could boot from them via TFTP if you prefer, no need for any USB media. But since the TFTP IP address is not (easily) changeable, changing your network to 192.168.11.* is a hassle, and initiating TFTP boot varies from device to device, I've never bothered with it.
Quote: Maybe we need a general Debian thread.
I'm starting to think going that route might be a good thing with these older units for many reasons, not least keeping up with the latest SMB and security updates.
That would be great. Some how-tos for the wiki would also be great.
Quote: I have some XEs that have no caddies. :( I found some long-head screws with which I can finally physically mount some drives and start getting a TS-XE test box going... the drives actually slid right in, I was surprised.
It's probably a bad idea to do that long term, but I did once find that some standard case screws made a perfect spacer for drives in an LS-QVL.
Quote from: 1000001101000 on January 31, 2020, 03:14:44 PM
Quote: Is it possible to construct some sort of custom TFTP boot, so that the non-Intel Buffalo devices could boot via TFTP and then run the Debian installer from a USB flash drive, as the Intel units do? That might make things easier. Honestly, I'm lost on the whole wget thing in trying to get one of these TS-XEs going. I have to take some more time on that...
The installer is contained in the custom uImage.buffalo and initrd.buffalo files. You could boot from them via TFTP if you prefer, no need for any USB media. But since the TFTP IP address is not (easily) changeable, changing your network to 192.168.11.* is a hassle, and initiating TFTP boot varies from device to device, I've never bothered with it.
It sounds like all I would need to do is just copy the boot files into a known working TFTP setup?
If so, I will try that when I have some time.
While I agree the 192.168.11.x IP is a hassle, Buffalo TFTP can be done fairly easily with two computers.
Just put the NAS and the two PCs on the same switch; I just use my regular home router's ports.
Set PC1 to 192.168.11.1 and run the TFTP server on it (I use my HTPC).
Leave PC2 on DHCP and run the firmware updater app on it (I use my main desktop PC).
I've done just about all my TFTP firmware recoveries this way, no problem. Very straightforward.
If anything, extracting the right boot files is more of a problem than getting the network IPs set up correctly.
After the NAS boots on 192.168.11.x, the Buffalo firmware switches to DHCP, and the unit can be found on the regular local subnet.
Now, I don't know if your custom installer will do that switch, but I have noticed just about all Buffalo TeraStation and LinkStation TFTP boots do this.
I have to read up on the wget stuff. I really am getting lost with that. My command-line skills are not very good. I watched the video on your wiki; the guy goes so fast that I was still lost afterward.
--
Quote from: 1000001101000 on January 31, 2020, 03:14:44 PM
Quote: I have some XEs that have no caddies. :( I found some long-head screws with which I can finally physically mount some drives and start getting a TS-XE test box going... the drives actually slid right in, I was surprised.
It's probably a bad idea to do that long term, but I did once find that some standard case screws made a perfect spacer for drives in an LS-QVL.
True, I figured as much; probably too much vibration, which is not good for the drives. And of course the drives don't lock in either.
Just wanted to try a quick test. They're pretty close; at least the SATA connector on the backplane isn't bending...
What kind of standard case screws did you use? The long-head screws I tried are the hex-head ones with the thicker thread used for hard drives, PCI/VGA slot covers, and some motherboards.
--
Other question:
Is it possible to flash, for example, the LS220DE boot firmware onto one of the other LS220 units? (The DE boots to EM mode automatically; the other D-series units do not.) It seems like there should be a way to convert the firmware to the DE version somehow...
Quote: It sounds like all I would need to do is just copy the boot files into a known working TFTP setup?
If so, I will try that when I have some time.
Exactly.
All boot methods are basically just a way to load the kernel (uImage.buffalo) and initrd (initrd.buffalo) into memory and then execute them.
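If you do want to go the TFTP route, here's a minimal sketch of serving those two files from a Linux box using dnsmasq (the 192.168.11.1 address is the one the Buffalo boot code looks for; the interface name and tftp-root path are just examples):

sudo apt-get install dnsmasq
sudo ip addr add 192.168.11.1/24 dev eth0        # the address the NAS expects the server on
sudo mkdir -p /srv/tftp
sudo cp uImage.buffalo initrd.buffalo /srv/tftp/
# then in /etc/dnsmasq.conf:
#   enable-tftp
#   tftp-root=/srv/tftp
sudo systemctl restart dnsmasq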
Quote: After the NAS boots on 192.168.11.x, the Buffalo firmware switches to DHCP, and the unit can be found on the regular local subnet.
Now, I don't know if your custom installer will do that switch, but I have noticed just about all Buffalo TeraStation and LinkStation TFTP boots do this.
The Debian installer will also set an address via DHCP.
Quote: I have to read up on the wget stuff. I really am getting lost with that. My command-line skills are not very good. I watched the video on your wiki; the guy goes so fast that I was still lost afterward.
It should be pretty straightforward if you have Python installed, though if you're on Windows you may need to install it first. Otherwise, post your console output and we'll take a look.
I think the screws I used were hex-head case screws with the correct thread (just what I had around). Probably not a great idea for the reasons you mentioned, though it worked surprisingly well.
My installer example assumes you've got a device with working firmware; whether it has NAND or not doesn't make a difference at that point.
Quote from: 1000001101000 on January 31, 2020, 07:27:32 PM
Quote: It sounds like all I would need to do is just copy the boot files into a known working TFTP setup?
If so, I will try that when I have some time.
Exactly.
All boot methods are basically just a way to load the kernel (uImage.buffalo) and initrd (initrd.buffalo) into memory and then execute them.
Ok, maybe I can get that going later when I have some time.
Quote: Quote: I have to read up on the wget stuff. I really am getting lost with that. My command-line skills are not very good. I watched the video on your wiki; the guy goes so fast that I was still lost afterward.
It should be pretty straightforward if you have Python installed, though if you're on Windows you may need to install it first. Otherwise, post your console output and we'll take a look.
I think the screws I used were hex-head case screws with the correct thread (just what I had around). Probably not a great idea for the reasons you mentioned, though it worked surprisingly well.
My installer example assumes you've got a device with working firmware; whether it has NAND or not doesn't make a difference at that point.
I have no idea how to install Python on Windows. I downloaded an FTP app for Windows, hoping there's a way to just navigate to wherever the boot files need to be copied so the NAS can boot.
Another thing is that I'm still having problems with Java, where it will not run unless the command prompt is in the Java directory itself.
I have the Java directory in my PATH statement. Even after a reboot it won't run unless I manually go to the directory Java is in. Very strange.
Python.org provides a Windows installer; it should be simple to install.
I'm surprised the Java installer wouldn't have handled the PATH setup for you. At least you've got a workaround for that part.
Quote from: 1000001101000 on February 06, 2020, 08:42:26 AM
Python.org provides a Windows installer; it should be simple to install.
I'm surprised the Java installer wouldn't have handled the PATH setup for you. At least you've got a workaround for that part.
Yes, for whatever reason the Java installer doesn't handle it; although Java is in the PATH, it is really weird.
I am getting ready to try working on a 3000 series unit again soon...
I'm going to do it on one that has the latest stock firmware cleanly installed, and attempt to test LPAE (using 4x 6TB disks for 24TB), if I can get Debian loaded and running...
I'm really stuck on this loading-installer-files part.
'Copy the installer files from your PC using sftp or wget. You could also copy them to a network share on the device, then copy them to /boot from /mnt/disk.... or /mnt/array....'
I haven't installed Python yet. Is that going to be needed for these commands?
Are the commands from this page meant to be entered in Windows?
I'm confused as to where they are getting the $ prompt from...
https://github.com/1000001101000/Debian_on_Buffalo/wiki/Example:-Loading-installer-files-using-stock-firmware
Thanks.
The commands in the example were run from my PC running Debian. They only use Java and Python, which should allow them to work from any OS that has them installed.
If you're having trouble doing this from Windows, you could try from one of the devices you've already installed Debian on. Python is part of the default installation, and Java can be installed as simply as:
Quote: sudo apt-get install default-jdk
Just an update on this for those interested, before the Debian install.
After I loaded the stock firmware 1.92 onto the 24TB array (4x 6TB disks) in the TS3400:
The system boots.
The RAID 0 default format for 24TB failed, just like with the 32TB array. So 16TB is definitely the limit with Buffalo's stock OS.
I made one 12TB RAID 0 array, and creating a second 12TB RAID 0 array worked OK as well. Checking both arrays worked fine (checking and formatting both fail on arrays larger than 16TB). I want to say I could not make two separate 16TB arrays when I tried the 8TB drives a while back, but I don't recall. The 12s certainly work.
I mapped both RAID 0 arrays and wrote some data to both, but I did not fill them entirely. Both wrote at reasonable speeds for this model, although I have seen this unit go faster too. For all intents and purposes, these 12TB arrays are probably fine. I'd go with this setup, but the idea is to use fewer Windows drive letters and the max size, of course...
I did try making a RAID 10 array. That one worked up to the point where it wanted to sync all 24TB. I did not let it finish the sync; it wanted over 3,000 minutes to do it!
Anyway, on to installing Debian 10.3.
Quote from: 1000001101000 on February 28, 2020, 10:05:56 AM
The commands in the example were run from my PC running Debian. They only use Java and Python, which should allow them to work from any OS that has them installed.
If you're having trouble doing this from Windows, you could try from one of the devices you've already installed Debian on. Python is part of the default installation, and Java can be installed as simply as:
Quote: sudo apt-get install default-jdk
Ok, got it. I'll be using the TS-WVHL to run the commands from the guide here:
https://github.com/1000001101000/Debian_on_Buffalo/wiki/Example:-Loading-installer-files-using-stock-firmware
I am assuming the installer will throw some sort of error(s) if LPAE doesn't do what is needed to make one 24TB RAID 0 array. We'll see what it does.
It's been a while since I tried to make a volume > 16TB on any system. One thing I remember was a lack of errors when creating/mounting volumes; the errors would mainly manifest when trying to write to blocks past 16TB.
For ext4 there is an option that must be set to allow > 16TB volumes to be created. I don't recall what it is, but I believe it became the default in the past few years.
I don't think I had to do anything special to create > 16TB XFS volumes.
Ok, thanks. That helps. Hopefully I can get that far. :)
In your example, I take it that 192.168.1.131 is the Buffalo device to install Debian on,
and 192.168.1.146:8000... that would be your Linux PC (or in my case, the TS-WVHL).
--
I already have a question before I get started, about the root login info...
EDIT: I think I just answered it:
Use acp_commander to enable a root shell over telnet
java -jar acp_commander.jar -t <device ip address> -pw <your admin password> -o
Connect to the device via telnet, the username is "root" and the password is blank (just press enter).
--
You are correct on all counts.
Ok, I get that part. Right now I am stuck on getting the boot files into Debian on the TS-WVHL unit.
The only way I know to do that is to copy them via USB from Windows to Linux (Samba isn't on the TS-WVHL yet, as far as I can tell).
It doesn't like NTFS; going to try FAT32 next.
Also, is there something I can set so that I don't have to keep entering the sudo password in Debian? Or is that just how Debian works?
Thanks.
See below:
kane88@debian:~$ mkdir /mnt/usbdisk1
mkdir: cannot create directory '/mnt/usbdisk1': Permission denied
kane88@debian:~$ sudo mkdir /mnt/usbdisk1
[sudo] password for kane88:
kane88@debian:~$ mount /dev/sdc1 /mnt/usbdisk1
mount: only root can do that
kane88@debian:~$ sudo mount /dev/sdc1 /mnt/usbdisk1
mount: /mnt/usbdisk1: unknown filesystem type 'ntfs'.
kane88@debian:~$
Also, I can't get the USB drive unmounted or ejected either:
kane88@debian:~$ sudo umonunt /dev/sdc
sudo: umonunt: command not found
kane88@debian:~$ sudo eject /dev/sdc
sudo: eject: command not found
Personally, I would copy them from GitHub using git or wget. You could also sftp them from wherever you have them.
I believe you need an additional package to mount NTFS.
Looks like you misspelled "umount".
You can switch to a root shell via:
sudo su -
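For the NTFS part specifically, a minimal sketch, assuming Debian where the ntfs-3g package provides the driver:

sudo apt-get install ntfs-3g
sudo mount -t ntfs-3g /dev/sdc1 /mnt/usbdisk1    # plain "mount" also works once ntfs-3g is installed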
Quote from: 1000001101000 on February 29, 2020, 11:13:40 AM
Personally, I would copy them from GitHub using git or wget. You could also sftp them from wherever you have them.
I believe you need an additional package to mount NTFS.
Looks like you misspelled "umount".
You can switch to a root shell via:
sudo su -
You're right, I did misspell umount. It worked the next time. Is that all that is needed to properly unmount and eject drives in Debian?
I have no idea how to copy the files over using the methods you mentioned, so I reformatted the flash drive as FAT32 and managed to get somewhere that way. The files are in the Debian session now.
acp_commander sees the TS3400
root@debian:/mnt/usbdisk1/buffalo/acp-commander-master# java -jar acp_commander.jar -f
TS3400DAEA 192.168.x.x TS3400D(YOUZEI) ID=003230 mac: xx FW= 1.920
Found 1 device(s).
I can run this
root@debian:/mnt/usbdisk1/buffalo/acp-commander-master# java -jar acp_commander.jar -t 192.168.x.x -pw **** -o
admin password:
Reset root pwd...
start telnetd...
You can now telnet to your box as user 'root' providing no / an empty password. Please change your root password to something secure.
But I cannot telnet into the machine
root@debian:/mnt/usbdisk1/buffalo/acp-commander-master# telnet 192.168.x.x
Trying 192.168.x.x...
telnet: Unable to connect to remote host: Connection refused
root@debian:/mnt/usbdisk1/buffalo/acp-commander-master#
Thoughts? Do I need to use an older firmware? I can go back as far as firmware version 1.84 if need be. I don't have anything older than that.
Update: the admin password under the 1.92 firmware got corrupted running this, and I couldn't access the console again:
java -jar acp_commander.jar -t <device ip address> -pw <your admin password> -o
Something was definitely changed in the newer firmware.
So I reverted back to v1.84, and I can telnet into it now.
Anyway, on to the rest...
If you ended up in EM mode, the password becomes "password"; I believe that's the default for fresh installs too. That would be my best guess for why it would stop working.
It didn't end up in EM mode, I just couldn't log back in to the admin console. I'll try "password" next time if that happens.
Well, I gave it a shot, and I'm getting the same problem. The RAID 0 partition won't format as 24TB. I guess LPAE is not enabled within the Debian installer. :(
I'm getting this error:
Failed to create a file system
The ext4 file system creation in partition #1 of RAID0 device #10 failed.
There's no option in the installer to switch it to XFS.
The choices are ext2, ext3, ext4, btrfs, JFS, FAT16, FAT32, swap, physical volume for encryption, and physical volume for LVM.
Under ext4, there is a "typical usage" setting with choices for standard, news, largefile, and largefile4.
Leave the big array blank during the install and try within Debian afterwards.
Specifically, this is because the installer runs the regular armmp kernel; the LPAE kernel is installed at the end of the process and is only available when you boot into the installed system.
Reading a little more about this, it seems like the needed options (extents and 64bit) should be defaults now for the version of mkfs used by buster. I don't think you should have to do anything special when creating the filesystem; if it gives an error similar to this, you may have to pass those options manually:
Quote: mke2fs: Size of device (0x123456789 blocks) is too big to be expressed in 32 bits using a blocksize of 4096
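Passing them manually would look something like this (a sketch; substitute your own array device):

mkfs.ext4 -O 64bit,extent /dev/md10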
Ok, I reinstalled and left the big array blank this time.
I noticed some sort of LPAE package loading during the install. Hopefully that has what it needs.
As for the large /mnt/array partition, can you help me get it formatted?
The installer had it listed as XFS and unformatted. Shouldn't it be ext4 with journaling? How do I make that change?
Also:
I was going over your post-installation steps page:
https://github.com/1000001101000/Debian_on_Buffalo/wiki/Post-Installation-Options
Step 3 in "Setup your existing raid arrays for use on the new system" is missing something, and I have no clue how to edit the mdadm.conf file:
3. Edit /etc/mdadm/mdadm.conf, make sure there is a line for each array listed by the command from the previous step.
4. If you had to make changes, generate a new initramfs and reboot.
update-initramfs -u
reboot
-----
One other thing I noticed was that the hostname can't be changed in the installer.
I just changed it using Webmin. Is that the only place it needs to be modified, or are there other places I need to change it?
Thanks.
You can see which kernel you are running with:
uname -a
You can see a list of mdadm arrays via:
mdadm —detail —scan
Assuming they're running, you can see how they relate to the disks:
lsblk
If an array isn't listed in /etc/mdadm/mdadm.conf it will get a name like md127. If you add the line from the scan above, you can set the desired name.
To format, run something similar to:
mkfs.ext4 /dev/md127
Then mount:
mkdir /mnt/array
mount /dev/md127 /mnt/array
I'd recommend waiting to update fstab until after you're done experimenting. Once you add something to fstab it will mount at boot, but the boot process will hang forever if the mount fails.
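When you do get to that point, a single fstab line along these lines should do it (a sketch assuming ext4 on /dev/md127; the nofail option keeps a failed mount from hanging the boot):

/dev/md127  /mnt/array  ext4  defaults,nofail  0  2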
As for the hostname... the installer needs the hostname set before it starts the network; as such, it can't be set by the SSH install process. You can change it by editing /etc/hostname or with the hostname command (I assume Webmin does that for you).
You can actually test the 16TB limit before formatting. All you need to do is try to write something after the 16TB mark.
I think this should be a good test (only do this before formatting):
dd if=/dev/zero of=/dev/md127 bs=1G skip=17000 count=1
Thanks. I checked the hostname after I did this, and it has updated correctly.
I'm having some problems with mdadm though; here are the results of the commands you mentioned:
root@debian:/# uname -a
Linux debian 4.19.0-8-armmp-lpae #1 SMP Debian 4.19.98-1 (2020-01-26) armv7l GNU/Linux
root@debian:/#
root@debian:~# mdadm —detail —scan
mdadm: An option must be given to set the mode before a second device
(—scan) is listed
root@debian:~#
lsblk shows this for all 4 drives:
└─sda6 8:6 0 5.4T 0 part
└─md10 9:10 0 21.7T 0 raid0
Your test seems to have been a success!
root@debian:/# dd if=/dev/zero of=/dev/md10 bs=1G skip=17000 count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 33.9754 s, 31.6 MB/s
root@debian:/#
root@debian:/#
Here is the current mdadm.conf.
I have not made any changes to it.
And I haven't formatted the array yet, or tried to mount it either.
root@debian:/# vi /etc/mdadm/mdadm.conf
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 UUID=78102440:dba0a338:e97df895:e01ccbc7
ARRAY /dev/md/1 metadata=1.2 UUID=df18a690:6dbd0eb2:895e89c9:373dd0f6 name=TS3400D-EMAEA:1
ARRAY /dev/md/2 metadata=1.2 UUID=31c82c2d:1ca7bbf8:be5d548e:e2714b1e name=TS3400D-EMAEA:2
ARRAY /dev/md/10 metadata=1.2 UUID=c6dae5c0:39b83377:dcba7d07:2967aadb name=TS3400D-EMAEA:10
# This configuration was auto-generated on Sun, 01 Mar 2020 00:01:13 -0600 by mkconf
Looks like the installer handled your mdadm.conf just fine.
The reason the command didn't work was autocorrect changing "dash dash" to some special character.
It sure looks like the write worked.
If my math is correct, that means you successfully wrote 1GB of zeros somewhere around 16.2TB into the array. Try increasing "skip" to around 20TB, and then again to around 30TB, to confirm that it fails as expected.
Quote from: 1000001101000 on March 01, 2020, 01:05:55 PM
Looks like the installer handled your mdadm.conf just fine.
The reason the command didn't work was autocorrect changing "dash dash" to some special character.
It sure looks like the write worked.
If my math is correct, that means you successfully wrote 1GB of zeros somewhere around 16.2TB into the array. Try increasing "skip" to around 20TB, and then again to around 30TB, to confirm that it fails as expected.
Two things:
I typed out the mdadm command, and I'm getting a different error now:
root@debian:/# mdadm -detail -scan
mdadm: -d does not set the mode, and so cannot be the first option.
root@debian:/#
And I don't think we have the right test point for the dd. How do we calculate the starting place for above 16TB?
I tried 17000 on the TS3400 and on the TS-WVHL too, and both are able to write it.
Then I went to skip=40000 on the TS3400 and it ran fine. Now I am just adding 0's, and it keeps going with no errors.
I am up to skip=4000000000 now:
root@debian:/home/kane88# dd if=/dev/zero of=/dev/md10 bs=1G skip=4000000000 count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 55.9858 s, 19.2 MB/s
root@debian:/home/kane88# dd if=/dev/zero of=/dev/md10 bs=1G skip=40000000000 count=1
dd: /dev/zero: cannot skip: Value too large for defined data type
0+0 records in
0+0 records out
Two dashes for each.
Looks like it should have been "seek", not "skip" (skip skips over the input file, /dev/zero here; seek seeks into the output file, which is what we want).
Seek worked this time.
Thanks.
Are you able to confirm you can write to 20TB but not 30TB? If so, that is a pretty neat outcome.
I stopped after this:
root@debian:~# dd if=/dev/zero of=/dev/md10 bs=1G seek=22000 count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 45.6849 s, 23.5 MB/s
root@debian:~# dd if=/dev/zero of=/dev/md10 bs=1G seek=23000 count=1
dd: /dev/md10: cannot seek: Invalid argument
0+0 records in
0+0 records out
0 bytes copied, 0.00804384 s, 0.0 kB/s
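If my math is right, those two results line up with the size of the array rather than with the 16TB limit:

array size (from lsblk): 21.7 TiB
seek=22000 -> 22000 GiB ~ 21.5 TiB, inside the array and well past the 16 TiB (2^32 x 4 KiB) mark, so the write succeeded
seek=23000 -> 23000 GiB ~ 22.5 TiB, past the end of the array, hence "cannot seek"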
I formatted the partition. It seems to be working, which is pretty cool. The stock firmware writes a good 2x to 3x faster, though. These drives run at worst 100MB/sec on a Windows machine; they're not slow disks... I guess Buffalo uses something to optimize their OS. Is there some kind of write caching or something else that needs to be added to the install?
Is this RAID 0 or linear?
Either way, that dd command was horribly inefficient. I used 1G as the block size to make the math easy, but in practice it would destroy your performance.
A safe/quick test to show more real performance would be:
dd if=/dev/md10 of=/dev/null bs=4k count=10240
I used the default partitioning config the Buffalo firmware tried to make, which is RAID 0.
I left the large partition alone during the Debian install.
The installer was showing it as RAID 0, and it wasn't formatted.
How do I check whether it's RAID 0 or linear?
Given the root account recovery problems I was having with the TS-WVHL, I'm in the process of blowing away the Debian install on the TS3400 and starting over with root enabled.
I will try your new test when it's done.
Note that the recovery option we're talking about doesn't help without a physical terminal to connect to.
If you didn't go out of your way to make it linear, nothing would have done it for you. I'm a bit surprised it defaulted to RAID 0, but that would make sense.
"mdadm --detail /dev/md10" tells you everything about the array, including the RAID level.
The TS3400 firmware installer defaults to RAID 0 for the user data array.
Well, in that case there's no need to enable the root account, since the TS3400 and the TS-XEL don't have video ports.
I'm reinstalling one more time, with root disabled. There wasn't even a way to reboot the TeraStation; I was running into some big difficulties...
root@debian:/# echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
kane88@debian:~$ shutdown -r now
-bash: shutdown: command not found
kane88@debian:~$ cd /
kane88@debian:/$ shutdown -r now
-bash: shutdown: command not found
kane88@debian:/$ sudo su -
[sudo] password for kane88:
kane88 is not in the sudoers file. This incident will be reported.
kane88@debian:/$ su
Password:
root@debian:/# shutdown -r now
bash: shutdown: command not found
root@debian:/#
I couldn't do this
kane88@debian:~$ sudo su -
[sudo] password for kane88:
kane88 is not in the sudoers file. This incident will be reported.
kane88@debian:~$
I could do this
kane88@debian:~$ su
Password:
root@debian:/home/kane88#
I couldn't do this
root@debian:/home/kane88# dpkg --install webmin_1.941.all.deb
dpkg: warning: 'ldconfig' not found in PATH or not executable
dpkg: warning: 'start-stop-daemon' not found in PATH or not executable
dpkg: error: 2 expected programs not found in PATH or not executable
Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin
root@debian:/home/kane88#
None of these problems were happening with root disabled.
If you use "su -" instead of "su" most of those problems will go away.
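As I understand it, the difference is that plain su keeps your old environment while "su -" starts a login shell with root's PATH:

su     # keeps the calling user's PATH, so /sbin and /usr/sbin (shutdown, ldconfig, ...) are missing
su -   # login shell: loads root's profile, and those commands are found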
Ok, so the TS3400 is reloaded with root disabled:
Ran the tests:
root@debian:~# dd if=/dev/zero of=/dev/md10 bs=1G seek=22000 count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 58.6727 s, 18.3 MB/s
root@debian:~# dd if=/dev/md10 of=/dev/null bs=4k count=10240
10240+0 records in
10240+0 records out
41943040 bytes (42 MB, 40 MiB) copied, 0.298845 s, 140 MB/s
root@debian:~#
And now I'm having some problems formatting the array :(
root@debian:~# mkfs.ext4 /dev/md10
mke2fs 1.44.5 (15-Dec-2018)
/dev/md10 contains a xfs file system
Proceed anyway? (y,N) y
Creating filesystem with 5826563584 4k blocks and 364161024 inodes
Filesystem UUID: dfb8c4e4-3aba-4bfb-93e5-82c9a22e371d
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
2560000000, 3855122432, 5804752896
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): mkfs.ext4: Attempt to read block from filesystem resulted in short read
while trying to create journal
root@debian:~# mkdir /mnt/array
root@debian:~# mount /dev/md10 /mnt/array
mount: /mnt/array: wrong fs type, bad option, bad superblock on /dev/md10, missing codepage or helper program, or other error.
root@debian:~# uname -a
Linux debian 4.19.0-8-armmp-lpae #1 SMP Debian 4.19.98-1 (2020-01-26) armv7l GNU/Linux
root@debian:~# mdadm --detail --scan
ARRAY /dev/md0 metadata=0.90 UUID=520ff40a:771a4a9a:e97df895:e01ccbc7
ARRAY /dev/md/1 metadata=1.2 name=TS3400D-EMAEA:1 UUID=bc2a312c:9e69a15e:e2c23f68:356e26a7
ARRAY /dev/md/2 metadata=1.2 name=TS3400D-EMAEA:2 UUID=8fed1747:dedae5ac:02cdf5ff:0606eba5
ARRAY /dev/md/10 metadata=1.2 name=TS3400D-EMAEA:10 UUID=998beb61:15e2396c:306eea0a:54ff59c9
root@debian:~#
I'm not super familiar with XFS options; there must be an option required to create it as > 16TB (> 32-bit). We've already established ext4 will work with the current defaults. Otherwise, you can research XFS options.
Last time it formatted just like it did yesterday; the mkfs.ext4 /dev/md10 command has worked both times.
Though I am not sure it has ever mounted correctly to begin with. I have never been able to write any file data to it.
I found this article, but the commands don't work when I enter them with /dev/mapper/md10 and so on.
https://askubuntu.com/questions/779754/how-do-i-resize-an-ext4-partition-beyond-the-16tb-limit
It mentions enabling 64-bit support. The TS3400 doesn't have a 64-bit CPU, so I don't see how that will work, unless it is just for the filesystem.
I'm out of ideas. It seems like OS support for this is currently spotty at best. Maybe it is best to drop back to using 2x 12TB RAID 0 arrays with the stock OS for now, and revisit this later when OS support is better.
If mkfs.ext4 didn't throw an error, the filesystem was presumably created without issue. There would be no need to resize it after the fact.
What actually happened when you tried to mount it?
This might help you with setting up your fstab:
https://help.ubuntu.com/community/Fstab
Quote from: 1000001101000 on March 04, 2020, 09:05:33 AM
If mkfs.ext4 didn't throw an error, the filesystem was presumably created without issue. There would be no need to resize it after the fact.
root@debian:~# mkfs.ext4 /dev/md10
mke2fs 1.44.5 (15-Dec-2018)
/dev/md10 contains a xfs file system
Proceed anyway? (y,N) y
Creating filesystem with 5826563584 4k blocks and 364161024 inodes
Filesystem UUID: dfb8c4e4-3aba-4bfb-93e5-82c9a22e371d
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
2560000000, 3855122432, 5804752896
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): mkfs.ext4: Attempt to read block from filesystem resulted in short read
while trying to create journal
Quote: What actually happened when you tried to mount it?
root@debian:~# mkdir /mnt/array
root@debian:~# mount /dev/md10 /mnt/array
mount: /mnt/array: wrong fs type, bad option, bad superblock on /dev/md10, missing codepage or helper program, or other error
Quote: Creating journal (262144 blocks): mkfs.ext4: Attempt to read block from filesystem resulted in short read
while trying to create journal
Looks like the format failed. I must have been wrong about those ext4 defaults (even though I did check on one of my systems).
Try:
mkfs.ext4 -O 64bit /dev/md10
TeraStation 3400
Marvell Armada XP MV78230, Dual-Core
Supports 32-bit instruction set
from
https://www.marvell.com/content/dam/marvell/en/public-collateral/embedded-processors/marvell-embedded-processors-armada-xp-mv78230-hardware-specifications-2014-07.pdf
I had the same problem with my LS-QVL.
You cannot format volumes bigger than 16TB with a 32-bit CPU.
I am able to assemble a 21TB RAID with mdadm, but I am not able to format it with XFS or ext.
Armada XP supports LPAE (Large Physical Address Extension); we're trying to confirm whether LPAE allows that limit to be exceeded. We've got it running an LPAE-enabled kernel and appear to have successfully written to areas > 16TB using dd. Assuming we did that right, it shows the device was able to access areas past 16TB for read/write.
kane88 is working on formatting it with the options necessary to support > 32-bit addresses.
If this works it would only apply to the TS3000 series running Debian, but it is still interesting.
I will give it another try over the weekend. I have to reinstall the OS again.
I wish Debian would update their installer to enable LPAE, so that these drives could be formatted correctly during the install process.
It seems to me the installer should detect what kind of processor is being used and enable ALL the necessary support for it during setup, so that this whole formatting issue could be avoided.
This reminds me of the 137GB issue with Windows XP/2000 back in the day, and boy, would people lose data if 48-bit LBA wasn't enabled. That said, even if this does work, how trustworthy it will be remains to be seen...
If this works I could make a TS3400 installer image which boots the LPAE kernel and tweaks the ext4 defaults to allow the format to work. That said, I imagine we'll just add some notes to the wiki on how to do it after the install, since maintaining a separate installer for one configuration of one device is a lot of work. I imagine Debian made the same decision when they used the regular armhf installer for both LPAE and non-LPAE devices.
I just realized I have a TS3400D and enough unused drives to make an 18TB RAID 0 array. I'll give it a try in the coming days and see how it goes.
Ok, I reloaded the OS again. Still having trouble, even with the new command.
I hope you fare better with your 18TB array. Hopefully there is a way to format arrays that are larger than 16TB.
root@debian:/home/kane88# mkfs.ext4 -O 64bit /dev/md10
mke2fs 1.44.5 (15-Dec-2018)
/dev/md10 contains a xfs file system
Proceed anyway? (y,N) y
Creating filesystem with 5826563584 4k blocks and 364161024 inodes
Filesystem UUID: 820821cf-2489-413a-8b34-980d45e01f0c
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
2560000000, 3855122432, 5804752896
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): mkfs.ext4: Attempt to read block from filesystem resulted in short read
while trying to create journal
root@debian:/home/kane88#
I gave it a try; I wasn't able to write past the 16TB mark using dd. I also got the same error as you did when trying to format the array. I'm thinking the previous dd test was invalid for some reason. It seems the 16TB limit applies under the LPAE kernel after all.
"40-bit Large Physical Address Extensions (LPAE) addressing up to 1 TB of RAM"
from
https://en.wikipedia.org/wiki/ARM_Cortex-A15
You are trying to work with 20TB.
Right, a 32-bit address space limits you to 4GB (2^32 bytes) of RAM or 16TB (2^32 4K blocks) on a block device. LPAE gives you a 40-bit address space, allowing 1TB (2^40 bytes) of RAM. I wondered if this meant we could address 4096TB (2^40 4K blocks) block devices.
That doesn't seem to be how it works. I don't know if LPAE only applies to memory addresses or if it's some other limitation.
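Putting the numbers side by side (same arithmetic as above):

2^32 bytes          = 4 GiB   (32-bit RAM limit)
2^32 x 4 KiB blocks = 16 TiB  (32-bit block-device limit)
2^40 bytes          = 1 TiB   (LPAE RAM limit)
2^40 x 4 KiB blocks = 4 PiB   (what 40-bit block addressing would have allowed)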