Hacking the TeraStation Live

Gaining console access (i.e. telnet/ssh)

The TS Live comes with an already "open" firmware. According to nas-central.org, the only thing needed to enable telnet/ssh access is acp_commander.
Detailed instructions are available on nas-central.org.
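
As a rough sketch of what this boils down to (run from a PC with Java on the same LAN; the IP address is just an example, and the exact option names may differ between acp_commander versions, so check its built-in usage output and the nas-central.org page first):

  1. java -jar acp_commander.jar -t 192.168.1.50 -o    # -t = the TeraStation's IP, -o should open telnet access
  2. telnet 192.168.1.50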

Adding a bootstrap loader/installer

Instructions are in this post on nas-central.org.
Basically, after gaining telnet/ssh access, create a directory on the RAID array to store the optware feeds, then fetch and run the bootstrap loader, and you're done!

  1. mkdir /mnt/array1/tmp
  2. cd /mnt/array1/tmp
  3. wget http://ipkg.nslu2-linux.org/feeds/optware/cs05q3armel/cross/stable/teraprov2-bootstrap_1.1-1_arm.xsh
  4. sh teraprov2-bootstrap_1.1-1_arm.xsh

Warning!
Check /etc/init.d/rc.optware and verify that /mnt/array1/.optware is what gets mounted. In my case the rc.optware script was wrong, since it defined /mnt/disk1/.optware as the volume to mount. That path doesn't exist, and as a result I had the RAM filesystem mounted as /opt, which is a bad thing, since it fills up in no time!
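
A quick way to verify this (a sketch based on the paths mentioned above; adapt them to your own layout):

  1. grep optware /etc/init.d/rc.optware    # the path the script mounts should be /mnt/array1/.optware
  2. mount | grep opt                       # /opt should be backed by the RAID array, not by a RAM filesystem
  3. df -h /opt                             # the size should match the RAID volume, not the few MB of a RAM filesystem

If the script points at /mnt/disk1/.optware, edit it to use /mnt/array1/.optware and run /etc/init.d/rc.optware start again.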

Once the bootstrap is installed:

  1. /dev/md2 (the first, or only, RAID volume) is also mounted as /opt (a quick sanity check follows this list).
  2. in /opt several folders are created
  3. ipkg is installed in /opt/bin/ipkg and the config file with the selected feeds is in /opt/etc/ipkg.conf
  4. PATH is modified into PATH=/opt/bin:/opt/sbin:$PATH so one can use the binaries installed via ipkg as if they were normally installed.
  5. /etc/init.d/rcS is modified by appending at the end:
    1. # Optware setup
    2. [ -x /etc/init.d/rc.optware ] && /etc/init.d/rc.optware start
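
A quick sanity check that everything landed where the list above says it should:

  1. echo $PATH                 # should start with /opt/bin:/opt/sbin
  2. ls -l /opt/bin/ipkg        # the ipkg binary installed by the bootstrap
  3. cat /opt/etc/ipkg.conf     # the feeds selected by the installer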

After the installation, reconnect to make sure the new $PATH is working.
Now it is possible to use ipkg in the standard way, both to update the package list and to install packages:

  1. ipkg update
  2. ipkg upgrade
  3. ipkg install <appname>
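
To find the exact package name before installing, the feed can also be browsed with ipkg itself (at least in the Optware builds I have seen; the keyword is just a placeholder, as above):

  1. ipkg list | grep -i <keyword>
  2. ipkg list_installed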

Cannot login via web interface

This link says that this happens because /etc/melco/userinfo is either missing or empty.
That file should, at the very least, contain the following lines:

  1. admin<>Built-in account for administering the system;
  2. guest<>Built-in account for guest access to the system;
Note that I've never actually tested this; it's placed here just for reference.
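
Equally untested, but should the file need to be recreated, it boils down to writing exactly the two lines above:

  1. echo 'admin<>Built-in account for administering the system;' > /etc/melco/userinfo
  2. echo 'guest<>Built-in account for guest access to the system;' >> /etc/melco/userinfo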

Replacing hard disks

In the end, 4x500 GB can be tight on space, especially when storing backups.
Looking for supported 2 TB SATA 3.5" drives, I ended up choosing the WD20EARS Caviar Green (used in the recipe below).

On rebuilding the RAID array

Buffalo Technology forum
Replacing disks under RAID5, on nas-central.org
Exploiting md recovering and growing capabilities, on nas-central.org.

My recipe

Out of the box, the TS Live has been reconfigured to use the four 500 GB drives as two separate RAID1 volumes.
That is, the drive and partition layout looks like this (see after the table for a way to reproduce the listing):

drive #   blocks       device      system
1         297171       /dev/sda1   83 Linux
1         498015       /dev/sda2   83 Linux
1         487588815    /dev/sda4   5 Extended
1         136521       /dev/sda5   82 Linux swap
1         487315678+   /dev/sda6   83 Linux
2         297171       /dev/sdb1   83 Linux
2         498015       /dev/sdb2   83 Linux
2         487588815    /dev/sdb4   5 Extended
2         136521       /dev/sdb5   82 Linux swap
2         487315678+   /dev/sdb6   83 Linux
3         297171       /dev/sdc1   83 Linux
3         498015       /dev/sdc2   83 Linux
3         487588815    /dev/sdc4   5 Extended
3         136521       /dev/sdc5   82 Linux swap
3         487315678+   /dev/sdc6   83 Linux
4         297171       /dev/sdd1   83 Linux
4         498015       /dev/sdd2   83 Linux
4         487588815    /dev/sdd4   5 Extended
4         136521       /dev/sdd5   82 Linux swap
4         487315678+   /dev/sdd6   83 Linux
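
For reference, a listing like the table above can be pulled from a telnet/ssh session with fdisk (a sketch; the exact output format depends on the fdisk build shipped on the box):

  1. for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do fdisk -l $d; done
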
As described in the procedure on nas-central.org, the TS Live was shut down and drive #4 was replaced with a WD20EARS 2.0 TB Caviar Green, then restarted and the RAID1 array re-synced ("Rearing", as the web interface calls it).
After that (about three hours), the unit was stopped again, drive #3 was replaced with a second WD20EARS, and the box was restarted and re-synced once more.

The partition table for /dev/sdc and /dev/sdd now looks like this:

drive #   blocks       device      system
3         297171       /dev/sdc1   83 Linux
3         498015       /dev/sdc2   83 Linux
3         1952716815   /dev/sdc4   5 Extended
3         136521       /dev/sdc5   82 Linux swap
3         1            /dev/sdc6   83 Linux
4         297171       /dev/sdd1   83 Linux
4         498015       /dev/sdd2   83 Linux
4         487588815    /dev/sdd4   5 Extended
4         136521       /dev/sdd5   82 Linux swap
4         487315678+   /dev/sdd6   83 Linux

So far, so good. Last two steps now:
first, grow the md array to the maximum available size:

  1. mdadm --grow /dev/md3 -z max
On a 2 TB volume, this takes about 20 hours...
Next, grow the XFS file system with:
  1. xfs_growfs /mnt/array2/
The XFS file system can be grown while mdadm --grow is still running.
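
While the grow is running, progress can be followed with the usual md tools (nothing TeraStation-specific here; /dev/md3 and /mnt/array2 are the names used above):

  1. cat /proc/mdstat            # shows the resync/resize progress and an estimated finish time
  2. mdadm --detail /dev/md3     # reports the array size and the state of each component
  3. df -h /mnt/array2           # the extra space shows up here once xfs_growfs has done its job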

Note that, differently from what is suggested in the procedure I used as a reference on nas-central.org, letting the web interface rebuild (i.e. "rear") the array was not only harmless, but worked as expected, perhaps just a tad more slowly.