Technology in the early 21st century is here to kill us. It stresses us out and shortens our lives and makes us feel like failed Neanderthals every single day.

We are drowning in data, yet almost no one controls their own. At best it happens to be properly backed up in someone else’s cloud for a little while, until that fails, or the company goes bankrupt, or it inevitably tries to extort money from you.

The solution to taking control of your own data must involve massive local redundant backed-up storage (MALREBS). There is no alternative. And it is a technically impossible task for 99.99% of the human race. In this day and age, everything precious to you is stored either by someone else who really doesn’t give a shit about your data, on ridiculously flaky spinning platters that randomly fail while you are sleeping, or on solid state drives that start wearing out the second you begin using them. But here we are, so let’s struggle on and get it done.

One required component of MALREBS is RAID. It turns those flaky hard drives from a 100% chance of total failure into perhaps something less than 90%. This is very, very important.

There is only one RAID solution to rely on: Linux software RAID (mdraid). It is the only option that is both free (in all senses) and reliable, in the sense that you can take your drives from a very old (possibly no-longer-existent) system to a brand new one and have a reasonable expectation that the data will still be accessible.

But here’s the rub: mdraid has not received the polish it needs to Just Work. It has serious flaws that, even after hours of reading, leave you unsure, hanging, and most likely bailing out of the entire process. But it is the best thing we have on the planet, so let’s distill it down to the essentials.

  1. check the S.M.A.R.T. data of the drives – run the self-tests and make sure they are completely healthy! (smartctl sketch after this list)
  2. clean the raid drives of old superblock and partition data (shown for /dev/sdd; repeat for /dev/sde)
    mdadm --misc --zero-superblock /dev/sdd && dd if=/dev/zero of=/dev/sdd bs=1M count=100 && mdadm --examine /dev/sdd
     mdadm: No md superblock detected on /dev/sdd.
  3. use whole drives (not partitions) in the newly created raid
    mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdd /dev/sde
     mdadm: size set to 3906887488K
     mdadm: automatically enabling write-intent bitmap on large array
     Continue creating array? yes
     mdadm: Defaulting to version 1.2 metadata
     mdadm: array /dev/md0 started.
    watch -n 1 cat /proc/mdstat
    # wait 400 FUCKING MINUTES for a GODDAMNED EMPTY 4TB DRIVE to sync with ANOTHER empty 4TB drive, FUCKSAKE
    # (there are resync speed knobs, sketched after this list)
    # IF YOU REBOOT BEFORE SAVING THE CONFIG IN STEP 4, IT MAY BE AS IF YOU NEVER SET UP A RAID
  4. save the array config, reboot, and make sure the mdraid service restores the raid
    mdadm --detail --scan >>/etc/mdadm.conf
     rc-update add mdraid boot
     # start, stop, then start the /etc/init.d/mdraid service again; make sure it restores your raid each time (check /proc/mdstat)
     # format /dev/md0 as ext4 and set up an auto mount point in /etc/fstab (sketch after this list)
     # reboot and pray
  5. hopefully much later, upon failure, assemble the surviving drive as a (degraded) raid to get at your data:
    mdadm --assemble --run /dev/md0 /dev/sdd # I THINK! it's all very iffy. Which SUCKS. (fuller recovery sketch after this list)
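
For step 1, assuming smartmontools is installed and /dev/sdd and /dev/sde are the two drives headed for the mirror, the checks might look like this:
    smartctl -H /dev/sdd           # quick overall health verdict
    smartctl -t long /dev/sdd      # start a full surface self-test (takes hours on a 4TB drive)
    smartctl -l selftest /dev/sdd  # read the self-test log once it finishes
    smartctl -A /dev/sdd           # eyeball reallocated / pending sector counts
    # repeat all of the above for /dev/sde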
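
About that 400-minute wait in step 3: the kernel throttles the initial resync so it doesn't starve other I/O. If the box is otherwise idle you can raise the floor with the md speed knobs (the 100000 figure below is just a guess at something close to disk speed):
    cat /proc/sys/dev/raid/speed_limit_min      # usually 1000 KB/s per device
    cat /proc/sys/dev/raid/speed_limit_max      # usually 200000
    sysctl -w dev.raid.speed_limit_min=100000   # let the resync run much faster on an idle box
    watch -n 1 cat /proc/mdstat                 # the speed= figure should climb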
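
For the ext4-and-fstab comment in step 4, one way to do it (the /mnt/raid mount point is just an example, use whatever you like):
    mkfs.ext4 /dev/md0    # create the filesystem on the array
    mkdir -p /mnt/raid    # the mount point
    blkid /dev/md0        # grab the UUID so fstab survives device renames
    echo 'UUID=<uuid-from-blkid> /mnt/raid ext4 defaults,noatime 0 2' >>/etc/fstab   # paste the real UUID in place of <uuid-from-blkid>
    mount /mnt/raid       # test the fstab entry before trusting a reboot to it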
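
And for step 5, the less iffy version as far as I can tell: when one half of the mirror dies, assemble the array degraded from the survivor, then add a fresh drive and let md rebuild onto it (device names here are only illustrative):
    mdadm --assemble --run /dev/md0 /dev/sdd   # --run starts the mirror even with a member missing
    mount /dev/md0 /mnt/raid                   # your data should all be here
    mdadm /dev/md0 --add /dev/sde              # add the replacement; md re-syncs onto it
    watch -n 1 cat /proc/mdstat                # same long wait as the initial sync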


8-bay eSATA JBOD enclosure. Next step… load it up!