Hive

TODO

  • move sassy and splat data to newbee, then remove those pools (see the migration sketch below)
  • continue migration until we have 7 slots free (16 - 7 newbee - 2 grim)
  • add 7 new BIG ssds in new raidz array
  • rinse and repeat
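
A minimal sketch of one migration round, done at the CLI rather than the FreeNAS GUI; the target pool name "bighive" and the device names da8..da14 are placeholders:

 # replicate everything in sassy into a child dataset on newbee
 zfs snapshot -r sassy@migrate
 zfs send -R sassy@migrate | zfs receive -u newbee/sassy
 # same for splat; verify the copies, then retire the old pools to free their slots
 zfs snapshot -r splat@migrate
 zfs send -R splat@migrate | zfs receive -u newbee/splat
 zpool destroy sassy
 zpool destroy splat
 # once 7 slots are free, build the new raidz vdev from the big SSDs
 # (check the real device names first with: camcontrol devlist)
 zpool create bighive raidz da8 da9 da10 da11 da12 da13 da14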

Overview

FreeNAS provides storage via Pools. A pool is a set of raw drives gathered and managed as one unit. Each of my pools uses one of these layouts (created roughly as sketched after the list):

  • single drive: no FreeNAS advantage other than health checks
  • raid1 pair: mirrored drives give normal write speeds, fast reads, and single-failure redundancy, at the cost of half the raw capacity
  • raid0 pair: striped drives give fast writes, normal reads, no redundancy, and no capacity cost
  • raid of multiple drives: FreeNAS balances read/write speed, redundancy, and storage potential (raidz levels below)
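
For reference, a rough sketch of how each layout maps onto zpool create (pool name "mypool" and device names ada0..ada3 are placeholders; FreeNAS normally builds these from the GUI):

 # single drive
 zpool create mypool ada0
 # raid1 pair (mirrored vdev)
 zpool create mypool mirror ada0 ada1
 # raid0 pair (two single-disk vdevs, striped)
 zpool create mypool ada0 ada1
 # raidz across multiple drives
 zpool create mypool raidz ada0 ada1 ada2 ada3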

The three raidz parity levels are (usable capacity worked out below):

  • raidz: one drive's worth of capacity is consumed by parity (i.e. n drives give you (n-1) drives of storage), and one drive can be lost without losing any data; fastest
  • raidz2: two drives for parity, two can be lost
  • raidz3: three drives for parity, three can be lost; slowest
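
Worked out for newbee's 7 x 1 TB SSDs (ignoring ZFS metadata overhead):

 usable ≈ (n - parity drives) x drive size
 raidz : (7 - 1) x 1 TB = 6 TB   <- matches newbee below
 raidz2: (7 - 2) x 1 TB = 5 TB
 raidz3: (7 - 3) x 1 TB = 4 TB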

Hardware

  • Drives
 Pool       Capacity  Type    Drives
 sassy      0.2 TB    single  250 GB SSD
 splat      3.6 TB    raid0   1.82 TB HDD x2
 mack       0.9 TB    single  1 TB SSD
 reservoir  2.7 TB    single  2.73 TB HDD
 grim       7.2 TB    raid0   3.64 TB SSD x2
 newbee     6 TB      raidz   1 TB SSD x7
  • LSI 8-drive SAS board passed through Proxmox to hive as a "PCI Device" (see the passthrough sketch at the end of this section):

Melange-LSI-board.png

  • 7x 1 TB Crucial SSDs

These are plugged into SATA ports 1, 2, 3 and U.2 ports 1, 2, 3, 4. NOTE: getting the U.2 drives recognized by the Melange ASUS motherboard required a BIOS change:

BIOS > Advanced > Onboard Devices Config > U.2 mode (bottom) > SATA (NOT PCI-E)
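
On the Proxmox side, the passthrough boils down to something like the following (VM id 100 and PCI address 01:00.0 are placeholders; get the real values from qm list and lspci):

 # find the LSI HBA's PCI address on the Proxmox host
 lspci | grep -i lsi
 # hand the whole board to the hive VM as a raw PCI device
 qm set 100 --hostpci0 0000:01:00.0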

Hive history