Hive

From Bitpost wiki
Latest revision as of 16:31, 8 October 2023
== TODO ==


* run mh-setup-samba everywhere
** x cast
** x bandit
** glam
* update kodi shares to remove old, add mine
** x cast
** case (needs kodi update)
* update windoze mapped drive to point to mine/mack (get it to persist, ffs)
* move tv-mind, film to mine, pull them out of bitpost, harvest (they are 4 and 6 TB!)
* SHIT, i should have burn-tested the drives!! I am seeing errors, and I won't be able to fix them once the drives are loaded up... the true burn tests are destructive, so since i'm impatient i will just have to run SMART tests from the TrueNAS UI:
 da10  SHORT: SUCCESS
 da13  SHORT: SUCCESS
(keep going)
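The UI's SMART tests can also be kicked off from a shell, which is easier to repeat across many drives. A sketch, assuming the device names are da10 and da13 as above and that smartmontools' smartctl is installed (the loop degrades to a preview where it isn't):

```shell
# Queue a SHORT self-test on each drive, then list past results.
# Guarded so this is a harmless no-op preview without smartctl.
for dev in da10 da13; do
  echo "queueing SHORT self-test on /dev/$dev"
  if command -v smartctl >/dev/null 2>&1; then
    smartctl -t short "/dev/$dev"      # test runs on the drive itself (~2 min)
    smartctl -l selftest "/dev/$dev"   # re-run later to see SUCCESS/FAILURE
  fi
done
```

Extend the device list as more drives get tested.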


== Pools ==


FreeNAS provides storage via pools.  A pool is a set of raw drives gathered and managed as a unit.  My pools are each one of these:
* single drive: no FreeNAS advantage other than health checks
* raid1 pair: mirrored drives give normal write speeds, fast reads, and single-failure redundancy, at the cost of half the storage potential
* raid0 pair: striped drives give fast writes, normal reads, no redundancy, and no storage cost
* raid of multiple drives: FreeNAS optimization of read/write speed, redundancy, and storage potential
The three levels of raidz are:
* raidz: one drive's worth of capacity is consumed for parity (i.e. you only get (n-1) drives of storage), and one drive can be lost without losing any data; fastest
* raidz2: two drives' worth for parity, two can be lost
* raidz3: three drives' worth for parity, three can be lost; slowest
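As a worked example of the simple (n - parity) model above, the 7x 8 TB raidz array works out to:

```shell
# Back-of-envelope raidz capacity: (drives - parity) * drive size.
# Ignores ZFS metadata/slop overhead and TB-vs-TiB rounding.
n=7          # drives in the vdev
size_tb=8    # per-drive size in TB
parity=1     # raidz=1, raidz2=2, raidz3=3
echo "usable ~ $(( (n - parity) * size_tb )) TB"
```

This prints "usable ~ 48 TB"; the ~43 TB the UI reports for mine is consistent with that, since 48 TB is about 43.7 TiB before ZFS overhead.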
{| class="wikitable"
! Pool
! Capacity
! Type
! Drives
|-
|grim
|7.2 TB
| style="color:red" |raid0
|3.64 TB ssd x2
|mine
|43 TB
| style="color:green" |raidz
|8 TB ssd x7
|-
|-
|safe
|5.2 TB
| style="color:green" |raidz
|1 TB ssd x7
|}
NOTE that there is an eighth 8TB drive matching the mine array in [[Bluto]].  Harvest it from there if one of the 7 dies.
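If one of mine's seven drives does die, swapping in the Bluto spare boils down to a zpool replace once the spare is cabled in. A sketch with hypothetical device ids (find the real failed device via zpool status first); the commands are echoed rather than run, since the ids are placeholders:

```shell
# FAILED comes from `zpool status mine`; NEW is the harvested 8 TB spare.
# Both ids below are hypothetical placeholders.
POOL=mine
FAILED=da5    # placeholder
NEW=da16      # placeholder
echo "zpool replace $POOL $FAILED $NEW"
echo "zpool status -v $POOL   # watch the resilver"
```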
== Hardware ==


* LSI 8-drive SAS board passed through proxmox to hive as "PCI Device":

[[File:Melange-LSI-board.png]]

* 7x 1 TB Crucial SSDs

Plugged in to SATA 1, 2, 3 and U.2 1, 2, 3, 4. NOTE: getting the U.2 drives recognized by the Melange ASUS mobo required a BIOS change:

 BIOS > Advanced > Onboard Devices Config > U.2 mode (bottom) > SATA (NOT PCI-E)
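A quick sanity check from the hive shell that every drive behind the HBA and the U.2 ports actually showed up (TrueNAS CORE is FreeBSD, so camcontrol applies; the check is read-only and guarded so it degrades to a note on hosts without it):

```shell
# List every disk device the kernel sees; the 7 Crucial SSDs should all
# appear as da*/ada* entries. Guarded: prints a note where camcontrol
# (FreeBSD) isn't available, e.g. on a Linux box.
if command -v camcontrol >/dev/null 2>&1; then
  camcontrol devlist
else
  echo "camcontrol not available on this host"
fi
```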

== Hive history ==