Hive

From Bitpost wiki

Latest revision as of 16:31, 8 October 2023

== TODO ==


* run mh-setup-samba everywhere
** x cast
** x bandit
** glam
* update kodi shares to remove old, add mine
** x cast
** case (needs kodi update)
* update windoze mapped drive to point to mine/mack (get it to persist, ffs)
* move tv-mind, film to mine, pull them out of bitpost, harvest (they are 4 and 6TB!!)
* SHIT i should have burn-tested the drives!!! I AM SEEING ERRORS!! And I won't be able to fix once i have them loaded up... rrr... the true burn tests are destructive... since i'm an impatient moron, i will just have to do SMART tests from TrueNAS UI...
** da10 SHORT: SUCCESS
** da13 SHORT: SUCCESS
** (keep going)
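The SMART tests noted above can also be kicked off and reviewed from a shell on the NAS, not just the TrueNAS UI. A hedged sketch (smartmontools assumed available; da10 used as the example device, as in the results above):

```shell
# Kick off a short SMART self-test on one drive (takes a couple of minutes).
smartctl -t short /dev/da10

# Later: review the self-test log and the overall health verdict.
smartctl -l selftest /dev/da10
smartctl -H /dev/da10
```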
 
== Overview ==
 
FreeNAS provides storage via Pools. A pool is a set of raw drives gathered and managed as a unit. Each of my pools is one of these:
* single drive: no FreeNAS advantage other than health checks
* raid1 pair: mirrored drives give normal write speeds, fast reads, and single-failure redundancy, at the cost of half the storage potential
* raid0 pair: striped drives give fast writes, normal reads, no redundancy, and no storage cost
* raidz across multiple drives: FreeNAS balances read/write speed, redundancy, and storage potential
 
The three levels of raidz are:
* raidz: one drive's worth of capacity is consumed for parity (no data storage there, ie you only get (n-1) drives of storage total), and one drive can be lost without losing any data; fastest
* raidz2: two drives' worth of capacity for parity, two can be lost
* raidz3: three drives' worth of capacity for parity, three can be lost; slowest
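A quick sanity check on the (n-1) arithmetic, using the mine pool (7 x 8 TB drives in raidz) as the example:

```shell
# raidz usable capacity: (n - 1) drives' worth of storage.
n=7
size_tb=8
usable=$(( (n - 1) * size_tb ))
echo "${usable} TB"   # raw figure; ZFS overhead and TB-vs-TiB reporting shrink what the UI shows
```

This prints 48 TB raw, which lines up with the ~43 TB the UI reports for mine after overhead.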


== Pools ==

{| class="wikitable"
! Pool
! Capacity
! Type
! Drives
|-
|grim
|7.2 TB
| style="color:red" |raid0
|3.64 TB ssd x2
|-
|mine
|43 TB
| style="color:green" |raidz
|8 TB ssd x7
|-
|safe
|5.2 TB
| style="color:green" |raidz
|1 TB ssd x7
|}
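Assuming shell access to the NAS, the capacity and layout figures in this table can be cross-checked with the stock ZFS tools (standard command names, no TrueNAS-specific flags assumed):

```shell
# One line per pool: size, allocated, free, health.
zpool list

# Full vdev layout (raidz vs mirror vs stripe) and per-drive state for one pool.
zpool status safe
```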


NOTE that there is an eighth 8TB drive matching the mine array in [[Bluto]]. Harvest it from there if one of the 7 dies.

== Datasets ==
 
Every pool should have one dataset. This is where we set the permissions, which are important for Samba access.
 
Dataset settings:
  name       #pool#-ds
  share type SMB
  user       m
  group      m
  ACL
    who        everyone@
    type       Allow
    Perm type  Basic
    Perm       Full control
    Flags type Basic
    Flags      Inherit
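The UI steps above have a rough shell equivalent; this is a hedged sketch only (pool name safe and TrueNAS CORE's FreeBSD userland assumed; the UI remains the supported path):

```shell
# Create the dataset (UI naming scheme #pool#-ds, here for the safe pool).
zfs create safe/safe-ds

# Owner user/group m, as in the dataset settings above.
chown m:m /mnt/safe/safe-ds

# NFSv4 ACL matching the UI choices: everyone@ / Allow / Full control / Inherit.
# f = file_inherit, d = dir_inherit (FreeBSD setfacl syntax).
setfacl -m everyone@:full_set:fd:allow /mnt/safe/safe-ds
```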
 
== Windows SMB Shares ==
 
Share each dataset as a Samba share under:
 Sharing > Windows Shares (SMB)
 
Use the pool name for the share name.
 
Use the same ACL as for the dataset.
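For the Windows side (the "get it to persist" TODO item above), a mapped drive can be made to survive reboots from cmd. The host name hive and share name mine are examples taken from this page:

```shell
:: Run in Windows cmd, not on the NAS.
net use M: \\hive\mine /persistent:yes
```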


== Hardware ==
* LSI 8-drive SAS board passed through proxmox to hive as "PCI Device":

[[File:Melange-LSI-board.png]]

* 7 1 TB Crucial SSDs, plugged in to SATA 1, 2, 3 and U.2 1, 2, 3, 4.

NOTE: to get the U.2 drives recognized by the Melange ASUS mobo required a BIOS change:

 Bios > advanced > onboard devices config > U.2 mode (bottom) > SATA (NOT PCI-E)
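Assuming TrueNAS CORE (FreeBSD, matching the da10/da13 device names above), the BIOS change can be verified by checking that every drive actually enumerates:

```shell
# List every disk the HBA, SATA, and U.2 ports expose (FreeBSD CAM layer).
camcontrol devlist
```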
