= Proxmox =<br />
== Available VM Types ==<br />
* Place ISOs in /var/lib/vz/template/iso<br />
* To upload via the Proxmox web ui (a CLI alternative is sketched below):<br />
Storage View > Datacenter > melange > local > (you might have to hit refresh now!) > ISOs > Upload<br />
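Or fetch straight into the template dir from a melange shell (the URL is just an example):<br />
cd /var/lib/vz/template/iso<br />
wget https://releases.ubuntu.com/22.04/ubuntu-22.04-live-server-amd64.iso<br />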
<br />
== VM Installation ==<br />
* VM install from ISO<br />
** When you first boot an ubuntu iso, it will behave like an installation thumb drive.<br />
** Install to the only available drive of the VM (/dev/sda). Proxmox is smart enough to allow this. The install ISO ends up as a DVD drive.<br />
** Once VM install is completed, remove the DVD:<br />
*** sudo umount /dev/sr0 (or whichever device the install DVD shows up as)<br />
*** Proxmox > (VM) > hardware > DVD > remove<br />
*** Proxmox > (VM) > shutdown, then start (or detach it from the CLI; see the sketch below)<br />
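The DVD detach can also be done from the melange shell; a sketch, assuming the ISO sits on ide2 (the usual Proxmox slot for the install CD) and VM ID 105 as an example:<br />
sudo qm set 105 --ide2 none,media=cdrom<br />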
<br />
== Changing VM disk size ==<br />
<br />
=== Grow ===<br />
Increase is fairly easy:<br />
* (VM) > Hardware > Hard Disk > Disk Action > Resize > Add number of GB to increase<br />
* reboot the VM<br />
* Update the OS<br />
==== Ubuntu ====<br />
* Update the ubuntu filesystem (if you didn't use LVM (which is basically useless))<br />
sudo parted /dev/sda # use the right one, obv!<br />
print # to get partition list<br />
resizepart 2 # use the right one, obv!<br />
End? 100%<br />
sudo resize2fs /dev/sda2 # same partition number you resized above<br />
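If cloud-guest-utils is installed, the same grow works non-interactively; a sketch using partition 2 as above:<br />
sudo growpart /dev/sda 2<br />
sudo resize2fs /dev/sda2<br />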
<br />
If you used LVM in ubuntu (stop doing that!), you have to do all this:<br />
sudo su -<br />
parted -s -a opt /dev/sda "print free" # see existing partitions<br />
parted -s -a opt /dev/sda "print free" "resizepart 3 100%" "print free" # resize partition to fill space<br />
pvresize /dev/sda3 # to get the LVM resized to the partition<br />
lvdisplay<br />
lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv # to resize the logical volume<br />
resize2fs /dev/ubuntu-vg/ubuntu-lv<br />
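Either way, a quick sanity check that the grow took:<br />
df -h /    # root FS should now show the new size<br />
lsblk /dev/sda<br />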
<br />
==== Mac ====<br />
For a macOS VM, thenickdude has [https://www.nicksherlock.com/2021/12/expanding-the-disk-of-your-proxmox-macos-vm/ notes].<br />
<br />
* In Proxmox, stop the VM, select the hard disk under Hardware, do a Disk Action > Resize to increase the size, and restart the VM<br />
* Fix it up in an OSX terminal:<br />
diskutil list<br />
# fix the partition<br />
diskutil repairDisk disk0<br />
# actually resize the contents<br />
diskutil apfs resizeContainer disk1 0<br />
<br />
==== Windows ====<br />
* Use diskpart to remove the recovery partition, if it sits between C:\ and the new free space (sketch below).<br />
* Use disk manager to expand C:\<br />
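A sketch of that diskpart session (disk and partition numbers are examples; check the list output before deleting anything):<br />
diskpart<br />
DISKPART> list disk<br />
DISKPART> select disk 0<br />
DISKPART> list partition<br />
DISKPART> select partition 4<br />
DISKPART> delete partition override<br />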
<br />
=== Shrink ===<br />
I have not yet found a way to shrink a VM's disk. I had to significantly shrink [[bandit]]'s drive. I ended up rebuilding it from scratch on a smaller starting drive.<br />
<br />
Obv. ridiculous. Not for lack of trying. Read on for some notes from failed attempts.<br />
<br />
Clonezilla MAY be able to shrink partitions, clone them, and then restore them to same-sized partitions on a smaller drive. I never got it to work successfully; you would have to get pretty good at precisely sizing partitions. Some basic steps follow...<br />
* Create a new vm with the new disk size you want (and a diff hostname).<br />
* Old vm > Add CD > Use the clonezilla iso > change boot order > boot into clonezilla<br />
* Select the advanced mode > for target, select samba > use m, /spiceflow/softraid/backup, 192.168.21.1<br />
* It should prompt for password and verify the drive size, sweet!!<br />
* Clone the PARTITIONS, not the DRIVE, or it won't shrink and it won't fit on the new drive<br />
* Go through all the prompts using defaults to send /dev/sda to a backup file on softraid<br />
* Shut down the old vm, move the clonezilla iso to the new vm, boot clonezilla<br />
* Use advanced mode, restore from the smb image (similar to saveparts), but make sure to select -icds (skip partition size check); I also had it create a new partition table (was that good? I got an error)<br />
* fix the hostname in proxmox, and dnsmasq MAC address on bitpost<br />
<br />
More horror that never worked...<br />
<br />
Shrinking is VERY painful and instructions are VERY hard to find; a lot of trial and horror... and no luck yet.<br />
* Add CD > choose gparted live cd > boot with it, shrink partition<br />
* Stop the VM, remove the CD<br />
* ssh melange # and massage the disk there<br />
sudo su -<br />
lvdisplay | grep "LV Path\|LV Size"<br />
qemu-img info /dev/pve/vm-105-disk-0<br />
# DO NOT RESIZE DIRECTLY, it fucks the disk: qemu-img resize -f raw vm-105-disk-0 -210G<br />
# FYI THIS ALSO FUCKED THE DRIVE: lvreduce -L 40G /dev/pve/vm-105-disk-0<br />
# i think that is ONLY FOR LVM DISKS<br />
mount /spiceflow/bitpost/<br />
qemu-img convert vm-105-disk-0 /spiceflow/bitpost/softraid/backup/vm-105-disk-0.raw<br />
qemu-img resize -f raw /spiceflow/bitpost/softraid/backup/vm-105-disk-0.raw --shrink -200G<br />
Image resized.<br />
# DO NOT DO THIS it will FUCK THE DISK: # qemu-img convert -p -O qcow2 /spiceflow/bitpost/softraid/backup/vm-105-disk-0.raw vm-105-disk-0<br />
# now just copy it back over itself! weird but it works (no it doesn't, when you restore)<br />
cp /spiceflow/bitpost/softraid/backup/vm-105-disk-0.raw vm-105-disk-0<br />
* edit 105.conf and reduce disk to the proper size (not sure if that's just so the GUI looks correct?)<br />
emacs /etc/pve/nodes/melange/qemu-server/105.conf<br />
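The line to edit is the disk entry; it has this shape (format matches the 104.conf dump later in these notes - the size value is the part to change):<br />
scsi0: local-lvm:vm-105-disk-0,size=40G<br />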
<br />
== Changing VM CPU allocation ==<br />
<br />
* Navigate to (VM) > Hardware > Processors<br />
* Adjust core count. Overallocation is a very good idea. CPU cores will be used only as threads need them. I've read that some shops successfully overallocate by a factor of 20:1. With our 12 Ryzen 9 cores, that would mean allocating 240 total cores to our VMs(!). Don't do that, ha.<br />
* Stick with 1 socket (that's just for sizing to match any paid licensing); see the CLI sketch below<br />
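The CLI equivalent (a sketch; VM ID and core count are examples):<br />
sudo qm set 105 --cores 8 --sockets 1<br />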
<br />
== VM remote desktop display ==<br />
I'm using [[SPICE]] for a full responsive 4k UI on other thin(ner) clients.<br />
<br />
=== SPICE memory ===<br />
Using vscode on abtdev1 on a second desktop would crash the VM with SPICE memory set to 36(MB). I bumped it to 80 and it seems okay. It apparently needs more when you use more desktops, in both Windows and linux, according to [https://melange:8006/pve-docs/chapter-qm.html#qm_display this].<br />
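In the VM conf that setting lands on the vga line; a sketch (qxl is the SPICE display type):<br />
vga: qxl,memory=80<br />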
<br />
== Proxmox config ==<br />
There is a LOT of configuration. After getting it all set, I copied it all out, following [https://forum.proxmox.com/threads/how-to-backup-proxmox-configuration-files.67789/ some forum suggestion]. Try to keep it updated!<br />
<br />
Config is here:<br />
development/config/proxmox/melange<br />
<br />
Manage it from here:<br />
🌐 m@melange [~/development/config/proxmox/melange] cat backup.txt<br />
<br />
For example, to back up a VM config after making changes:<br />
🌐 m@melange [~/development/config/proxmox/melange/etc/pve/qemu-server] git pull<br />
git mv 111.conf 111.conf-old-monterey<br />
sudo cp /etc/pve/qemu-server/111.conf .<br />
sudo chown m:m 111.conf<br />
git add 111.conf<br />
git commit -a -m "Updated matcha 111 config now that Ventura is working" && git push<br />
<br />
=== Storage ===<br />
==== CIFS ====<br />
To mount a samba share as proxmox storage:<br />
proxmox UI > Server view > Datacenter > Storage > Add > CIFS > enter fields<br />
<br />
Remember to set Content Type to include the things you will place there.<br />
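The result is serialized into /etc/pve/storage.cfg roughly like this (a sketch; values are examples modeled on the bitpost-backup share described under Backups):<br />
cifs: bitpost-backup<br />
    path /mnt/pve/bitpost-backup<br />
    server 192.168.21.1<br />
    share backup<br />
    content backup<br />
    username m<br />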
<br />
=== Backups ===<br />
<br />
* I am backing up to softraid/backup on bitpost. See the bitpost-backup SMB share under the melange tree in the lefthand bar.<br />
* I am backing up EVERY VM. In order to back up the hive VM (104), which includes TBs of storage that we cannot back up, you must mark every drive other than the OS drive with ",backup=no", here:<br />
/etc/pve/nodes/melange/qemu-server/104.conf<br />
* Proxmox puts files here:<br />
/spiceflow/softraid/backup/dump/vzdump-qemu-103-2022_03_24-09_15_59.vma.zst<br />
* I am currently keeping 1 backup and 1 "weekly" backup. Odd, but you configure that in Storage:<br />
proxmox UI > Server view > Datacenter > Storage > bitpost-backup > Edit > Backup Retention tab<br />
It's a bit confusing, [https://pbs.proxmox.com/docs/prune-simulator/ this tool] helps a lot!<br />
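For reference, that retention choice ends up in /etc/pve/storage.cfg as something like:<br />
prune-backups keep-last=1,keep-weekly=1<br />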
<br />
Configuration:<br />
* I set up a dedicated CIFS share for that on bitpost, named backup.<br />
* I added a Storage that uses it, and set 'VZDump backup file' as the Content Type.<br />
* From there, it's easy to configure VM backups.<br />
proxmox UI > Server view > Datacenter > Backup > Add<br />
I don't want to try to back up my massive FreeNAS data! So select "Exclude VMs" and check hive/104.<br />
* If something goes wrong, you can kill the backup task from a terminal:<br />
🌐 m@melange [~] ps ax|grep vz<br />
🌐 m@melange [~] sudo kill -9 #pid#<br />
# this may cause "VM is locked (backup)" - to fix that:<br />
sudo qm unlock 101<br />
<br />
* You can watch the backup job logging in the proxmox system log:<br />
sudo tail -f /var/log/syslog <br />
...<br />
Mar 24 09:23:01 melange pvedaemon[21327]: INFO: Finished Backup of VM 103 (00:07:02)<br />
Mar 24 09:23:01 melange pvedaemon[21327]: INFO: Starting Backup of VM 105 (qemu)<br />
...<br />
<br />
=== debian ===<br />
Debian apt-get stopped working for me due to distro changes. Run this to fix it:<br />
ssh melange<br />
sudo apt-get --allow-releaseinfo-change update<br />
<br />
== Proxmox update ==<br />
See [[Melange#Upgrade|melange upgrade notes]].<br />
<br />
== Proxmox Installation ==<br />
* In BIOS, enable SVM (cpu virtualization) (you need a modern AMD or Intel chipset)<br />
* Get the latest proxmox release ISO, dd it to a thumb drive (use [[Flash_Drives]] SAM 64 EVO)<br />
* Boot and install onto the primary drive over any existing OSes<br />
* During install, use ext4<br />
** I deemed ZFS too fancy, it's basically software raid, and troublesome according to some<br />
* Create a user and use ssh key (NOTE you might want to keep some root terminals open so you don't screw up and get locked out!):<br />
apt install sudo<br />
adduser m<br />
visudo # and allow m to sudo<br />
nano /etc/ssh/sshd_config # and turn off password login, root login<br />
su - m<br />
# set up ssh keys<br />
# in another terminal, test:<br />
ssh melange<br />
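# e.g. from your desktop, for the "set up ssh keys" step above (assumes a key already exists there):<br />
ssh-copy-id m@melange<br />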
* Fix the fucking default proxmox apt repo from "enterprise" to "no-subscription". BTW, they say it's not well tested, and you better [https://www.proxmox.com/en/proxmox-ve/pricing pay for a subscription] and get a key for the "better" repo. Cmon you all, that is utter bullshit...<br />
# Either delete /etc/apt/sources.list.d/pve-enterprise.list file or comment all lines in this file with #.<br />
emacs -nw /etc/apt/sources.list.d/pve-no-subscription.list<br />
deb http://download.proxmox.com/debian/pve buster pve-no-subscription<br />
<br />
= Melange history =<br />
== GRIM DEATH ==<br />
<br />
2024/03/23 We are harvesting the hive grim drives for use as melange drives for VM storage.<br />
<br />
I changed the location of the hive System Dataset Pool from grim to safe.<br />
<br />
I copied all grim data to safe (oops, i forgot SharedDownloads... it's empty now...)<br />
<br />
I removed the 'grim' pool from FreeNAS.<br />
<br />
Now I need to move the drives! I want to keep the PCI card as pass-thru, but the two grim drives are on it.<br />
* make note of all hive drive assignments<br />
* open melange<br />
** remove both grim drives from the PCI passthru<br />
** move the one mine drive that is on SATA from SATA to one of the PCI passthroughs<br />
** move one safe drive from SATA to the other of the PCI passthroughs<br />
** add both grim drives to SATA<br />
* close and restart melange and see if you can reconnect everything<br />
** the grim drives should now show up on melange, not passed through<br />
** the safe and mine drives should show up passed through, but perhaps hive cannot associate them; if not, try to fix<br />
** if not, RESET/KEEP GOING brother!<br />
<br />
Let's go...<br />
<br />
First, capture everything...<br />
<br />
SATA DRIVES:<br />
<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sda 8:0 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01 /dev/disk/by-id/wwn-0x500a0751e5a2fb01<br />
sdb 8:16 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdc 8:32 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdd 8:48 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sde 8:64 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdf 8:80 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131 /dev/disk/by-id/wwn-0x500a0751e5a2e131<br />
sdg 8:96 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A /dev/disk/by-id/wwn-0x500a0751e5a3009a<br />
sdh 8:112 1 7.3T 0 disk /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H /dev/disk/by-id/wwn-0x5002538f33710e1d<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E /dev/disk/by-id/nvme-eui.002538510141169d<br />
<br />
<br />
104 PASSTHRUS:<br />
<br />
🌐 m@melange [~] sudo cat /etc/pve/qemu-server/104.conf<br />
boot: order=scsi0;net0<br />
cores: 4<br />
hostpci0: 0a:00.0,rombar=0<br />
memory: 24576<br />
name: hive<br />
net0: virtio=DA:E8:DA:81:EC:64,bridge=vmbr0,firewall=1<br />
numa: 0<br />
onboot: 1<br />
ostype: l26<br />
scsi0: local-lvm:vm-104-disk-0,size=25G<br />
scsi11: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56,size=976762584K,backup=no,serial=2122E5A2FD56<br />
scsi12: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01,size=976762584K,backup=no,serial=2122E5A2FB01<br />
scsi13: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2,size=976762584K,backup=no,serial=2122E5A313D2<br />
scsi14: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6,size=976762584K,backup=no,serial=2122E5A313D6<br />
scsi15: /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B,size=976762584K,backup=no,serial=2117E59AAE1B<br />
scsi16: /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131,size=976762584K,backup=no,serial=2121E5A2E131<br />
scsi17: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A,size=976762584K,backup=no,serial=2122E5A3009A<br />
scsi18: /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H,backup=0,serial=S5VUNJ0W706320H,size=7814026584K<br />
scsihw: virtio-scsi-pci<br />
smbios1: uuid=dc3d077c-0063-41fe-abde-a97674d14dc8<br />
sockets: 1<br />
startup: order=1,up=90<br />
vmgenid: 1dcb048d-122f-4343-ad87-1a01ee8284a6<br />
<br />
PRE-MOVE DRIVES:<br />
<br />
* 7 1tb safe drives on SATA<br />
* 1 8tb mine drive on SATA<br />
* 2 4tb grim drives on PCI<br />
* 6 8tb mine drives on PCI<br />
<br />
PRE-MOVE hive:<br />
da7 2122E5A3009A 931.51 GiB safe<br />
da8 S5VUNJ0W706320H 7.28 TiB mine<br />
da11 S5B0NW0NB01796J 3.64 TiB N/A<br />
da12 S4CXNF0M307721X 3.64 TiB N/A<br />
<br />
POST-MOVE DRIVES:<br />
<br />
* 6 1tb safe drives on SATA<br />
* 2 8tb mine drives on SATA<br />
* 1 1tb safe drive on SATA<br />
* 7 8tb mine drives on PCI<br />
<br />
=== STEPS THAT MAY NEED REVERSAL ===<br />
<br />
Do I need to adjust hive or melange before opening the case? I guess I could remove the grim SATA passthru... AND the SAFE drive I'm going to move, too; it will no longer pass through (we will be using the PCI card passthru for it).<br />
<br />
* shut down VMS, then hive (but not melange)<br />
* remove two drives from SATA passthru, first is 1 from SAFE (moving to PCI card) and second is 1 from MINE<br />
<br />
scsi16: /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131,size=976762584K,backup=no,serial=2121E5A2E131<br />
<br />
^^^ SWITCHING TO SWAPPING THIS ONE, it is easier to access for swapping<br />
<br />
scsi17: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A,size=976762584K,backup=no,serial=2122E5A3009A<br />
<br />
^^ ACTUALLY, NOT THIS ONE, leave it<br />
<br />
scsi18: /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H,backup=0,serial=S5VUNJ0W706320H,size=7814026584K<br />
* shut down melange<br />
* remove the two 4TB drives from PCI: S5B0NW0NB01796J, S4CXNF0M307721X<br />
* move the only 8TB from SATA to PCI: S5VUNJ0W706320H<br />
* move 1 1TB from SATA to PCI: 2121E5A2E131 (not 2122E5A3009A)<br />
* connect the two 4TB drives to SATA: S5B0NW0NB01796J, S4CXNF0M307721X<br />
<br />
WENT GREAT, hive recognized the move without me doing ANYTHING!!<br />
<br />
Now we can set up the ex-grim 4TBs for VM usage, yay.<br />
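A sketch of what that setup could look like (device names, VG name, and storage ID are all hypothetical - check lsblk first!):<br />
sudo sgdisk --zap-all /dev/sdi   # wipe old ZFS labels; example device!<br />
sudo sgdisk --zap-all /dev/sdj<br />
sudo vgcreate grimvg /dev/sdi /dev/sdj<br />
sudo pvesm add lvm grim-vm --vgname grimvg --content images<br />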
<br />
== 2023 July full upgrade ==<br />
I need to install a Windows 11 VM. Also have Ubuntu 20.04 machines that should be moved to 22.04. Figured as good a reason as any for a full upgrade of everything.<br />
<br />
* Upgrade bitpost first. Upon reboot, my $*(@ IP changed again. Fuck you google. Spent a while resetting that. Here are the notes (also in my Red Dead RP journal, keepin it real (real accessible when everything's down), lol!):<br />
cast$ ssh bitpost # not bp or bitpost.com, so we get to the LAN resource<br />
sudo su -<br />
stronger_firewall_and_save # internet should now work<br />
# get new IP from whatsmyip<br />
# fix bitpost.com DNS at domains.google.com<br />
# WAIT for propagation.... might as well fix the other DNS records...<br />
sudo service dnsmasq restart<br />
ping bitpost.com # EVENTUALLY this will work! May need to repeat this AND previous step.<br />
* Ask Tom to update E-S DNS to use new IP<br />
* Upgrade abtdev1, then all Ubuntu boxes (glam is toughest), then positronic last, with this pattern:<br />
mh-update-ubuntu # and reboot<br />
sudo do-release-upgrade # best to connect directly, but ssh worked fine too<br />
sudo shutdown -h now # to prep for melange reboot<br />
* Upgrade hive's TrueNAS install, via https://hive CHECK FOR UPDATES, then shut it down<br />
* Update and reboot melange PROXMOX install, via https://melange:8006 Datacenter > melange > Updates<br />
* CHECK EVERYTHING<br />
** proxmox samba share for backups<br />
** samba shares<br />
** at ptl to ensure it can get to positronic<br />
** shitcutter and blogs and wiki and...<br />
** I had a terrible time getting GLAM apache + PHP working again now that Ubuntu uses PHP 8.1; just needed to ENABLE THE MODULE, ffs:<br />
a2enmod php8.1<br />
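systemctl restart apache2   # and restart apache after enabling the module<br />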
<br />
== 6.3 > 7.0 ==<br />
<br />
Proxmox uses apt for upgrades.<br />
I followed [https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 this], for the most part.<br />
* Update all VMS<br />
* Shut down all VMS<br />
* Fully update current version's apt packages - this took me from 6.3 to 6.4, a necessary first step.<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
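# the upgrade wiki also ships a readiness checker - worth running now, while still on 6.4:<br />
sudo pve6to7 --full<br />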
* Upgrade basic apt sources list from buster to bullseye<br />
sudo sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list<br />
# instructions discuss pve-enterprise but i needed to change pve-no-subscription instead - but same exact steps, otherwise<br />
# ie, leave this commented out, but might as well set to bullseye<br />
# /etc/apt/sources.list.d/pve-enterprise.list<br />
# and update this to bullseye<br />
# /etc/apt/sources.list.d/pve-no-subscription.list<br />
* Perform the full upgrade to bullseye / pm 7<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Reboot<br />
<br />
== Manual restart notes ==<br />
<br />
NOTE: This shouldn't be a problem any more with newer staged order restart.<br />
<br />
One time the bandit samba shares didn't mount (it came up too fast, perhaps?). So restart them, then restart qbittorrent-nox:<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
I did another round of `apt update && apt dist-upgrade` without stopping containers and it went fine (with bandit fixup still needed after reboot, tho).<br />
<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
ssh bandit<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
== Add 7 1TB zraid ==<br />
<br />
After adding 7 new 1 TB ssds:<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sdh 8:112 0 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2fb01 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01<br />
sdi 8:128 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdj 8:144 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdk 8:160 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sdl 8:176 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdm 8:192 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2e131 /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131<br />
sdn 8:208 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a3009a /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-eui.002538510141169d /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E<br />
<br />
<br />
Before adding 7 new 1 TB ssds:<br />
<pre><br />
🌐 m@melange [~] ls /dev/<br />
autofs dm-8 i2c-7 net stdin tty28 tty5 ttyS12 ttyS6 vcsu1<br />
block dm-9 i2c-8 null stdout tty29 tty50 ttyS13 ttyS7 vcsu2<br />
btrfs-control dri i2c-9 nvme0 tty tty3 tty51 ttyS14 ttyS8 vcsu3<br />
bus ecryptfs initctl nvme0n1 tty0 tty30 tty52 ttyS15 ttyS9 vcsu4<br />
char fb0 input nvme0n1p1 tty1 tty31 tty53 ttyS16 udmabuf vcsu5<br />
console fd kmsg nvme0n1p2 tty10 tty32 tty54 ttyS17 uhid vcsu6<br />
core full kvm nvme0n1p3 tty11 tty33 tty55 ttyS18 uinput vfio<br />
cpu fuse lightnvm nvram tty12 tty34 tty56 ttyS19 urandom vga_arbiter<br />
cpu_dma_latency gpiochip0 log port tty13 tty35 tty57 ttyS2 userio vhci<br />
cuse hpet loop0 ppp tty14 tty36 tty58 ttyS20 vcs vhost-net<br />
disk hugepages loop1 pps0 tty15 tty37 tty59 ttyS21 vcs1 vhost-vsock<br />
dm-0 hwrng loop2 psaux tty16 tty38 tty6 ttyS22 vcs2 watchdog<br />
dm-1 i2c-0 loop3 ptmx tty17 tty39 tty60 ttyS23 vcs3 watchdog0<br />
dm-10 i2c-1 loop4 ptp0 tty18 tty4 tty61 ttyS24 vcs4 zero<br />
dm-11 i2c-10 loop5 pts tty19 tty40 tty62 ttyS25 vcs5 zfs<br />
dm-12 i2c-11 loop6 pve tty2 tty41 tty63 ttyS26 vcs6<br />
dm-13 i2c-12 loop7 random tty20 tty42 tty7 ttyS27 vcsa<br />
dm-14 i2c-13 loop-control rfkill tty21 tty43 tty8 ttyS28 vcsa1<br />
dm-2 i2c-14 mapper rtc tty22 tty44 tty9 ttyS29 vcsa2<br />
dm-3 i2c-2 mcelog rtc0 tty23 tty45 ttyprintk ttyS3 vcsa3<br />
dm-4 i2c-3 mem shm tty24 tty46 ttyS0 ttyS30 vcsa4<br />
dm-5 i2c-4 mpt2ctl snapshot tty25 tty47 ttyS1 ttyS31 vcsa5<br />
dm-6 i2c-5 mpt3ctl snd tty26 tty48 ttyS10 ttyS4 vcsa6<br />
dm-7 i2c-6 mqueue stderr tty27 tty49 ttyS11 ttyS5 vcsu<br />
<br />
</pre><br />
<br />
== macOS USB passthru failed attempt ==<br />
<br />
Plain USB passthrough (mapping the device in the Proxmox UI) doesn't work on macOS. Tried setting the usb mapping via console instead, following this:<br />
sudo qm monitor 111<br />
qm> info usbhost<br />
qm> quit<br />
sudo qm set 111 -usb1 host=05ac:12a8<br />
<br />
No luck, same result. Reading Nick's remarks on USB forwarding, try resetting the machine type:<br />
machine: pc-q35-6.0 (instead of latest, which was 6.2 at time of writing)<br />
remove this from /etc/pve/qemu-server/111.conf: -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off<br />
<br />
Hmm.. perhaps it is a conflict between Nick's usb keyboard config and my usb port selection... try plugging usb into another port and remapping...<br />
<br />
No luck. FFS. Reset to 6.2 and see if we have any luck with hotplug line removed from config... Nope.<br />
<br />
Keep trying permutations... nothing from googling indicates that this shouldn't just FUCKING WORK...<br />
<br />
Remove this and re-add the hotplug line, on the off chance it shouldn't be used with q35 v6.2:<br />
-global nec-usb-xhci.msi=off<br />
<br />
Nope, that just caused a problem with "Springboard" not working on this Mac, or some shit. Re-adding the line...<br />
<br />
Well what now? Google more? <br />
<br />
Update and reboot proxmox and retry... no luck.<br />
<br />
Try changing from blue to light-blue port... the device is mapped so it should be passed through... nope.<br />
<br />
Try [https://raw.githubusercontent.com/vzamora/Proxmox-Cheatsheet/main/General%20Proxmox%20Setup%20and%20Notes/USB%20Passthrough%20Notes.md this guy's approach] to mount an EFI Disk<br />
lsusb<br />
Bus 004 Device 009: ID 05ac:12a8 Apple, Inc. iPhone 5/5C/5S/6/SE<br />
ls -al /dev/bus/usb/004/009<br />
crw-rw-r-- 1 root root 189, 392 Jul 22 16:10 /dev/bus/usb/004/009<br />
sudo emacs /etc/pve/qemu-server/111.conf<br />
lxc.cgroup.devices.allow: c 189:* rwm<br />
lxc.mount.entry: /dev/bus/usb/004 dev/bus/usb/004 none bind,optional,create=dir<br />
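# (note: lxc.* keys are LXC container options; a qemu-server VM conf ignores them, which likely explains the result)<br />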
<br />
Nope.<br />
<br />
Try mapping the port instead of device ID, from the Proxmox UI... Nope.<br />
<br />
How can I check the Apple side for any issues? Straight-up google for that: macOS not seeing a USB device.<br />
System Information > USB > nada<br />
<br />
hrmphhhh. Never got it working. RE-google next month maybe...<br />
<br />
== Add samba shares manually ==<br />
During original configuration, I added samba shares manually.<br />
sudo emacs /etc/fstab # and paste samba stanza from another machine<br />
sudo emacs /root/samba_credentials<br />
sudo mkdir /spiceflow && sudo chmod 777 /spiceflow<br />
🌐 m@melange [~] mkdir /spiceflow/bitpost<br />
🌐 m@melange [~] mkdir /spiceflow/grim<br />
🌐 m@melange [~] mkdir /spiceflow/mack<br />
🌐 m@melange [~] mkdir /spiceflow/reservoir<br />
🌐 m@melange [~] mkdir /spiceflow/sassy<br />
🌐 m@melange [~] mkdir /spiceflow/safe<br />
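The fstab stanza being pasted looks roughly like this, one line per share (server path and mount options are examples of a typical cifs mount):<br />
//bitpost/bitpost /spiceflow/bitpost cifs credentials=/root/samba_credentials,uid=m,gid=m,iocharset=utf8 0 0<br />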
<br />
Now you can mount em up and hang em high!<br />
<br />
== Old VMs ==<br />
[[Mongolden]]</div>Mhttps://bitpost.com/w/index.php?title=Melange_history&diff=7471Melange history2024-03-24T20:24:47Z<p>M: /* GRIM DEATH */</p>
<hr />
<div><br />
== GRIM DEATH ==<br />
<br />
2024/03/23 We are harvesting the hive grim drives for use as melange drives for VM storate.<br />
<br />
I changed the location of the hive System Dataset Pool from grim to safe.<br />
<br />
I copied all grim data to safe (oops, i forgot SharedDownloads... it's empty now...)<br />
<br />
I removed the 'grim' pool from FreeNAS.<br />
<br />
Now I need to move the drives! I want to keep the PCI card as pass-thru, but the two grim drives are on it.<br />
* make note of all hive drive assignments<br />
* open melange<br />
** remove both grim drives from the PCI passthru<br />
** move the one mine drive that is on SATA from SATA to one of the PCI passthroughs<br />
** move one safe drive from SATA to the other of the PCI passthroughs<br />
** add both grim drives to SATA<br />
* close and restart melange and see if you can reconnect everything<br />
** the grim drives should now show up on melange, not passed through<br />
** the safe and mine drives should show up passed through, but perhaps hive cannot associate them; if not, try to fix<br />
** if not, RESET/KEEP GOING brother!<br />
<br />
Let's go...<br />
<br />
First, capture everything...<br />
<br />
SATA DRIVES:<br />
<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sda 8:0 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01 /dev/disk/by-id/wwn-0x500a0751e5a2fb01<br />
sdb 8:16 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdc 8:32 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdd 8:48 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sde 8:64 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdf 8:80 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131 /dev/disk/by-id/wwn-0x500a0751e5a2e131<br />
sdg 8:96 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A /dev/disk/by-id/wwn-0x500a0751e5a3009a<br />
sdh 8:112 1 7.3T 0 disk /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H /dev/disk/by-id/wwn-0x5002538f33710e1d<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E /dev/disk/by-id/nvme-eui.002538510141169d<br />
<br />
<br />
104 PASSTHRUS:<br />
<br />
🌐 m@melange [~] sudo cat /etc/pve/qemu-server/104.conf<br />
boot: order=scsi0;net0<br />
cores: 4<br />
hostpci0: 0a:00.0,rombar=0<br />
memory: 24576<br />
name: hive<br />
net0: virtio=DA:E8:DA:81:EC:64,bridge=vmbr0,firewall=1<br />
numa: 0<br />
onboot: 1<br />
ostype: l26<br />
scsi0: local-lvm:vm-104-disk-0,size=25G<br />
scsi11: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56,size=976762584K,backup=no,serial=2122E5A2FD56<br />
scsi12: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01,size=976762584K,backup=no,serial=2122E5A2FB01<br />
scsi13: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2,size=976762584K,backup=no,serial=2122E5A313D2<br />
scsi14: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6,size=976762584K,backup=no,serial=2122E5A313D6<br />
scsi15: /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B,size=976762584K,backup=no,serial=2117E59AAE1B<br />
scsi16: /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131,size=976762584K,backup=no,serial=2121E5A2E131<br />
scsi17: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A,size=976762584K,backup=no,serial=2122E5A3009A<br />
scsi18: /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H,backup=0,serial=S5VUNJ0W706320H,size=7814026584K<br />
scsihw: virtio-scsi-pci<br />
smbios1: uuid=dc3d077c-0063-41fe-abde-a97674d14dc8<br />
sockets: 1<br />
startup: order=1,up=90<br />
vmgenid: 1dcb048d-122f-4343-ad87-1a01ee8284a6<br />
<br />
PRE-MOVE DRIVES:<br />
<br />
* 7 1tb safe drives on SATA<br />
* 1 8tb mine drive on SATA<br />
* 2 4tb grim drives on PCI<br />
* 6 8tb mine drives on PCI<br />
<br />
PRE-MOVE hive:<br />
da7 2122E5A3009A 931.51 GiB safe<br />
da8 S5VUNJ0W706320H 7.28 TiB mine<br />
da11 S5B0NW0NB01796J 3.64 TiB N/A<br />
da12 S4CXNF0M307721X 3.64 TiB N/A<br />
<br />
POST-MOVE DRIVES:<br />
<br />
* 6 1tb safe drives on SATA<br />
* 2 8tb mine drives on SATA<br />
* 1 1tb safe drive on SATA<br />
* 7 8tb mine drives on PCI<br />
<br />
=== STEPS THAT MAY NEED REVERSAL ===<br />
<br />
Do i need to adjust hive or melange before opening case? I guess i could remove the grim sata passthru... AND the SAFE i'm going to move, too, it will no longer pass through (we will be using the PCI card passthru for it).<br />
<br />
* shut down VMS, then hive (but not melange)<br />
* remove two drives from SATA passthru, first is 1 from SAFE (moving to PCI card) and second is 1 from MINE<br />
<br />
scsi16: /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131,size=976762584K,backup=no,serial=2121E5A2E131<br />
<br />
^^^ SWITCHING TO SWAPPING THIS ONE, it is easier to access for swapping<br />
<br />
scsi17: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A,size=976762584K,backup=no,serial=2122E5A3009A<br />
<br />
^^ ACTUALLY, NOT THIS ONE, leave it<br />
<br />
scsi18: /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H,backup=0,serial=S5VUNJ0W706320H,size=7814026584K<br />
* shut down melange<br />
* remove two 4GB drives from PCI: S5B0NW0NB01796J, S4CXNF0M307721X<br />
* move the only 8TB from SATA to PCI: S5VUNJ0W706320H<br />
* move 1 1TB from SATA to PCI: 2122E5A3009A<br />
* connect two 4GB drives to SATA: S5B0NW0NB01796J, S4CXNF0M307721X<br />
<br />
WENT GREAT, hive recognized the move without me doing ANYTHING!!<br />
<br />
Now we can set up the ex-grim 4TBs for VM usage, yay.<br />
<br />
== 2023 July full upgrade ==<br />
I need to install a Windows 11 VM. Also have Ubuntu 20.04 machines that should be moved to 22.04. Figured as good a reason as any for a full upgrade of everything.<br />
<br />
* Upgrade bitpost first. Upon reboot, my $*(@ IP changed again. Fuck you google. Spent a while resetting that. Here are the notes (also in my Red Dead RP journal, keepin it real (real accessible when everything's down), lol!):<br />
cast$ ssh bitpost # not bp or bitpost.com, so we get to the LAN resource<br />
sudo su -<br />
stronger_firewall_and_save # internet should now work<br />
# get new IP from whatsmyip<br />
# fix bitpost.com DNS at domains.google.com<br />
# WAIT for propagation.... might as well fix the other DNS records...<br />
sudo service dnsmasq restart<br />
ping bitpost.com # EVENTUALLY this will work! May need to repeat this AND previous step.<br />
* Ask Tom to update E-S DNS to use new IP<br />
* Upgrade abtdev1, then all Ubuntu boxes (glam is toughest), then positronic last, with this pattern:<br />
mh-update-ubuntu # and reboot<br />
sudo do-release-upgrade # best to connect directly, but ssh worked fine too<br />
sudo shutdown -h now # to prep for melange reboot<br />
* Upgrade hive's TrueNAS install, via https://hive CHECK FOR UPDATES, then shut it down<br />
* Update and reboot melange PROXMOX install, via https://melange:8006 Datacenter > melange > Updates<br />
* CHECK EVERYTHING<br />
** proxmox samba share for backups<br />
** samba shares<br />
** at ptl to ensure it can get to positronic<br />
** shitcutter and blogs and wiki and...<br />
** I had a terrible time getting GLAM apache + PHP working again now that Ubuntu uses PHP 8.1; just needed to ENABLE THE MODULE, ffs:<br />
a2enmod php8.1<br />
<br />
== 6.3 > 7.0 ==<br />
<br />
Proxmox uses apt for upgrades.<br />
I followed [https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 this], for the most part.<br />
* Update all VMS<br />
* Shut down all VMS<br />
* Fully update current version's apt packages - this took me from 6.3 to 6.4, a necessary first step.<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Upgrade basic apt sources list from buster to bullseye<br />
sudo sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list<br />
# instructions discuss pve-enterprise but i needed to change pve-no-subscription instead - but same exact steps, otherwise<br />
# ie, leave this commented out, but might as well set to bullseye<br />
# /etc/apt/sources.list.d/pve-enterprise.list<br />
# and update this to bullseye<br />
# /etc/apt/sources.list.d/pve-no-subscription.list<br />
* Perform the full upgrade to bullseye / pm 7<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Reboot<br />
<br />
== Manual restart notes ==<br />
<br />
NOTE: This shouldn't be a problem any more with newer staged order restart.<br />
<br />
One time bandit samba shares don't mount (it comes up too fast perhaps?). So restart them then restart qbt nox:<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
I did another round of `apt update && apt dist-upgrade` without stopping containers and it went fine (with bandit fixup still needed after reboot, tho).<br />
<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
ssh bandit<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
== Add 7 1TB zraid ==<br />
<br />
After adding 7 new 1 TB ssds:<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sdh 8:112 0 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2fb01 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01<br />
sdi 8:128 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdj 8:144 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdk 8:160 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sdl 8:176 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdm 8:192 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2e131 /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131<br />
sdn 8:208 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a3009a /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-eui.002538510141169d /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E<br />
<br />
<br />
Before adding 7 new 1 TB ssds:<br />
<pre><br />
🌐 m@melange [~] ls /dev/<br />
autofs dm-8 i2c-7 net stdin tty28 tty5 ttyS12 ttyS6 vcsu1<br />
block dm-9 i2c-8 null stdout tty29 tty50 ttyS13 ttyS7 vcsu2<br />
btrfs-control dri i2c-9 nvme0 tty tty3 tty51 ttyS14 ttyS8 vcsu3<br />
bus ecryptfs initctl nvme0n1 tty0 tty30 tty52 ttyS15 ttyS9 vcsu4<br />
char fb0 input nvme0n1p1 tty1 tty31 tty53 ttyS16 udmabuf vcsu5<br />
console fd kmsg nvme0n1p2 tty10 tty32 tty54 ttyS17 uhid vcsu6<br />
core full kvm nvme0n1p3 tty11 tty33 tty55 ttyS18 uinput vfio<br />
cpu fuse lightnvm nvram tty12 tty34 tty56 ttyS19 urandom vga_arbiter<br />
cpu_dma_latency gpiochip0 log port tty13 tty35 tty57 ttyS2 userio vhci<br />
cuse hpet loop0 ppp tty14 tty36 tty58 ttyS20 vcs vhost-net<br />
disk hugepages loop1 pps0 tty15 tty37 tty59 ttyS21 vcs1 vhost-vsock<br />
dm-0 hwrng loop2 psaux tty16 tty38 tty6 ttyS22 vcs2 watchdog<br />
dm-1 i2c-0 loop3 ptmx tty17 tty39 tty60 ttyS23 vcs3 watchdog0<br />
dm-10 i2c-1 loop4 ptp0 tty18 tty4 tty61 ttyS24 vcs4 zero<br />
dm-11 i2c-10 loop5 pts tty19 tty40 tty62 ttyS25 vcs5 zfs<br />
dm-12 i2c-11 loop6 pve tty2 tty41 tty63 ttyS26 vcs6<br />
dm-13 i2c-12 loop7 random tty20 tty42 tty7 ttyS27 vcsa<br />
dm-14 i2c-13 loop-control rfkill tty21 tty43 tty8 ttyS28 vcsa1<br />
dm-2 i2c-14 mapper rtc tty22 tty44 tty9 ttyS29 vcsa2<br />
dm-3 i2c-2 mcelog rtc0 tty23 tty45 ttyprintk ttyS3 vcsa3<br />
dm-4 i2c-3 mem shm tty24 tty46 ttyS0 ttyS30 vcsa4<br />
dm-5 i2c-4 mpt2ctl snapshot tty25 tty47 ttyS1 ttyS31 vcsa5<br />
dm-6 i2c-5 mpt3ctl snd tty26 tty48 ttyS10 ttyS4 vcsa6<br />
dm-7 i2c-6 mqueue stderr tty27 tty49 ttyS11 ttyS5 vcsu<br />
<br />
</pre><br />
<br />
== macOS USB passthru failed attempt ==<br />
<br />
That doesn't work on macOS. Tried setting usb mapping via console, following this:<br />
sudo qm monitor 111<br />
qm> info usbhost<br />
qm> quit<br />
sudo qm set 111 -usb1 host=05ac:12a8<br />
<br />
No luck, same result. Reading his remarks on USB forwarding, try resetting machine type:<br />
machine: pc-q35-6.0 (instead of latest, which was 6.2 at time of writing)<br />
remove this from /etc/pve/qemu-server/111.conf: -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off<br />
<br />
Hmm.. perhaps it is a conflict between Nick's usb keyboard config and my usb port selection... try plugging usb into another port and remapping...<br />
<br />
No luck. FFS. Reset to 6.2 and see if we have any luck with hotplug line removed from config... Nope.<br />
<br />
Keep trying permutations... nothing from googling indicates taht this shouldn't just FUCKING WORK...<br />
<br />
Remove this and re-add the hotplug line, on the off chance it shouldn't be used with q35 v6.2:<br />
-global nec-usb-xhci.msi=off<br />
<br />
Nope, that jsut caused a problem with "Springboard", not working on this Mac, or some shit. Re-adding the line...<br />
<br />
Well what now? Google more? <br />
<br />
Update and reboot proxmox and retry... no luck.<br />
<br />
Try changing from blue to light-blue port... the device is mapped so it should be passed through... nope.<br />
<br />
Try [https://raw.githubusercontent.com/vzamora/Proxmox-Cheatsheet/main/General%20Proxmox%20Setup%20and%20Notes/USB%20Passthrough%20Notes.md this guy's approach] to mount an EFI Disk<br />
lsusb<br />
Bus 004 Device 009: ID 05ac:12a8 Apple, Inc. iPhone 5/5C/5S/6/SE<br />
ls -al /dev/bus/usb/004/009<br />
crw-rw-r-- 1 root root 189, 392 Jul 22 16:10 /dev/bus/usb/004/009<br />
sudo emacs /etc/pve/qemu-server/111.conf<br />
lxc.cgroup.devices.allow: c 189:* rwm<br />
lxc.mount.entry: /dev/bus/usb/004 dev/bus/usb/004 none bind,optional,create=dir<br />
<br />
Nope.<br />
<br />
Try mapping the port instead of device ID, from the Proxmox UI... Nope.<br />
<br />
How can i check the apple side for any issues? straight up google for that, macOS not seeing a USB device.<br />
System Information > USB > nada<br />
<br />
hrmphhhh. Never got it working. RE-google next month maybe...<br />
<br />
== Add samba shares manually ==<br />
During original configuration, I added samba shares manually.<br />
sudo emacs /etc/fstab # and paste samba stanza from another machine<br />
sudo emacs /root/samba_credentials<br />
sudo mkdir /spiceflow && sudo chmod 777 /spiceflow<br />
🌐 m@melange [~] mkdir /spiceflow/bitpost<br />
🌐 m@melange [~] mkdir /spiceflow/grim<br />
🌐 m@melange [~] mkdir /spiceflow/mack<br />
🌐 m@melange [~] mkdir /spiceflow/reservoir<br />
🌐 m@melange [~] mkdir /spiceflow/sassy<br />
🌐 m@melange [~] mkdir /spiceflow/safe<br />
<br />
Now you can mount em up and hang em high!<br />
<br />
== Old VMs ==<br />
[[Mongolden]]</div>Mhttps://bitpost.com/w/index.php?title=Melange_history&diff=7470Melange history2024-03-24T20:22:59Z<p>M: /* STEPS THAT MAY NEED REVERSAL */</p>
<hr />
<div><br />
== GRIM DEATH ==<br />
<br />
2024/03/23 We are harvesting the hive grim drives for use as melange drives for VM storate.<br />
<br />
I changed the location of the hive System Dataset Pool from grim to safe.<br />
<br />
I copied all grim data to safe (oops, i forgot SharedDownloads... it's empty now...)<br />
<br />
I removed the 'grim' pool from FreeNAS.<br />
<br />
Now I need to move the drives! I want to keep the PCI card as pass-thru, but the two grim drives are on it.<br />
* make note of all hive drive assignments<br />
* open melange<br />
** remove both grim drives from the PCI passthru<br />
** move the one mine drive that is on SATA from SATA to one of the PCI passthroughs<br />
** move one safe drive from SATA to the other of the PCI passthroughs<br />
** add both grim drives to SATA<br />
* close and restart melange and see if you can reconnect everything<br />
** the grim drives should now show up on melange, not passed through<br />
** the safe and mine drives should show up passed through, but perhaps hive cannot associate them; if not, try to fix<br />
** if not, RESET/KEEP GOING brother!<br />
<br />
Let's go...<br />
<br />
First, capture everything...<br />
<br />
SATA DRIVES:<br />
<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sda 8:0 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01 /dev/disk/by-id/wwn-0x500a0751e5a2fb01<br />
sdb 8:16 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdc 8:32 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdd 8:48 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sde 8:64 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdf 8:80 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131 /dev/disk/by-id/wwn-0x500a0751e5a2e131<br />
sdg 8:96 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A /dev/disk/by-id/wwn-0x500a0751e5a3009a<br />
sdh 8:112 1 7.3T 0 disk /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H /dev/disk/by-id/wwn-0x5002538f33710e1d<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E /dev/disk/by-id/nvme-eui.002538510141169d<br />
<br />
<br />
104 PASSTHRUS:<br />
<br />
🌐 m@melange [~] sudo cat /etc/pve/qemu-server/104.conf<br />
boot: order=scsi0;net0<br />
cores: 4<br />
hostpci0: 0a:00.0,rombar=0<br />
memory: 24576<br />
name: hive<br />
net0: virtio=DA:E8:DA:81:EC:64,bridge=vmbr0,firewall=1<br />
numa: 0<br />
onboot: 1<br />
ostype: l26<br />
scsi0: local-lvm:vm-104-disk-0,size=25G<br />
scsi11: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56,size=976762584K,backup=no,serial=2122E5A2FD56<br />
scsi12: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01,size=976762584K,backup=no,serial=2122E5A2FB01<br />
scsi13: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2,size=976762584K,backup=no,serial=2122E5A313D2<br />
scsi14: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6,size=976762584K,backup=no,serial=2122E5A313D6<br />
scsi15: /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B,size=976762584K,backup=no,serial=2117E59AAE1B<br />
scsi16: /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131,size=976762584K,backup=no,serial=2121E5A2E131<br />
scsi17: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A,size=976762584K,backup=no,serial=2122E5A3009A<br />
scsi18: /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H,backup=0,serial=S5VUNJ0W706320H,size=7814026584K<br />
scsihw: virtio-scsi-pci<br />
smbios1: uuid=dc3d077c-0063-41fe-abde-a97674d14dc8<br />
sockets: 1<br />
startup: order=1,up=90<br />
vmgenid: 1dcb048d-122f-4343-ad87-1a01ee8284a6<br />
<br />
PRE-MOVE DRIVES:<br />
<br />
* 7 1tb safe drives on SATA<br />
* 1 8tb mine drive on SATA<br />
* 2 4tb grim drives on PCI<br />
* 6 8tb mine drives on PCI<br />
<br />
PRE-MOVE hive:<br />
da7 2122E5A3009A 931.51 GiB safe<br />
da8 S5VUNJ0W706320H 7.28 TiB mine<br />
da11 S5B0NW0NB01796J 3.64 TiB N/A<br />
da12 S4CXNF0M307721X 3.64 TiB N/A<br />
<br />
POST-MOVE DRIVES:<br />
<br />
* 6 1tb safe drives on SATA<br />
* 2 8tb mine drives on SATA<br />
* 1 1tb safe drive on SATA<br />
* 7 8tb mine drives on PCI<br />
<br />
=== STEPS THAT MAY NEED REVERSAL ===<br />
<br />
Do i need to adjust hive or melange before opening case? I guess i could remove the grim sata passthru... AND the SAFE i'm going to move, too, it will no longer pass through (we will be using the PCI card passthru for it).<br />
<br />
* shut down VMS, then hive (but not melange)<br />
* remove two drives from SATA passthru, first is 1 from SAFE (moving to PCI card) and second is 1 from MINE<br />
<br />
<br />
scsi17: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A,size=976762584K,backup=no,serial=2122E5A3009A<br />
^^ ACTUALLY, NOT THIS ONE, leave it<br />
<br />
scsi18: /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H,backup=0,serial=S5VUNJ0W706320H,size=7814026584K<br />
* shut down melange<br />
<br />
* remove two 4GB drives from PCI: S5B0NW0NB01796J, S4CXNF0M307721X<br />
* move the only 8TB from SATA to PCI: S5VUNJ0W706320H<br />
* move 1 1TB from SATA to PCI: 2122E5A3009A<br />
* connect two 4GB drives to SATA: S5B0NW0NB01796J, S4CXNF0M307721X<br />
<br />
== 2023 July full upgrade ==<br />
I need to install a Windows 11 VM. Also have Ubuntu 20.04 machines that should be moved to 22.04. Figured as good a reason as any for a full upgrade of everything.<br />
<br />
* Upgrade bitpost first. Upon reboot, my $*(@ IP changed again. Fuck you google. Spent a while resetting that. Here are the notes (also in my Red Dead RP journal, keepin it real (real accessible when everything's down), lol!):<br />
cast$ ssh bitpost # not bp or bitpost.com, so we get to the LAN resource<br />
sudo su -<br />
stronger_firewall_and_save # internet should now work<br />
# get new IP from whatsmyip<br />
# fix bitpost.com DNS at domains.google.com<br />
# WAIT for propagation.... might as well fix the other DNS records...<br />
sudo service dnsmasq restart<br />
ping bitpost.com # EVENTUALLY this will work! May need to repeat this AND previous step.<br />
* Ask Tom to update E-S DNS to use new IP<br />
* Upgrade abtdev1, then all Ubuntu boxes (glam is toughest), then positronic last, with this pattern:<br />
mh-update-ubuntu # and reboot<br />
sudo do-release-upgrade # best to connect directly, but ssh worked fine too<br />
sudo shutdown -h now # to prep for melange reboot<br />
* Upgrade hive's TrueNAS install, via https://hive CHECK FOR UPDATES, then shut it down<br />
* Update and reboot melange PROXMOX install, via https://melange:8006 Datacenter > melange > Updates<br />
* CHECK EVERYTHING<br />
** proxmox samba share for backups<br />
** samba shares<br />
** at ptl to ensure it can get to positronic<br />
** shitcutter and blogs and wiki and...<br />
** I had a terrible time getting GLAM apache + PHP working again now that Ubuntu uses PHP 8.1; just needed to ENABLE THE MODULE, ffs:<br />
a2enmod php8.1<br />
<br />
== 6.3 > 7.0 ==<br />
<br />
Proxmox uses apt for upgrades.<br />
I followed [https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 this], for the most part.<br />
* Update all VMS<br />
* Shut down all VMS<br />
* Fully update current version's apt packages - this took me from 6.3 to 6.4, a necessary first step.<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Upgrade basic apt sources list from buster to bullseye<br />
sudo sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list<br />
# instructions discuss pve-enterprise but i needed to change pve-no-subscription instead - but same exact steps, otherwise<br />
# ie, leave this commented out, but might as well set to bullseye<br />
# /etc/apt/sources.list.d/pve-enterprise.list<br />
# and update this to bullseye<br />
# /etc/apt/sources.list.d/pve-no-subscription.list<br />
* Perform the full upgrade to bullseye / pm 7<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Reboot<br />
<br />
== Manual restart notes ==<br />
<br />
NOTE: This shouldn't be a problem any more with newer staged order restart.<br />
<br />
One time bandit samba shares don't mount (it comes up too fast perhaps?). So restart them then restart qbt nox:<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
I did another round of `apt update && apt dist-upgrade` without stopping containers and it went fine (with bandit fixup still needed after reboot, tho).<br />
<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
ssh bandit<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
== Add 7 1TB zraid ==<br />
<br />
After adding 7 new 1 TB ssds:<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sdh 8:112 0 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2fb01 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01<br />
sdi 8:128 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdj 8:144 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdk 8:160 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sdl 8:176 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdm 8:192 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2e131 /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131<br />
sdn 8:208 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a3009a /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-eui.002538510141169d /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E<br />
<br />
<br />
Before adding 7 new 1 TB ssds:<br />
<pre><br />
🌐 m@melange [~] ls /dev/<br />
autofs dm-8 i2c-7 net stdin tty28 tty5 ttyS12 ttyS6 vcsu1<br />
block dm-9 i2c-8 null stdout tty29 tty50 ttyS13 ttyS7 vcsu2<br />
btrfs-control dri i2c-9 nvme0 tty tty3 tty51 ttyS14 ttyS8 vcsu3<br />
bus ecryptfs initctl nvme0n1 tty0 tty30 tty52 ttyS15 ttyS9 vcsu4<br />
char fb0 input nvme0n1p1 tty1 tty31 tty53 ttyS16 udmabuf vcsu5<br />
console fd kmsg nvme0n1p2 tty10 tty32 tty54 ttyS17 uhid vcsu6<br />
core full kvm nvme0n1p3 tty11 tty33 tty55 ttyS18 uinput vfio<br />
cpu fuse lightnvm nvram tty12 tty34 tty56 ttyS19 urandom vga_arbiter<br />
cpu_dma_latency gpiochip0 log port tty13 tty35 tty57 ttyS2 userio vhci<br />
cuse hpet loop0 ppp tty14 tty36 tty58 ttyS20 vcs vhost-net<br />
disk hugepages loop1 pps0 tty15 tty37 tty59 ttyS21 vcs1 vhost-vsock<br />
dm-0 hwrng loop2 psaux tty16 tty38 tty6 ttyS22 vcs2 watchdog<br />
dm-1 i2c-0 loop3 ptmx tty17 tty39 tty60 ttyS23 vcs3 watchdog0<br />
dm-10 i2c-1 loop4 ptp0 tty18 tty4 tty61 ttyS24 vcs4 zero<br />
dm-11 i2c-10 loop5 pts tty19 tty40 tty62 ttyS25 vcs5 zfs<br />
dm-12 i2c-11 loop6 pve tty2 tty41 tty63 ttyS26 vcs6<br />
dm-13 i2c-12 loop7 random tty20 tty42 tty7 ttyS27 vcsa<br />
dm-14 i2c-13 loop-control rfkill tty21 tty43 tty8 ttyS28 vcsa1<br />
dm-2 i2c-14 mapper rtc tty22 tty44 tty9 ttyS29 vcsa2<br />
dm-3 i2c-2 mcelog rtc0 tty23 tty45 ttyprintk ttyS3 vcsa3<br />
dm-4 i2c-3 mem shm tty24 tty46 ttyS0 ttyS30 vcsa4<br />
dm-5 i2c-4 mpt2ctl snapshot tty25 tty47 ttyS1 ttyS31 vcsa5<br />
dm-6 i2c-5 mpt3ctl snd tty26 tty48 ttyS10 ttyS4 vcsa6<br />
dm-7 i2c-6 mqueue stderr tty27 tty49 ttyS11 ttyS5 vcsu<br />
<br />
</pre><br />
<br />
== macOS USB passthru failed attempt ==<br />
<br />
That doesn't work on macOS. Tried setting usb mapping via console, following this:<br />
sudo qm monitor 111<br />
qm> info usbhost<br />
qm> quit<br />
sudo qm set 111 -usb1 host=05ac:12a8<br />
<br />
No luck, same result. Reading his remarks on USB forwarding, try resetting machine type:<br />
machine: pc-q35-6.0 (instead of latest, which was 6.2 at time of writing)<br />
remove this from /etc/pve/qemu-server/111.conf: -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off<br />
<br />
Hmm.. perhaps it is a conflict between Nick's usb keyboard config and my usb port selection... try plugging usb into another port and remapping...<br />
<br />
No luck. FFS. Reset to 6.2 and see if we have any luck with hotplug line removed from config... Nope.<br />
<br />
Keep trying permutations... nothing from googling indicates taht this shouldn't just FUCKING WORK...<br />
<br />
Remove this and re-add the hotplug line, on the off chance it shouldn't be used with q35 v6.2:<br />
-global nec-usb-xhci.msi=off<br />
<br />
Nope, that jsut caused a problem with "Springboard", not working on this Mac, or some shit. Re-adding the line...<br />
<br />
Well what now? Google more? <br />
<br />
Update and reboot proxmox and retry... no luck.<br />
<br />
Try changing from blue to light-blue port... the device is mapped so it should be passed through... nope.<br />
<br />
Try [https://raw.githubusercontent.com/vzamora/Proxmox-Cheatsheet/main/General%20Proxmox%20Setup%20and%20Notes/USB%20Passthrough%20Notes.md this guy's approach] to mount an EFI Disk<br />
lsusb<br />
Bus 004 Device 009: ID 05ac:12a8 Apple, Inc. iPhone 5/5C/5S/6/SE<br />
ls -al /dev/bus/usb/004/009<br />
crw-rw-r-- 1 root root 189, 392 Jul 22 16:10 /dev/bus/usb/004/009<br />
sudo emacs /etc/pve/qemu-server/111.conf<br />
lxc.cgroup.devices.allow: c 189:* rwm<br />
lxc.mount.entry: /dev/bus/usb/004 dev/bus/usb/004 none bind,optional,create=dir<br />
<br />
Nope.<br />
<br />
Try mapping the port instead of device ID, from the Proxmox UI... Nope.<br />
<br />
How can i check the apple side for any issues? straight up google for that, macOS not seeing a USB device.<br />
System Information > USB > nada<br />
<br />
hrmphhhh. Never got it working. RE-google next month maybe...<br />
<br />
== Add samba shares manually ==<br />
During original configuration, I added samba shares manually.<br />
sudo emacs /etc/fstab # and paste samba stanza from another machine<br />
sudo emacs /root/samba_credentials<br />
sudo mkdir /spiceflow && sudo chmod 777 /spiceflow<br />
🌐 m@melange [~] mkdir /spiceflow/bitpost<br />
🌐 m@melange [~] mkdir /spiceflow/grim<br />
🌐 m@melange [~] mkdir /spiceflow/mack<br />
🌐 m@melange [~] mkdir /spiceflow/reservoir<br />
🌐 m@melange [~] mkdir /spiceflow/sassy<br />
🌐 m@melange [~] mkdir /spiceflow/safe<br />
<br />
Now you can mount 'em up and hang 'em high!<br />
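<br />
For reference, the stanza has roughly this shape (a sketch only — the server name, share, and mount options here are illustrative assumptions; copy the real line as noted above):<br />
 # illustrative /etc/fstab entry; adapt names and options to the real stanza<br />
 //hive/safe /spiceflow/safe cifs credentials=/root/samba_credentials,uid=m,gid=m,iocharset=utf8 0 0<br />
Then sudo mount -a picks them all up.<br />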
<br />
== Old VMs ==<br />
[[Mongolden]]</div>Mhttps://bitpost.com/w/index.php?title=Melange_history&diff=7469Melange history2024-03-24T18:56:49Z<p>M: /* GRIM DEATH */</p>
<hr />
<div><br />
== GRIM DEATH ==<br />
<br />
2024/03/23 We are harvesting the hive grim drives for use as melange drives for VM storage.<br />
<br />
I changed the location of the hive System Dataset Pool from grim to safe.<br />
<br />
I copied all grim data to safe (oops, I forgot SharedDownloads... it's empty now...)<br />
<br />
I removed the 'grim' pool from FreeNAS.<br />
<br />
Now I need to move the drives! I want to keep the PCI card as pass-thru, but the two grim drives are on it.<br />
* make note of all hive drive assignments<br />
* open melange<br />
** remove both grim drives from the PCI passthru<br />
** move the one mine drive that is on SATA from SATA to one of the PCI passthroughs<br />
** move one safe drive from SATA to the other of the PCI passthroughs<br />
** add both grim drives to SATA<br />
* close and restart melange and see if you can reconnect everything<br />
** the grim drives should now show up on melange, not passed through<br />
** the safe and mine drives should show up passed through, but perhaps hive cannot associate them; if not, try to fix<br />
** if not, RESET/KEEP GOING brother!<br />
<br />
Let's go...<br />
<br />
First, capture everything...<br />
<br />
SATA DRIVES:<br />
<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sda 8:0 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01 /dev/disk/by-id/wwn-0x500a0751e5a2fb01<br />
sdb 8:16 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdc 8:32 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdd 8:48 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sde 8:64 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdf 8:80 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131 /dev/disk/by-id/wwn-0x500a0751e5a2e131<br />
sdg 8:96 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A /dev/disk/by-id/wwn-0x500a0751e5a3009a<br />
sdh 8:112 1 7.3T 0 disk /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H /dev/disk/by-id/wwn-0x5002538f33710e1d<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E /dev/disk/by-id/nvme-eui.002538510141169d<br />
<br />
<br />
104 PASSTHRUS:<br />
<br />
🌐 m@melange [~] sudo cat /etc/pve/qemu-server/104.conf<br />
boot: order=scsi0;net0<br />
cores: 4<br />
hostpci0: 0a:00.0,rombar=0<br />
memory: 24576<br />
name: hive<br />
net0: virtio=DA:E8:DA:81:EC:64,bridge=vmbr0,firewall=1<br />
numa: 0<br />
onboot: 1<br />
ostype: l26<br />
scsi0: local-lvm:vm-104-disk-0,size=25G<br />
scsi11: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56,size=976762584K,backup=no,serial=2122E5A2FD56<br />
scsi12: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01,size=976762584K,backup=no,serial=2122E5A2FB01<br />
scsi13: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2,size=976762584K,backup=no,serial=2122E5A313D2<br />
scsi14: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6,size=976762584K,backup=no,serial=2122E5A313D6<br />
scsi15: /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B,size=976762584K,backup=no,serial=2117E59AAE1B<br />
scsi16: /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131,size=976762584K,backup=no,serial=2121E5A2E131<br />
scsi17: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A,size=976762584K,backup=no,serial=2122E5A3009A<br />
scsi18: /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H,backup=0,serial=S5VUNJ0W706320H,size=7814026584K<br />
scsihw: virtio-scsi-pci<br />
smbios1: uuid=dc3d077c-0063-41fe-abde-a97674d14dc8<br />
sockets: 1<br />
startup: order=1,up=90<br />
vmgenid: 1dcb048d-122f-4343-ad87-1a01ee8284a6<br />
<br />
PRE-MOVE DRIVES:<br />
<br />
* 7 1TB safe drives on SATA<br />
* 1 8TB mine drive on SATA<br />
* 2 4TB grim drives on PCI<br />
* 6 8TB mine drives on PCI<br />
<br />
PRE-MOVE hive:<br />
da7 2122E5A3009A 931.51 GiB safe<br />
da8 S5VUNJ0W706320H 7.28 TiB mine<br />
da11 S5B0NW0NB01796J 3.64 TiB N/A<br />
da12 S4CXNF0M307721X 3.64 TiB N/A<br />
<br />
POST-MOVE DRIVES:<br />
<br />
* 6 1TB safe drives on SATA<br />
* 2 4TB grim drives on SATA<br />
* 1 1TB safe drive on PCI<br />
* 7 8TB mine drives on PCI<br />
<br />
=== STEPS THAT MAY NEED REVERSAL ===<br />
<br />
Do I need to adjust hive or melange before opening the case? I guess I could remove the grim SATA passthru... AND the SAFE drive I'm going to move, too; it will no longer pass through (we will be using the PCI card passthru for it).<br />
<br />
* shut down VMS, then hive (but not melange)<br />
* remove two drives from the SATA passthru: one from SAFE (moving to the PCI card) and one from MINE<br />
scsi17: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A,size=976762584K,backup=no,serial=2122E5A3009A<br />
scsi18: /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H,backup=0,serial=S5VUNJ0W706320H,size=7814026584K<br />
* shut down melange<br />
<br />
* remove two 4TB drives from PCI: S5B0NW0NB01796J, S4CXNF0M307721X<br />
* move the only 8TB from SATA to PCI: S5VUNJ0W706320H<br />
* move 1 1TB from SATA to PCI: 2122E5A3009A<br />
* connect two 4TB drives to SATA: S5B0NW0NB01796J, S4CXNF0M307721X<br />
<br />
<br />
* remove 4TB grim from PCI passthru <br />
<br />
* move 1TB 2122E5A3009A (safe) to PCI<br />
<br />
== 2023 July full upgrade ==<br />
I need to install a Windows 11 VM. Also have Ubuntu 20.04 machines that should be moved to 22.04. Figured as good a reason as any for a full upgrade of everything.<br />
<br />
* Upgrade bitpost first. Upon reboot, my $*(@ IP changed again. Fuck you google. Spent a while resetting that. Here are the notes (also in my Red Dead RP journal, keepin it real (real accessible when everything's down), lol!):<br />
cast$ ssh bitpost # not bp or bitpost.com, so we get to the LAN resource<br />
sudo su -<br />
stronger_firewall_and_save # internet should now work<br />
# get new IP from whatsmyip<br />
# fix bitpost.com DNS at domains.google.com<br />
# WAIT for propagation.... might as well fix the other DNS records...<br />
sudo service dnsmasq restart<br />
ping bitpost.com # EVENTUALLY this will work! May need to repeat this AND previous step.<br />
* Ask Tom to update E-S DNS to use new IP<br />
* Upgrade abtdev1, then all Ubuntu boxes (glam is toughest), then positronic last, with this pattern:<br />
mh-update-ubuntu # and reboot<br />
sudo do-release-upgrade # best to connect directly, but ssh worked fine too<br />
sudo shutdown -h now # to prep for melange reboot<br />
* Upgrade hive's TrueNAS install, via https://hive CHECK FOR UPDATES, then shut it down<br />
* Update and reboot melange PROXMOX install, via https://melange:8006 Datacenter > melange > Updates<br />
* CHECK EVERYTHING<br />
** proxmox samba share for backups<br />
** samba shares<br />
** at ptl to ensure it can get to positronic<br />
** shitcutter and blogs and wiki and...<br />
** I had a terrible time getting GLAM apache + PHP working again now that Ubuntu uses PHP 8.1; just needed to ENABLE THE MODULE, ffs:<br />
a2enmod php8.1<br />
<br />
== 6.3 > 7.0 ==<br />
<br />
Proxmox uses apt for upgrades.<br />
I followed [https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 this], for the most part.<br />
* Update all VMs<br />
* Shut down all VMs<br />
* Fully update current version's apt packages - this took me from 6.3 to 6.4, a necessary first step.<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Upgrade basic apt sources list from buster to bullseye<br />
sudo sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list<br />
# instructions discuss pve-enterprise but I needed to change pve-no-subscription instead - exact same steps, otherwise<br />
# i.e., leave this commented out, but might as well set it to bullseye<br />
# /etc/apt/sources.list.d/pve-enterprise.list<br />
# and update this one to bullseye<br />
# /etc/apt/sources.list.d/pve-no-subscription.list<br />
* Perform the full upgrade to bullseye / PVE 7<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Reboot<br />
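<br />
Worth noting: the linked wiki page also provides a readiness checker script; running it before (and re-running it after) the dist-upgrade is a good sanity check, though it isn't captured in the notes above:<br />
 sudo pve6to7 --full<br />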
<br />
== Manual restart notes ==<br />
<br />
NOTE: This shouldn't be a problem anymore with the newer staged startup order.<br />
<br />
One time the bandit samba shares didn't mount (it comes up too fast, perhaps?). So restart them, then restart qbittorrent-nox:<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
I did another round of `apt update && apt dist-upgrade` without stopping containers and it went fine (with bandit fixup still needed after reboot, tho).<br />
<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
ssh bandit<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
== Add 7 1TB zraid ==<br />
<br />
After adding 7 new 1 TB ssds:<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sdh 8:112 0 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2fb01 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01<br />
sdi 8:128 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdj 8:144 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdk 8:160 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sdl 8:176 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdm 8:192 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2e131 /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131<br />
sdn 8:208 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a3009a /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-eui.002538510141169d /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E<br />
<br />
<br />
Before adding 7 new 1 TB ssds:<br />
<pre><br />
🌐 m@melange [~] ls /dev/<br />
autofs dm-8 i2c-7 net stdin tty28 tty5 ttyS12 ttyS6 vcsu1<br />
block dm-9 i2c-8 null stdout tty29 tty50 ttyS13 ttyS7 vcsu2<br />
btrfs-control dri i2c-9 nvme0 tty tty3 tty51 ttyS14 ttyS8 vcsu3<br />
bus ecryptfs initctl nvme0n1 tty0 tty30 tty52 ttyS15 ttyS9 vcsu4<br />
char fb0 input nvme0n1p1 tty1 tty31 tty53 ttyS16 udmabuf vcsu5<br />
console fd kmsg nvme0n1p2 tty10 tty32 tty54 ttyS17 uhid vcsu6<br />
core full kvm nvme0n1p3 tty11 tty33 tty55 ttyS18 uinput vfio<br />
cpu fuse lightnvm nvram tty12 tty34 tty56 ttyS19 urandom vga_arbiter<br />
cpu_dma_latency gpiochip0 log port tty13 tty35 tty57 ttyS2 userio vhci<br />
cuse hpet loop0 ppp tty14 tty36 tty58 ttyS20 vcs vhost-net<br />
disk hugepages loop1 pps0 tty15 tty37 tty59 ttyS21 vcs1 vhost-vsock<br />
dm-0 hwrng loop2 psaux tty16 tty38 tty6 ttyS22 vcs2 watchdog<br />
dm-1 i2c-0 loop3 ptmx tty17 tty39 tty60 ttyS23 vcs3 watchdog0<br />
dm-10 i2c-1 loop4 ptp0 tty18 tty4 tty61 ttyS24 vcs4 zero<br />
dm-11 i2c-10 loop5 pts tty19 tty40 tty62 ttyS25 vcs5 zfs<br />
dm-12 i2c-11 loop6 pve tty2 tty41 tty63 ttyS26 vcs6<br />
dm-13 i2c-12 loop7 random tty20 tty42 tty7 ttyS27 vcsa<br />
dm-14 i2c-13 loop-control rfkill tty21 tty43 tty8 ttyS28 vcsa1<br />
dm-2 i2c-14 mapper rtc tty22 tty44 tty9 ttyS29 vcsa2<br />
dm-3 i2c-2 mcelog rtc0 tty23 tty45 ttyprintk ttyS3 vcsa3<br />
dm-4 i2c-3 mem shm tty24 tty46 ttyS0 ttyS30 vcsa4<br />
dm-5 i2c-4 mpt2ctl snapshot tty25 tty47 ttyS1 ttyS31 vcsa5<br />
dm-6 i2c-5 mpt3ctl snd tty26 tty48 ttyS10 ttyS4 vcsa6<br />
dm-7 i2c-6 mqueue stderr tty27 tty49 ttyS11 ttyS5 vcsu<br />
<br />
</pre><br />
<br />
== macOS USB passthru failed attempt ==<br />
<br />
Simple USB passthrough (via the Proxmox UI) doesn't work on macOS guests. Tried setting the USB mapping via the console, following this:<br />
sudo qm monitor 111<br />
qm> info usbhost<br />
qm> quit<br />
sudo qm set 111 -usb1 host=05ac:12a8<br />
<br />
No luck, same result. Reading Nick's remarks on USB forwarding, try resetting the machine type:<br />
machine: pc-q35-6.0 (instead of latest, which was 6.2 at time of writing)<br />
remove this from /etc/pve/qemu-server/111.conf: -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off<br />
<br />
Hmm.. perhaps it is a conflict between Nick's usb keyboard config and my usb port selection... try plugging usb into another port and remapping...<br />
<br />
No luck. FFS. Reset to 6.2 and see if we have any luck with hotplug line removed from config... Nope.<br />
<br />
Keep trying permutations... nothing from googling indicates that this shouldn't just FUCKING WORK...<br />
<br />
Remove this and re-add the hotplug line, on the off chance it shouldn't be used with q35 v6.2:<br />
-global nec-usb-xhci.msi=off<br />
<br />
Nope, that just caused a problem with "SpringBoard" not working on this Mac, or some shit. Re-adding the line...<br />
<br />
Well what now? Google more? <br />
<br />
Update and reboot proxmox and retry... no luck.<br />
<br />
Try changing from blue to light-blue port... the device is mapped so it should be passed through... nope.<br />
<br />
Try [https://raw.githubusercontent.com/vzamora/Proxmox-Cheatsheet/main/General%20Proxmox%20Setup%20and%20Notes/USB%20Passthrough%20Notes.md this guy's approach] to mount an EFI Disk<br />
lsusb<br />
Bus 004 Device 009: ID 05ac:12a8 Apple, Inc. iPhone 5/5C/5S/6/SE<br />
ls -al /dev/bus/usb/004/009<br />
crw-rw-r-- 1 root root 189, 392 Jul 22 16:10 /dev/bus/usb/004/009<br />
sudo emacs /etc/pve/qemu-server/111.conf<br />
lxc.cgroup.devices.allow: c 189:* rwm<br />
lxc.mount.entry: /dev/bus/usb/004 dev/bus/usb/004 none bind,optional,create=dir<br />
<br />
Nope.<br />
<br />
Try mapping the port instead of device ID, from the Proxmox UI... Nope.<br />
<br />
How can I check the Apple side for any issues? Straight-up google for that: macOS not seeing a USB device.<br />
System Information > USB > nada<br />
<br />
Hrmphhhh. Never got it working. Re-google next month maybe...<br />
<br />
== Add samba shares manually ==<br />
During original configuration, I added samba shares manually.<br />
sudo emacs /etc/fstab # and paste samba stanza from another machine<br />
sudo emacs /root/samba_credentials<br />
sudo mkdir /spiceflow && sudo chmod 777 /spiceflow<br />
🌐 m@melange [~] mkdir /spiceflow/bitpost<br />
🌐 m@melange [~] mkdir /spiceflow/grim<br />
🌐 m@melange [~] mkdir /spiceflow/mack<br />
🌐 m@melange [~] mkdir /spiceflow/reservoir<br />
🌐 m@melange [~] mkdir /spiceflow/sassy<br />
🌐 m@melange [~] mkdir /spiceflow/safe<br />
<br />
Now you can mount 'em up and hang 'em high!<br />
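<br />
For reference, the stanza has roughly this shape (a sketch only — the server name, share, and mount options here are illustrative assumptions; copy the real line as noted above):<br />
 # illustrative /etc/fstab entry; adapt names and options to the real stanza<br />
 //hive/safe /spiceflow/safe cifs credentials=/root/samba_credentials,uid=m,gid=m,iocharset=utf8 0 0<br />
Then sudo mount -a picks them all up.<br />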
<br />
== Old VMs ==<br />
[[Mongolden]]</div>Mhttps://bitpost.com/w/index.php?title=Melange_history&diff=7468Melange history2024-03-23T20:57:48Z<p>M: /* GRIM DEATH */</p>
<hr />
<div><br />
== GRIM DEATH ==<br />
<br />
2024/03/23 We are harvesting the hive grim drives for use as melange drives for VM storage.<br />
<br />
I changed the location of the hive System Dataset Pool from grim to safe.<br />
<br />
I copied all grim data to safe (oops, I forgot SharedDownloads... it's empty now...)<br />
<br />
I removed the 'grim' pool from FreeNAS.<br />
<br />
Now I need to move the drives! I want to keep the PCI card as pass-thru, but the two grim drives are on it.<br />
* make note of all hive drive assignments<br />
* open melange<br />
** remove both grim drives from the PCI passthru<br />
** move the one mine drive that is on SATA from SATA to one of the PCI passthroughs<br />
** move one safe drive from SATA to the other of the PCI passthroughs<br />
** add both grim drives to SATA<br />
* close and restart melange and see if you can reconnect everything<br />
** the grim drives should now show up on melange, not passed through<br />
** the safe and mine drives should show up passed through, but perhaps hive cannot associate them; if not, try to fix<br />
** if not, RESET/KEEP GOING brother!<br />
<br />
Let's go...<br />
<br />
First, capture everything...<br />
<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sda 8:0 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01 /dev/disk/by-id/wwn-0x500a0751e5a2fb01<br />
sdb 8:16 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdc 8:32 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdd 8:48 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sde 8:64 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdf 8:80 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131 /dev/disk/by-id/wwn-0x500a0751e5a2e131<br />
sdg 8:96 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A /dev/disk/by-id/wwn-0x500a0751e5a3009a<br />
sdh 8:112 1 7.3T 0 disk /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H /dev/disk/by-id/wwn-0x5002538f33710e1d<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E /dev/disk/by-id/nvme-eui.002538510141169d<br />
<br />
🌐 m@melange [~] sudo cat /etc/pve/qemu-server/104.conf<br />
boot: order=scsi0;net0<br />
cores: 4<br />
hostpci0: 0a:00.0,rombar=0<br />
memory: 24576<br />
name: hive<br />
net0: virtio=DA:E8:DA:81:EC:64,bridge=vmbr0,firewall=1<br />
numa: 0<br />
onboot: 1<br />
ostype: l26<br />
scsi0: local-lvm:vm-104-disk-0,size=25G<br />
scsi11: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56,size=976762584K,backup=no,serial=2122E5A2FD56<br />
scsi12: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01,size=976762584K,backup=no,serial=2122E5A2FB01<br />
scsi13: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2,size=976762584K,backup=no,serial=2122E5A313D2<br />
scsi14: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6,size=976762584K,backup=no,serial=2122E5A313D6<br />
scsi15: /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B,size=976762584K,backup=no,serial=2117E59AAE1B<br />
scsi16: /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131,size=976762584K,backup=no,serial=2121E5A2E131<br />
scsi17: /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A,size=976762584K,backup=no,serial=2122E5A3009A<br />
scsi18: /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H,backup=0,serial=S5VUNJ0W706320H,size=7814026584K<br />
scsihw: virtio-scsi-pci<br />
smbios1: uuid=dc3d077c-0063-41fe-abde-a97674d14dc8<br />
sockets: 1<br />
startup: order=1,up=90<br />
vmgenid: 1dcb048d-122f-4343-ad87-1a01ee8284a6<br />
<br />
== 2023 July full upgrade ==<br />
I need to install a Windows 11 VM. Also have Ubuntu 20.04 machines that should be moved to 22.04. Figured as good a reason as any for a full upgrade of everything.<br />
<br />
* Upgrade bitpost first. Upon reboot, my $*(@ IP changed again. Fuck you google. Spent a while resetting that. Here are the notes (also in my Red Dead RP journal, keepin it real (real accessible when everything's down), lol!):<br />
cast$ ssh bitpost # not bp or bitpost.com, so we get to the LAN resource<br />
sudo su -<br />
stronger_firewall_and_save # internet should now work<br />
# get new IP from whatsmyip<br />
# fix bitpost.com DNS at domains.google.com<br />
# WAIT for propagation.... might as well fix the other DNS records...<br />
sudo service dnsmasq restart<br />
ping bitpost.com # EVENTUALLY this will work! May need to repeat this AND previous step.<br />
* Ask Tom to update E-S DNS to use new IP<br />
* Upgrade abtdev1, then all Ubuntu boxes (glam is toughest), then positronic last, with this pattern:<br />
mh-update-ubuntu # and reboot<br />
sudo do-release-upgrade # best to connect directly, but ssh worked fine too<br />
sudo shutdown -h now # to prep for melange reboot<br />
* Upgrade hive's TrueNAS install, via https://hive CHECK FOR UPDATES, then shut it down<br />
* Update and reboot melange PROXMOX install, via https://melange:8006 Datacenter > melange > Updates<br />
* CHECK EVERYTHING<br />
** proxmox samba share for backups<br />
** samba shares<br />
** at ptl to ensure it can get to positronic<br />
** shitcutter and blogs and wiki and...<br />
** I had a terrible time getting GLAM apache + PHP working again now that Ubuntu uses PHP 8.1; just needed to ENABLE THE MODULE, ffs:<br />
a2enmod php8.1<br />
<br />
== 6.3 > 7.0 ==<br />
<br />
Proxmox uses apt for upgrades.<br />
I followed [https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 this], for the most part.<br />
* Update all VMs<br />
* Shut down all VMs<br />
* Fully update current version's apt packages - this took me from 6.3 to 6.4, a necessary first step.<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Upgrade basic apt sources list from buster to bullseye<br />
sudo sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list<br />
# instructions discuss pve-enterprise but I needed to change pve-no-subscription instead - exact same steps, otherwise<br />
# i.e., leave this commented out, but might as well set it to bullseye<br />
# /etc/apt/sources.list.d/pve-enterprise.list<br />
# and update this one to bullseye<br />
# /etc/apt/sources.list.d/pve-no-subscription.list<br />
* Perform the full upgrade to bullseye / PVE 7<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Reboot<br />
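<br />
Worth noting: the linked wiki page also provides a readiness checker script; running it before (and re-running it after) the dist-upgrade is a good sanity check, though it isn't captured in the notes above:<br />
 sudo pve6to7 --full<br />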
<br />
== Manual restart notes ==<br />
<br />
NOTE: This shouldn't be a problem anymore with the newer staged startup order.<br />
<br />
One time the bandit samba shares didn't mount (it comes up too fast, perhaps?). So restart them, then restart qbittorrent-nox:<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
I did another round of `apt update && apt dist-upgrade` without stopping containers and it went fine (with bandit fixup still needed after reboot, tho).<br />
<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
ssh bandit<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
== Add 7 1TB zraid ==<br />
<br />
After adding 7 new 1 TB ssds:<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sdh 8:112 0 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2fb01 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01<br />
sdi 8:128 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdj 8:144 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdk 8:160 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sdl 8:176 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdm 8:192 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2e131 /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131<br />
sdn 8:208 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a3009a /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-eui.002538510141169d /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E<br />
<br />
<br />
Before adding 7 new 1 TB ssds:<br />
<pre><br />
🌐 m@melange [~] ls /dev/<br />
autofs dm-8 i2c-7 net stdin tty28 tty5 ttyS12 ttyS6 vcsu1<br />
block dm-9 i2c-8 null stdout tty29 tty50 ttyS13 ttyS7 vcsu2<br />
btrfs-control dri i2c-9 nvme0 tty tty3 tty51 ttyS14 ttyS8 vcsu3<br />
bus ecryptfs initctl nvme0n1 tty0 tty30 tty52 ttyS15 ttyS9 vcsu4<br />
char fb0 input nvme0n1p1 tty1 tty31 tty53 ttyS16 udmabuf vcsu5<br />
console fd kmsg nvme0n1p2 tty10 tty32 tty54 ttyS17 uhid vcsu6<br />
core full kvm nvme0n1p3 tty11 tty33 tty55 ttyS18 uinput vfio<br />
cpu fuse lightnvm nvram tty12 tty34 tty56 ttyS19 urandom vga_arbiter<br />
cpu_dma_latency gpiochip0 log port tty13 tty35 tty57 ttyS2 userio vhci<br />
cuse hpet loop0 ppp tty14 tty36 tty58 ttyS20 vcs vhost-net<br />
disk hugepages loop1 pps0 tty15 tty37 tty59 ttyS21 vcs1 vhost-vsock<br />
dm-0 hwrng loop2 psaux tty16 tty38 tty6 ttyS22 vcs2 watchdog<br />
dm-1 i2c-0 loop3 ptmx tty17 tty39 tty60 ttyS23 vcs3 watchdog0<br />
dm-10 i2c-1 loop4 ptp0 tty18 tty4 tty61 ttyS24 vcs4 zero<br />
dm-11 i2c-10 loop5 pts tty19 tty40 tty62 ttyS25 vcs5 zfs<br />
dm-12 i2c-11 loop6 pve tty2 tty41 tty63 ttyS26 vcs6<br />
dm-13 i2c-12 loop7 random tty20 tty42 tty7 ttyS27 vcsa<br />
dm-14 i2c-13 loop-control rfkill tty21 tty43 tty8 ttyS28 vcsa1<br />
dm-2 i2c-14 mapper rtc tty22 tty44 tty9 ttyS29 vcsa2<br />
dm-3 i2c-2 mcelog rtc0 tty23 tty45 ttyprintk ttyS3 vcsa3<br />
dm-4 i2c-3 mem shm tty24 tty46 ttyS0 ttyS30 vcsa4<br />
dm-5 i2c-4 mpt2ctl snapshot tty25 tty47 ttyS1 ttyS31 vcsa5<br />
dm-6 i2c-5 mpt3ctl snd tty26 tty48 ttyS10 ttyS4 vcsa6<br />
dm-7 i2c-6 mqueue stderr tty27 tty49 ttyS11 ttyS5 vcsu<br />
<br />
</pre><br />
<br />
== macOS USB passthru failed attempt ==<br />
<br />
Simple USB passthrough (via the Proxmox UI) doesn't work on macOS guests. Tried setting the USB mapping via the console, following this:<br />
sudo qm monitor 111<br />
qm> info usbhost<br />
qm> quit<br />
sudo qm set 111 -usb1 host=05ac:12a8<br />
<br />
No luck, same result. Reading Nick's remarks on USB forwarding, try resetting the machine type:<br />
machine: pc-q35-6.0 (instead of latest, which was 6.2 at time of writing)<br />
remove this from /etc/pve/qemu-server/111.conf: -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off<br />
<br />
Hmm.. perhaps it is a conflict between Nick's usb keyboard config and my usb port selection... try plugging usb into another port and remapping...<br />
<br />
No luck. FFS. Reset to 6.2 and see if we have any luck with hotplug line removed from config... Nope.<br />
<br />
Keep trying permutations... nothing from googling indicates that this shouldn't just FUCKING WORK...<br />
<br />
Remove this and re-add the hotplug line, on the off chance it shouldn't be used with q35 v6.2:<br />
-global nec-usb-xhci.msi=off<br />
<br />
Nope, that just caused a problem with "SpringBoard" not working on this Mac, or some shit. Re-adding the line...<br />
<br />
Well what now? Google more? <br />
<br />
Update and reboot proxmox and retry... no luck.<br />
<br />
Try changing from blue to light-blue port... the device is mapped so it should be passed through... nope.<br />
<br />
Try [https://raw.githubusercontent.com/vzamora/Proxmox-Cheatsheet/main/General%20Proxmox%20Setup%20and%20Notes/USB%20Passthrough%20Notes.md this guy's approach] to mount an EFI Disk<br />
lsusb<br />
Bus 004 Device 009: ID 05ac:12a8 Apple, Inc. iPhone 5/5C/5S/6/SE<br />
ls -al /dev/bus/usb/004/009<br />
crw-rw-r-- 1 root root 189, 392 Jul 22 16:10 /dev/bus/usb/004/009<br />
sudo emacs /etc/pve/qemu-server/111.conf<br />
lxc.cgroup.devices.allow: c 189:* rwm<br />
lxc.mount.entry: /dev/bus/usb/004 dev/bus/usb/004 none bind,optional,create=dir<br />
<br />
Nope.<br />
<br />
Try mapping the port instead of device ID, from the Proxmox UI... Nope.<br />
<br />
How can I check the Apple side for any issues? Straight-up google for that: macOS not seeing a USB device.<br />
System Information > USB > nada<br />
<br />
Hrmphhhh. Never got it working. Re-google next month maybe...<br />
<br />
== Add samba shares manually ==<br />
During original configuration, I added samba shares manually.<br />
sudo emacs /etc/fstab # and paste samba stanza from another machine<br />
sudo emacs /root/samba_credentials<br />
sudo mkdir /spiceflow && sudo chmod 777 /spiceflow<br />
🌐 m@melange [~] mkdir /spiceflow/bitpost<br />
🌐 m@melange [~] mkdir /spiceflow/grim<br />
🌐 m@melange [~] mkdir /spiceflow/mack<br />
🌐 m@melange [~] mkdir /spiceflow/reservoir<br />
🌐 m@melange [~] mkdir /spiceflow/sassy<br />
🌐 m@melange [~] mkdir /spiceflow/safe<br />
<br />
Now you can mount 'em up and hang 'em high!<br />
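<br />
For reference, the stanza has roughly this shape (a sketch only — the server name, share, and mount options here are illustrative assumptions; copy the real line as noted above):<br />
 # illustrative /etc/fstab entry; adapt names and options to the real stanza<br />
 //hive/safe /spiceflow/safe cifs credentials=/root/samba_credentials,uid=m,gid=m,iocharset=utf8 0 0<br />
Then sudo mount -a picks them all up.<br />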
<br />
== Old VMs ==<br />
[[Mongolden]]</div>Mhttps://bitpost.com/w/index.php?title=Melange&diff=7467Melange2024-03-23T20:33:03Z<p>M: /* LSI PCI-E board passthru */</p>
<hr />
<div>Running ProxMox and many VMs.<br />
<br />
== VMs ==<br />
<br />
[[Hive]] - [[AbtDev1]] - [[Matcha]] - [[Bandit]] - [[Positronic]] - [[Morosoph]] - [[Glam]] - [[Matryoshka]] - [[Hoard]] - [[Tornados]]<br />
<br />
[[Add a new VM to melange]] - [[Clone a melange VM]]<br />
<br />
=== VM Backups ===<br />
Backups of all VMs (except hive data) are here:<br />
Datacenter > Storage > bitpost-backup<br />
That is a directly-configured samba link to bitpost.com's softraid. It has enough raided storage to fit cascading backups for all VMs there. Whoooop.<br />
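<br />
For a one-off backup to that storage from the melange shell, something like this should work (the VM ID, mode, and compression flags here are illustrative — the scheduled backups via the UI are the normal path):<br />
 sudo vzdump 104 --storage bitpost-backup --mode snapshot --compress zstd<br />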
<br />
NOTE: NEVER MOUNT /spiceflow/bitpost, as it may conflict with Proxmox samba link. The Proxmox samba link should come up on melange reboot, and do everything it needs, itself.<br />
<br />
== Usage ==<br />
<br />
=== Copy media ===<br />
On melange, you can plug microSD cards into the melange hub, and you can also reach the samba shares.<br />
<br />
So a direct copy from the shares to microSD is possible without the 1Gbit network bottleneck.<br />
sudo su -<br />
mount /dev/sdh1 /media<br />
mount /spiceflow/safe<br />
rsync --progress /media/... /spiceflow/safe/...<br />
<br />
Turns out that's not much of a bottleneck yet... my best cards were only getting 100MB/s write speed... test more cards maybe! In the meantime, just use Nautilus on cast.<br />
<br />
Reading should definitely be faster though, if you need to copy from microSD to the shares.<br />
<br />
=== Configuration ===<br />
<br />
==== Start order ====<br />
Each VM has a "Start/Shutdown order" option. We want some VMs to start first. To do that, we set them to a specific order number, with a delay, which slots them in to start before the default VMs. You can also specify an up value, allowing for a pause before the VMs in the next order start. Nice.<br />
<br />
We want hive to start first, so NAS drives can be connected on startup of other VMs.<br />
<br />
We might want positronic to start before AbtDev1.... But to do that right, we need to set ALL other VMs to order 2, and AbtDev1 to order 3. It's not even really needed though - AbtDev1 won't be connecting to positronic until I go there and run a development build, and by then positronic should be well under way. So I'll go the much-easier route and leave all VMs other than hive to the default.<br />
<br />
NOTE that bitpost is on bare metal, and needs positronic, so manage that manually on any restart of melange!<br />
<br />
Hive: order=1,up=60<br />
All other VMs: order=Any (the default)<br />
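<br />
The same option can be set from the CLI instead of the Options dialog; a sketch for hive (VM 104 in these notes):<br />
 sudo qm set 104 -startup order=1,up=60<br />
 sudo grep startup /etc/pve/qemu-server/104.conf # verify<br />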
<br />
== Upgrade ==<br />
<br />
=== Minor updates ===<br />
<br />
Minor updates are so easy, e.g. going to 7.3.6.<br />
<br />
* (optional) You don't NEED to, but it's a good time to upgrade and shut down any VMs. It also went fine doing this while leaving all VMs up, though!<br />
* Click:<br />
Datacenter > melange > (melange bar) > Updates > Upgrade<br />
* Reboot<br />
<br />
=== Reboot after upgrade ===<br />
<br />
Melange VMs all come back nicely, because they come up well staged. See the start order of the VMs for details, particularly hive:<br />
VM > Options > Start/shutdown order: order=1,up=90<br />
<br />
== Hardware ==<br />
* Motherboard: [https://www.asus.com/motherboards-components/motherboards/workstation/pro-ws-x570-ace/ ASUS Pro WS X570-ACE]<br />
** Storage: 2 M.2 slots, one free!<br />
1 TB M.2 nvme <br />
/dev/mapper/pve-root / # /dev/nvme0n1p3<br />
/boot/efi /dev/nvme0n1p2<br />
* GPU: Nvidia GK208 - GeForce GT 640<br />
sudo lspci|grep VGA<br />
VGA compatible controller: NVIDIA Corporation GK208 [GeForce GT 640 Rev. 2] (rev a1)<br />
<br />
* [https://www.amazon.com/gp/product/B00DSURZYS/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1 LSI Broadcom SAS 9300-8i 8-port 12Gb/s SATA+SAS PCI-Express 3.0 Low Profile Host Bus Adapter]: Pass through of 8 drive connections to hive to use for its storage<br />
<br />
* 4 SAS connections on mobo, with a SAS-to-SATA cable; make sure to turn on "SATA mode" for U.2 in BIOS; all passed through to hive<br />
<br />
* 4 SATA ssd connections on mobo, all passed through to hive<br />
<br />
=== Memory ===<br />
32GB x 4 sticks (maxed out)<br />
<br />
128GB total, distributed as follows:<br />
* 34GB AbtDev1<br />
* 24GB Hive<br />
* 16GB Positronic<br />
* 12GB Matcha<br />
* 12GB Glam<br />
* 06GB Matryoshka<br />
* 06GB Bandit<br />
* 03GB Morosoph<br />
* 04GB Hoard (stopped)<br />
* 04GB Tornados (stopped)<br />
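<br />
Quick arithmetic sanity check that the allocation fits (the OS reports ~125GB usable, per the historic note below):<br />
 echo $(( 34+24+16+12+12+6+6+3+4+4 )) # 121 GB allocated, ~4GB headroom for the host<br />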
<br />
Historic update after 64 > 128 upgrade<br />
Had ~83GB distributed; distributed +37GB more, leaving ~8GB (the OS provides ~125/128GB, i.e. it needs ~3GB)<br />
* 20GB AbtDev1 + 14 = [https://www.unitconverters.net/data-storage/gb-to-mb.htm 34816MB]<br />
* 19GB Hive + 5 = 24576MB<br />
* 10GB Positronic +6 = 16384MB<br />
* 08GB Glam +4 = 12288MB<br />
* 08GB Matcha +4 = 12288MB<br />
* 02GB Bandit +4<br />
<br />
=== Passthroughs ===<br />
We pass the LSI PCI-E board (and its 8 SSD drives), the 4 SATA SSD drives, and the 4 U.2 SSD drives to Hive, for storage management.<br />
<br />
==== LSI PCI-E board passthru ====<br />
<br />
I didn't log this when I did it originally, but it's right there in the 104.conf file. The passthrough itself is the hostpci0 line; scsihw just sets the virtual SCSI controller type:<br />
<br />
🌐 m@melange [~] sudo cat /etc/pve/qemu-server/104.conf<br />
hostpci0: 0a:00.0,rombar=0<br />
scsihw: virtio-scsi-pci<br />
<br />
==== SSD drives passthru ====<br />
<br />
* Identify drives on melange<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sdh 8:112 0 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2fb01 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01<br />
sdi 8:128 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdj 8:144 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdk 8:160 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sdl 8:176 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdm 8:192 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2e131 /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131<br />
sdn 8:208 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a3009a /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-eui.002538510141169d /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E<br />
<br />
* Use '''qm set''' to pass them all to hive (VM 104)<br />
<br />
sudo qm set 104 -scsi11 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56<br />
sudo qm set 104 -scsi12 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01<br />
sudo qm set 104 -scsi13 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2<br />
sudo qm set 104 -scsi14 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6<br />
sudo qm set 104 -scsi15 /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B<br />
sudo qm set 104 -scsi16 /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131<br />
sudo qm set 104 -scsi17 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A<br />
<br />
# check<br />
sudo cat /etc/pve/qemu-server/104.conf<br />
<br />
(restart hive)<br />
<br />
==== USB ====<br />
<br />
Plug something into melange and see what it sees:<br />
lsusb<br />
Bus 004 Device 002: ID 05ac:12a8 Apple, Inc. iPhone 5/5C/5S/6/SE<br />
<br />
Now we will pass that port through:<br />
proxmox UI > Datacenter > Nodes > VMs > 111 (matcha) > Hardware > Add > USB > it will actually show up in a pick list!<br />
<br />
Hell yeah so easy.<br />
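<br />
Under the hood, the picker just writes a usbN line into /etc/pve/qemu-server/111.conf — the same line the qm set form used elsewhere in these notes would produce:<br />
 usb1: host=05ac:12a8<br />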
<br />
== [[Melange history|History]] ==</div>Mhttps://bitpost.com/w/index.php?title=Melange_history&diff=7466Melange history2024-03-23T20:21:27Z<p>M: </p>
<hr />
<div><br />
== GRIM DEATH ==<br />
<br />
2024/03/23 We are harvesting the hive grim drives for use as melange drives for VM storage.<br />
<br />
I changed the location of the hive System Dataset Pool from grim to safe.<br />
<br />
I copied all grim data to safe (oops, I forgot SharedDownloads... it's empty now...)<br />
<br />
I removed the 'grim' pool from FreeNAS.<br />
<br />
Now I need to move the drives! I want to keep the PCI card as pass-thru, but the two grim drives are on it.<br />
* make note of all hive drive assignments<br />
* open melange<br />
** remove both grim drives from the PCI passthru<br />
** move the one mine drive that is on SATA from SATA to one of the PCI passthroughs<br />
** move one safe drive from SATA to the other of the PCI passthroughs<br />
** add both grim drives to SATA<br />
* close and restart melange and see if you can reconnect everything; if not, RESET/KEEP GOING brother!<br />
<br />
I removed the passthru on the drive:<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sdh 8:112 1 7.3T 0 disk /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H /dev/disk/by-id/wwn-0x5002538f33710e1d<br />
<br />
sudo cat /etc/pve/qemu-server/104.conf<br />
<br />
sudo qm set 104 -delete ...<br />
<br />
== 2023 July full upgrade ==<br />
I need to install a Windows 11 VM. Also have Ubuntu 20.04 machines that should be moved to 22.04. Figured as good a reason as any for a full upgrade of everything.<br />
<br />
* Upgrade bitpost first. Upon reboot, my $*(@ IP changed again. Fuck you google. Spent a while resetting that. Here are the notes (also in my Red Dead RP journal, keepin it real (real accessible when everything's down), lol!):<br />
cast$ ssh bitpost # not bp or bitpost.com, so we get to the LAN resource<br />
sudo su -<br />
stronger_firewall_and_save # internet should now work<br />
# get new IP from whatsmyip<br />
# fix bitpost.com DNS at domains.google.com<br />
# WAIT for propagation.... might as well fix the other DNS records...<br />
sudo service dnsmasq restart<br />
ping bitpost.com # EVENTUALLY this will work! May need to repeat this AND previous step.<br />
* Ask Tom to update E-S DNS to use new IP<br />
* Upgrade abtdev1, then all Ubuntu boxes (glam is toughest), then positronic last, with this pattern:<br />
mh-update-ubuntu # and reboot<br />
sudo do-release-upgrade # best to connect directly, but ssh worked fine too<br />
sudo shutdown -h now # to prep for melange reboot<br />
* Upgrade hive's TrueNAS install, via https://hive CHECK FOR UPDATES, then shut it down<br />
* Update and reboot melange PROXMOX install, via https://melange:8006 Datacenter > melange > Updates<br />
* CHECK EVERYTHING<br />
** proxmox samba share for backups<br />
** samba shares<br />
** at ptl to ensure it can get to positronic<br />
** shitcutter and blogs and wiki and...<br />
** I had a terrible time getting GLAM apache + PHP working again now that Ubuntu uses PHP 8.1; just needed to ENABLE THE MODULE, ffs:<br />
a2enmod php8.1<br />
<br />
== 6.3 > 7.0 ==<br />
<br />
Proxmox uses apt for upgrades.<br />
I followed [https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 this], for the most part.<br />
* Update all VMs<br />
* Shut down all VMs<br />
* Fully update current version's apt packages - this took me from 6.3 to 6.4, a necessary first step.<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Upgrade basic apt sources list from buster to bullseye<br />
sudo sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list<br />
# instructions discuss pve-enterprise but I needed to change pve-no-subscription instead - exact same steps, otherwise<br />
# i.e., leave this commented out, but might as well set it to bullseye<br />
# /etc/apt/sources.list.d/pve-enterprise.list<br />
# and update this one to bullseye<br />
# /etc/apt/sources.list.d/pve-no-subscription.list<br />
* Perform the full upgrade to bullseye / PVE 7<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Reboot<br />
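<br />
Worth noting: the linked wiki page also provides a readiness checker script; running it before (and re-running it after) the dist-upgrade is a good sanity check, though it isn't captured in the notes above:<br />
 sudo pve6to7 --full<br />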
<br />
== Manual restart notes ==<br />
<br />
NOTE: This shouldn't be a problem anymore with the newer staged startup order.<br />
<br />
One time the bandit samba shares didn't mount (it comes up too fast, perhaps?). So restart them, then restart qbittorrent-nox:<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
I did another round of `apt update && apt dist-upgrade` without stopping containers and it went fine (with bandit fixup still needed after reboot, tho).<br />
<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
ssh bandit<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
== Add 7 1TB zraid ==<br />
<br />
After adding 7 new 1 TB ssds:<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sdh 8:112 0 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2fb01 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01<br />
sdi 8:128 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdj 8:144 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdk 8:160 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sdl 8:176 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdm 8:192 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2e131 /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131<br />
sdn 8:208 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a3009a /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-eui.002538510141169d /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E<br />
<br />
<br />
Before adding 7 new 1 TB ssds:<br />
<pre><br />
🌐 m@melange [~] ls /dev/<br />
autofs dm-8 i2c-7 net stdin tty28 tty5 ttyS12 ttyS6 vcsu1<br />
block dm-9 i2c-8 null stdout tty29 tty50 ttyS13 ttyS7 vcsu2<br />
btrfs-control dri i2c-9 nvme0 tty tty3 tty51 ttyS14 ttyS8 vcsu3<br />
bus ecryptfs initctl nvme0n1 tty0 tty30 tty52 ttyS15 ttyS9 vcsu4<br />
char fb0 input nvme0n1p1 tty1 tty31 tty53 ttyS16 udmabuf vcsu5<br />
console fd kmsg nvme0n1p2 tty10 tty32 tty54 ttyS17 uhid vcsu6<br />
core full kvm nvme0n1p3 tty11 tty33 tty55 ttyS18 uinput vfio<br />
cpu fuse lightnvm nvram tty12 tty34 tty56 ttyS19 urandom vga_arbiter<br />
cpu_dma_latency gpiochip0 log port tty13 tty35 tty57 ttyS2 userio vhci<br />
cuse hpet loop0 ppp tty14 tty36 tty58 ttyS20 vcs vhost-net<br />
disk hugepages loop1 pps0 tty15 tty37 tty59 ttyS21 vcs1 vhost-vsock<br />
dm-0 hwrng loop2 psaux tty16 tty38 tty6 ttyS22 vcs2 watchdog<br />
dm-1 i2c-0 loop3 ptmx tty17 tty39 tty60 ttyS23 vcs3 watchdog0<br />
dm-10 i2c-1 loop4 ptp0 tty18 tty4 tty61 ttyS24 vcs4 zero<br />
dm-11 i2c-10 loop5 pts tty19 tty40 tty62 ttyS25 vcs5 zfs<br />
dm-12 i2c-11 loop6 pve tty2 tty41 tty63 ttyS26 vcs6<br />
dm-13 i2c-12 loop7 random tty20 tty42 tty7 ttyS27 vcsa<br />
dm-14 i2c-13 loop-control rfkill tty21 tty43 tty8 ttyS28 vcsa1<br />
dm-2 i2c-14 mapper rtc tty22 tty44 tty9 ttyS29 vcsa2<br />
dm-3 i2c-2 mcelog rtc0 tty23 tty45 ttyprintk ttyS3 vcsa3<br />
dm-4 i2c-3 mem shm tty24 tty46 ttyS0 ttyS30 vcsa4<br />
dm-5 i2c-4 mpt2ctl snapshot tty25 tty47 ttyS1 ttyS31 vcsa5<br />
dm-6 i2c-5 mpt3ctl snd tty26 tty48 ttyS10 ttyS4 vcsa6<br />
dm-7 i2c-6 mqueue stderr tty27 tty49 ttyS11 ttyS5 vcsu<br />
<br />
</pre><br />
<br />
== macOS USB passthru failed attempt ==<br />
<br />
Simple USB passthrough (via the Proxmox UI) doesn't work on macOS guests. Tried setting the USB mapping via the console, following this:<br />
sudo qm monitor 111<br />
qm> info usbhost<br />
qm> quit<br />
sudo qm set 111 -usb1 host=05ac:12a8<br />
<br />
No luck, same result. Reading Nick's remarks on USB forwarding, try resetting the machine type:<br />
machine: pc-q35-6.0 (instead of latest, which was 6.2 at time of writing)<br />
remove this from /etc/pve/qemu-server/111.conf: -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off<br />
<br />
Hmm.. perhaps it is a conflict between Nick's usb keyboard config and my usb port selection... try plugging usb into another port and remapping...<br />
<br />
No luck. FFS. Reset to 6.2 and see if we have any luck with hotplug line removed from config... Nope.<br />
<br />
Keep trying permutations... nothing from googling indicates that this shouldn't just FUCKING WORK...<br />
<br />
Remove this and re-add the hotplug line, on the off chance it shouldn't be used with q35 v6.2:<br />
-global nec-usb-xhci.msi=off<br />
<br />
Nope, that just caused a problem with "SpringBoard" not working on this Mac, or some shit. Re-adding the line...<br />
<br />
Well what now? Google more? <br />
<br />
Update and reboot proxmox and retry... no luck.<br />
<br />
Try changing from blue to light-blue port... the device is mapped so it should be passed through... nope.<br />
<br />
Try [https://raw.githubusercontent.com/vzamora/Proxmox-Cheatsheet/main/General%20Proxmox%20Setup%20and%20Notes/USB%20Passthrough%20Notes.md this guy's approach] to mount an EFI Disk<br />
lsusb<br />
Bus 004 Device 009: ID 05ac:12a8 Apple, Inc. iPhone 5/5C/5S/6/SE<br />
ls -al /dev/bus/usb/004/009<br />
crw-rw-r-- 1 root root 189, 392 Jul 22 16:10 /dev/bus/usb/004/009<br />
sudo emacs /etc/pve/qemu-server/111.conf<br />
lxc.cgroup.devices.allow: c 189:* rwm<br />
lxc.mount.entry: /dev/bus/usb/004 dev/bus/usb/004 none bind,optional,create=dir<br />
<br />
Nope.<br />
<br />
Try mapping the port instead of device ID, from the Proxmox UI... Nope.<br />
<br />
How can I check the Apple side for any issues? Straight-up google for that: macOS not seeing a USB device.<br />
System Information > USB > nada<br />
<br />
Hrmphhhh. Never got it working. Re-google next month maybe...<br />
<br />
== Add samba shares manually ==<br />
During original configuration, I added samba shares manually.<br />
sudo emacs /etc/fstab # and paste samba stanza from another machine<br />
sudo emacs /root/samba_credentials<br />
sudo mkdir /spiceflow && sudo chmod 777 /spiceflow<br />
🌐 m@melange [~] mkdir /spiceflow/bitpost<br />
🌐 m@melange [~] mkdir /spiceflow/grim<br />
🌐 m@melange [~] mkdir /spiceflow/mack<br />
🌐 m@melange [~] mkdir /spiceflow/reservoir<br />
🌐 m@melange [~] mkdir /spiceflow/sassy<br />
🌐 m@melange [~] mkdir /spiceflow/safe<br />
<br />
Now you can mount 'em up and hang 'em high!<br />
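<br />
For reference, the stanza has roughly this shape (a sketch only — the server name, share, and mount options here are illustrative assumptions; copy the real line as noted above):<br />
 # illustrative /etc/fstab entry; adapt names and options to the real stanza<br />
 //hive/safe /spiceflow/safe cifs credentials=/root/samba_credentials,uid=m,gid=m,iocharset=utf8 0 0<br />
Then sudo mount -a picks them all up.<br />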
<br />
== Old VMs ==<br />
[[Mongolden]]</div>Mhttps://bitpost.com/w/index.php?title=Melange_history&diff=7465Melange history2024-03-23T20:18:59Z<p>M: </p>
<hr />
<div><br />
== GRIM DEATH ==<br />
NOTE on 2024/03/23 I removed the 'grim' pool from FreeNAS to free the drives for VM storage. The content was moved to 'mine'.<br />
<br />
Now I need to move the drives! I want to keep the PCI card as pass-thru, but the two grim drives are on it.<br />
* make note of all hive drive assignments<br />
* open melange<br />
** remove both grim drives from the PCI passthru<br />
** move the one mine drive that is on SATA from SATA to one of the PCI passthroughs<br />
** move one safe drive from SATA to the other of the PCI passthroughs<br />
** add both grim drives to SATA<br />
* close and restart melange and see if you can reconnect everything; if not, RESET/KEEP GOING brother!<br />
<br />
I removed the passthru on the drive:<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sdh 8:112 1 7.3T 0 disk /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_S5VUNJ0W706320H /dev/disk/by-id/wwn-0x5002538f33710e1d<br />
<br />
sudo cat /etc/pve/qemu-server/104.conf<br />
<br />
sudo qm set 104 -delete ...<br />
<br />
== 2023 July full upgrade ==<br />
I need to install a Windows 11 VM. Also have Ubuntu 20.04 machines that should be moved to 22.04. Figured as good a reason as any for a full upgrade of everything.<br />
<br />
* Upgrade bitpost first. Upon reboot, my $*(@ IP changed again. Fuck you google. Spent a while resetting that. Here are the notes (also in my Red Dead RP journal, keepin it real (real accessible when everything's down), lol!):<br />
cast$ ssh bitpost # not bp or bitpost.com, so we get to the LAN resource<br />
sudo su -<br />
stronger_firewall_and_save # internet should now work<br />
# get new IP from whatsmyip<br />
# fix bitpost.com DNS at domains.google.com<br />
# WAIT for propagation.... might as well fix the other DNS records...<br />
sudo service dnsmasq restart<br />
ping bitpost.com # EVENTUALLY this will work! May need to repeat this AND previous step.<br />
* Ask Tom to update E-S DNS to use new IP<br />
* Upgrade abtdev1, then all Ubuntu boxes (glam is toughest), then positronic last, with this pattern:<br />
mh-update-ubuntu # and reboot<br />
sudo do-release-upgrade # best to connect directly, but ssh worked fine too<br />
sudo shutdown -h now # to prep for melange reboot<br />
* Upgrade hive's TrueNAS install, via https://hive CHECK FOR UPDATES, then shut it down<br />
* Update and reboot melange PROXMOX install, via https://melange:8006 Datacenter > melange > Updates<br />
* CHECK EVERYTHING<br />
** proxmox samba share for backups<br />
** samba shares<br />
** at ptl to ensure it can get to positronic<br />
** shitcutter and blogs and wiki and...<br />
** I had a terrible time getting GLAM apache + PHP working again now that Ubuntu uses PHP 8.1; just needed to ENABLE THE MODULE, ffs:<br />
a2enmod php8.1<br />
<br />
== 6.3 > 7.0 ==<br />
<br />
Proxmox uses apt for upgrades.<br />
I followed [https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 this], for the most part.<br />
* Update all VMs<br />
* Shut down all VMs<br />
* Fully update current version's apt packages - this took me from 6.3 to 6.4, a necessary first step.<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Upgrade basic apt sources list from buster to bullseye<br />
sudo sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list<br />
# instructions discuss pve-enterprise but I needed to change pve-no-subscription instead - exact same steps, otherwise<br />
# i.e., leave this commented out, but might as well set it to bullseye<br />
# /etc/apt/sources.list.d/pve-enterprise.list<br />
# and update this one to bullseye<br />
# /etc/apt/sources.list.d/pve-no-subscription.list<br />
* Perform the full upgrade to bullseye / PVE 7<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
* Reboot<br />
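<br />
Worth noting: the linked wiki page also provides a readiness checker script; running it before (and re-running it after) the dist-upgrade is a good sanity check, though it isn't captured in the notes above:<br />
 sudo pve6to7 --full<br />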
<br />
== Manual restart notes ==<br />
<br />
NOTE: This shouldn't be a problem anymore with the newer staged startup order.<br />
<br />
One time the bandit samba shares didn't mount (it comes up too fast, perhaps?). So restart them, then restart qbittorrent-nox:<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
I did another round of `apt update && apt dist-upgrade` without stopping containers and it went fine (with bandit fixup still needed after reboot, tho).<br />
<br />
sudo apt update<br />
sudo apt dist-upgrade<br />
ssh bandit<br />
mh-setup-samba-shares<br />
sudo service qbittorrent-nox restart<br />
<br />
== Add 7 1TB zraid ==<br />
<br />
After adding 7 new 1 TB ssds:<br />
🌐 m@melange [~] sudo lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)<br />
sdh 8:112 0 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2fb01 /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FB01<br />
sdi 8:128 0 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A2FD56 /dev/disk/by-id/wwn-0x500a0751e5a2fd56<br />
sdj 8:144 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D2 /dev/disk/by-id/wwn-0x500a0751e5a313d2<br />
sdk 8:160 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A313D6 /dev/disk/by-id/wwn-0x500a0751e5a313d6<br />
sdl 8:176 1 931.5G 0 disk /dev/disk/by-id/ata-CT1000MX500SSD1_2117E59AAE1B /dev/disk/by-id/wwn-0x500a0751e59aae1b<br />
sdm 8:192 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a2e131 /dev/disk/by-id/ata-CT1000MX500SSD1_2121E5A2E131<br />
sdn 8:208 1 931.5G 0 disk /dev/disk/by-id/wwn-0x500a0751e5a3009a /dev/disk/by-id/ata-CT1000MX500SSD1_2122E5A3009A<br />
nvme0n1 259:0 0 931.5G 0 disk /dev/disk/by-id/nvme-eui.002538510141169d /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_1TB_S4EWNJ0N107994E<br />
<br />
<br />
Before adding 7 new 1 TB ssds:<br />
<pre><br />
🌐 m@melange [~] ls /dev/<br />
autofs dm-8 i2c-7 net stdin tty28 tty5 ttyS12 ttyS6 vcsu1<br />
block dm-9 i2c-8 null stdout tty29 tty50 ttyS13 ttyS7 vcsu2<br />
btrfs-control dri i2c-9 nvme0 tty tty3 tty51 ttyS14 ttyS8 vcsu3<br />
bus ecryptfs initctl nvme0n1 tty0 tty30 tty52 ttyS15 ttyS9 vcsu4<br />
char fb0 input nvme0n1p1 tty1 tty31 tty53 ttyS16 udmabuf vcsu5<br />
console fd kmsg nvme0n1p2 tty10 tty32 tty54 ttyS17 uhid vcsu6<br />
core full kvm nvme0n1p3 tty11 tty33 tty55 ttyS18 uinput vfio<br />
cpu fuse lightnvm nvram tty12 tty34 tty56 ttyS19 urandom vga_arbiter<br />
cpu_dma_latency gpiochip0 log port tty13 tty35 tty57 ttyS2 userio vhci<br />
cuse hpet loop0 ppp tty14 tty36 tty58 ttyS20 vcs vhost-net<br />
disk hugepages loop1 pps0 tty15 tty37 tty59 ttyS21 vcs1 vhost-vsock<br />
dm-0 hwrng loop2 psaux tty16 tty38 tty6 ttyS22 vcs2 watchdog<br />
dm-1 i2c-0 loop3 ptmx tty17 tty39 tty60 ttyS23 vcs3 watchdog0<br />
dm-10 i2c-1 loop4 ptp0 tty18 tty4 tty61 ttyS24 vcs4 zero<br />
dm-11 i2c-10 loop5 pts tty19 tty40 tty62 ttyS25 vcs5 zfs<br />
dm-12 i2c-11 loop6 pve tty2 tty41 tty63 ttyS26 vcs6<br />
dm-13 i2c-12 loop7 random tty20 tty42 tty7 ttyS27 vcsa<br />
dm-14 i2c-13 loop-control rfkill tty21 tty43 tty8 ttyS28 vcsa1<br />
dm-2 i2c-14 mapper rtc tty22 tty44 tty9 ttyS29 vcsa2<br />
dm-3 i2c-2 mcelog rtc0 tty23 tty45 ttyprintk ttyS3 vcsa3<br />
dm-4 i2c-3 mem shm tty24 tty46 ttyS0 ttyS30 vcsa4<br />
dm-5 i2c-4 mpt2ctl snapshot tty25 tty47 ttyS1 ttyS31 vcsa5<br />
dm-6 i2c-5 mpt3ctl snd tty26 tty48 ttyS10 ttyS4 vcsa6<br />
dm-7 i2c-6 mqueue stderr tty27 tty49 ttyS11 ttyS5 vcsu<br />
<br />
</pre><br />
<br />
== macOS USB passthru failed attempt ==<br />
<br />
Simple USB passthrough (via the Proxmox UI) doesn't work on macOS guests. Tried setting the USB mapping via the console, following this:<br />
sudo qm monitor 111<br />
qm> info usbhost<br />
qm> quit<br />
sudo qm set 111 -usb1 host=05ac:12a8<br />
<br />
No luck, same result. Reading Nick's remarks on USB forwarding, try resetting the machine type:<br />
machine: pc-q35-6.0 (instead of latest, which was 6.2 at time of writing)<br />
remove this from /etc/pve/qemu-server/111.conf: -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off<br />
<br />
Hmm.. perhaps it is a conflict between Nick's usb keyboard config and my usb port selection... try plugging usb into another port and remapping...<br />
<br />
No luck. FFS. Reset to 6.2 and see if we have any luck with hotplug line removed from config... Nope.<br />
<br />
Keep trying permutations... nothing from googling indicates that this shouldn't just FUCKING WORK...<br />
<br />
Remove this and re-add the hotplug line, on the off chance it shouldn't be used with q35 v6.2:<br />
-global nec-usb-xhci.msi=off<br />
<br />
Nope, that just caused a problem with "SpringBoard" not working on this Mac, or some shit. Re-adding the line...<br />
<br />
Well what now? Google more? <br />
<br />
Update and reboot proxmox and retry... no luck.<br />
<br />
Try changing from blue to light-blue port... the device is mapped so it should be passed through... nope.<br />
<br />
Try [https://raw.githubusercontent.com/vzamora/Proxmox-Cheatsheet/main/General%20Proxmox%20Setup%20and%20Notes/USB%20Passthrough%20Notes.md this guy's approach] to mount an EFI Disk<br />
lsusb<br />
Bus 004 Device 009: ID 05ac:12a8 Apple, Inc. iPhone 5/5C/5S/6/SE<br />
ls -al /dev/bus/usb/004/009<br />
crw-rw-r-- 1 root root 189, 392 Jul 22 16:10 /dev/bus/usb/004/009<br />
sudo emacs /etc/pve/qemu-server/111.conf<br />
lxc.cgroup.devices.allow: c 189:* rwm<br />
lxc.mount.entry: /dev/bus/usb/004 dev/bus/usb/004 none bind,optional,create=dir<br />
<br />
Nope. (In hindsight, those lxc.* keys are LXC container settings, not QEMU VM settings, so they would have no effect in a qemu-server conf.)<br />
<br />
Try mapping the port instead of device ID, from the Proxmox UI... Nope.<br />
<br />
How can I check the Apple side for any issues? Straight-up google for that: macOS not seeing a USB device.<br />
System Information > USB > nada<br />
<br />
Hrmphhhh. Never got it working. Re-google next month maybe...<br />
<br />
== Add samba shares manually ==<br />
During original configuration, I added samba shares manually.<br />
 sudo emacs /etc/fstab # and paste samba stanza from another machine (example below)<br />
sudo emacs /root/samba_credentials<br />
sudo mkdir /spiceflow && sudo chmod 777 /spiceflow<br />
🌐 m@melange [~] mkdir /spiceflow/bitpost<br />
🌐 m@melange [~] mkdir /spiceflow/grim<br />
🌐 m@melange [~] mkdir /spiceflow/mack<br />
🌐 m@melange [~] mkdir /spiceflow/reservoir<br />
🌐 m@melange [~] mkdir /spiceflow/sassy<br />
🌐 m@melange [~] mkdir /spiceflow/safe<br />
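<br />
For reference, each share gets an fstab stanza something like this - the host, share name, and options here are just an example, so copy the real stanza from another machine:<br />
 //192.168.21.1/bitpost /spiceflow/bitpost cifs credentials=/root/samba_credentials,iocharset=utf8,uid=m,gid=m 0 0<br />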
<br />
Now you can mount em up and hang em high!<br />
<br />
== Old VMs ==<br />
[[Mongolden]]</div>Mhttps://bitpost.com/w/index.php?title=A_better_Trader_testing&diff=7464A better Trader testing2024-03-12T03:12:23Z<p>M: </p>
<hr />
<div>== Intro ==<br />
<br />
This page introduces the app to members of the test team, and explains the basics of performing testing.<br />
<br />
== Projects ==<br />
<br />
There are two ways to get to the app, either via [https://abettertrader.com the website] or the iPhone app (installed and updated via the [https://apps.apple.com/us/app/testflight/id899247664 TestFlight app]). Both need testing separately from each other. Testing is managed on the project boards.<br />
<br />
=== Project Boards ===<br />
<br />
Each test task is represented by a "ticket" on a "board".<br />
<br />
Initially the ticket is created with a description of the tests that are needed. As you test, you will capture all the details of the tests in the ticket, including screenshots, whether the test passed, and all other details.<br />
<br />
The tickets on the boards will generally flow from left to right as the status changes:<br />
Open > Sprint > In progress > Failed > Ready For Testing > Testing > Ready For Release > Done (released)<br />
<br />
* [https://shitcutter.com/the-digital-age/a-better-trader/frontend/atui/-/boards/21 Website board]<br />
* [https://shitcutter.com/the-digital-age/a-better-trader/frontend/ios/abettertrader/-/boards/18 iPhone app board]<br />
<br />
=== Server ===<br />
<br />
In addition, there is a server that works behind the scenes. This is mostly tested automatically through tests written by the developers, and testers don't usually need to worry about it.<br />
<br />
* [https://shitcutter.com/the-digital-age/a-better-trader/server/-/boards/9 Server board]<br />
<br />
== Smoke Testing ==<br />
<br />
As the code changes, everything typically needs to be retested, as a change to the code in one place can cause unintended "side effect" problems in other areas. Just because something works today, sadly, that doesn't mean it will work tomorrow. Usually, everything re-breaks a million times. Yay. Good for job security though.<br />
<br />
So it is never bad to give the entire app a good retest. This is sometimes called a smoke test. For A better Trader, here's a good set of tests you can run through to do a smoke test.<br />
<br />
* When you add a stock, it becomes a "cycle" that is tracked across buys and sells. At any time you may be watching it to buy it, getting ready to sell it, or holding it (preventing it from being sold - this only applies to cycles that are currently owned). Verify that you can change these states for a cycle, by manually selling, buying and holding it.<br />
* During market hours (M-F 9:30am-4pm EST), buys and sells can happen quickly, but no buys or sells happen outside market hours; a cycle will just get marked as "pending order". Outside market hours, make sure you can set up pending orders and cancel them, and that you can switch the cycle status between actively trading and being held.<br />
* Cycles can change state either because of direct requests by the user, or because the server bought or sold when it felt conditions were right. Test to ensure that the server is buying and selling during market hours. You will see cycles change between owned and watched, automatically. Once you get good at watching the charts, you will be able to predict these events better.</div>Mhttps://bitpost.com/w/index.php?title=A_better_Trader&diff=7459A better Trader2024-03-12T02:44:39Z<p>M: </p>
<hr />
<div>[https://abettertrader.com LIVE SITE] -- [http://localhost:8080 DEV SITE]<br />
<br />
[https://bitpost.com/files/AbT_docs/ Generated Documentation]<br />
<br />
== [[A better Trader testing]] ==<br />
<br />
== Use Case ==<br />
<br />
ALL WE ASK OF USERS IS TO WATCH THE NEWS AND...<br />
<br />
-----------------------------------------------------<br />
BE INTUITIVE ABOUT A STOCK'S NEXT DIRECTION<br />
-----------------------------------------------------<br />
<br />
IF A PICKED IS LOOKING GOOD: let it ride; actions: buy; reset the bracket to hold it longer<br />
IF A PICKED IS LOOKING BAD: delete it - there are more fish in the sea, get OUT<br />
IF AN OWNED IS LOOKING GOOD: let it ride; actions: sell; hold<br />
IF AN OWNED IS LOOKING BAD: let it ride; actions: sell and drop, raise bracket<br />
REGULARLY ADD AND REMOVE PICKS<br />
<br />
THAT'S THE DAILY USAGE PATTERN<br />
we must do everything else! <br />
all that boring analysis should be done for them, unless they really want to obsess<br />
<br />
== Design ==<br />
<br />
=== Events ===<br />
<br />
* Types<br />
** this account's snapshots<br />
** significant highlights (across all accounts and cycles); NOTE to make this performant, we only use snapshots (all types) to find these highlights; snapshots should happen on every major change tho! Cool.<br />
** news (TBD)<br />
* Provide more detail for monthly-or-below, and less for above-monthly<br />
* The server consolidates data into all "significant" events for the timeline<br />
* The user can filter events on the client for readability in a small UI, but they are always available for the given timeframe so filter changes can be quickly applied<br />
<br />
==== Monthly ====<br />
<br />
===== Realtime events =====<br />
* All user account snapshot events<br />
* "significant" changes to any account or cycle<br />
<br />
(MORE TODO)<br />
<br />
===== Daily-batched events =====<br />
(MORE TODO)<br />
<br />
==== Above-Monthly ====<br />
* Only provide account SELLS. Easy to query, plenty of data.<br />
(MORE TODO)<br />
<br />
== Patterns ==<br />
<br />
=== Basics ===<br />
<br />
* Load from SQL/nosql tables into [[Major Objects]]<br />
* Use a tight single-threaded message loop with async timers that set booleans when it is time to perform tasks (see the sketch after this list)<br />
* Offload heavy automated analysis to after-hours external analyzers, with the goal of applying yesterday's best fit to today<br />
* Server should provide minimal concise data to client, and client Javascript should do all UI rendering work.<br />
** Traditionally, we would inject const variables into html<br />
** Best practice but more of a refactor: make three separate calls to server to get html, javascript, and data. The html and javascript become cacheable static files.<br />
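<br />
A minimal sketch of that timer-flag pattern (names here are hypothetical, not the real code):<br />
<pre><br />
// Async timers only set flags; all real work happens on the single<br />
// message-loop thread, so the work itself needs no locking.<br />
#include <atomic><br />
#include <chrono><br />
#include <thread><br />
<br />
std::atomic<bool> g_b_time_to_save{false};<br />
<br />
void timerThread() {<br />
    for (;;) {<br />
        std::this_thread::sleep_for(std::chrono::seconds(30));<br />
        g_b_time_to_save = true;             // flag only - no work here<br />
    }<br />
}<br />
<br />
void messageLoop() {<br />
    for (;;) {<br />
        // processQuotes(); handleApiRequests(); ...<br />
        if (g_b_time_to_save.exchange(false)) {<br />
            // saveDirtyObjects();           // heavy work, done on the loop thread<br />
        }<br />
    }<br />
}<br />
</pre><br />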
<br />
=== Object model ===<br />
<br />
JSON schema is used to generate base data classes and database read/write code, to provide the most agile schema refactoring. Follow these patterns to keep it consistent:<br />
<br />
* use a constructor with defaults for all parameters:<br />
// This constructor serves several purposes:<br />
// 1) standard full-param constructor, efficient for both deserializing and initializing<br />
// 2) no-param constructor for reflection via quicktype<br />
// 3) id constructor for loading via id + quicktype fields<br />
// 4) id constructor for use as key for unsorted_set::find()<br />
BrokerAccount(<br />
int64_t db_id = PersistentIDObject::DBID_DO_NOT_SAVE, <br />
int64_t aar_add_arca_enabled = -1,<br />
...<br />
:<br />
// Call base class<br />
inherited(ba_max_db_id_,db_id),<br />
<br />
// internal members<br />
...<br />
{<br />
// persistent members<br />
...<br />
}<br />
<br />
* There are three use cases for new objects:<br />
** objects about to be loaded - use constructor params, and load in a value for db_id_<br />
** new objects that should be made persistent - track a max_db_id in the parent to provide the "next" db_id constructor parameter<br />
** temporary objects - use the default constructor<br />
<br />
* use addXXXToMemory() to init parent-child relationships<br />
StockRun& BrokerAccount::addStockRunToMemory(StockRun* psr, bool bInsertNewRank)<br />
psr->setParent(*this);<br />
runsByRank_.push_unsorted(psr);<br />
runs_.insert(psr);<br />
<br />
BrokerAccount &AppUser::addBrokerAccountToMemory(BrokerAccount *pba)<br />
// We need a valid db id.<br />
assert(pba->db_id_ != PersistentIDObject::DBID_UNSAVED);<br />
<br />
// The caller is responsible for ensuring the account doesn't exist.<br />
assert(findBrokerAccount(pba->db_id_) == accounts_.end());<br />
<br />
pba->setParent(*this);<br />
accounts_.insert(pba);<br />
accountsByStringId_.insert(pba);<br />
return *pba;<br />
<br />
* use pointers for parent objects<br />
set always-and-only once, on load<br />
use functions that return a reference, to use those objects<br />
example:<br />
// EXTERNAL REFERENCES<br />
// NOTE we are not responsible for these allocations.<br />
// Access pointers to parents as references.<br />
 // Access nullable pointers directly.<br />
void setParent(AppUser& au) { pau_ = &au; }<br />
AppUser& au() { assert(pau_ != 0); return *pau_; }<br />
AppUser& au() const { assert(pau_ != 0); return *pau_; }<br />
<br />
* squash 1:1 contained members into parent<br />
StockRun (Cycle):<br />
flatten these:<br />
Stock stock_;<br />
StockPick sp_;<br />
AutotradedStock as_;<br />
<br />
* store contained containers/vectors in separate tables, and fill in secondary pass<br />
StockRun (Cycle):<br />
BracketEvents sbe_;<br />
<br />
=== Web UI ===<br />
<br />
* bootstrap header and footer<br />
* possible subsection navbar (eg Accounts, Admin) that sticks to top with bootstrap header<br />
* forms: <br />
** For the simplest forms, just use inputs and a button, and capture the click in js. Don't use <form> as it is hard to prevent the default behavior.<br />
** For any substantial multi-field form, use helpers in at.js<br />
* tables: we define tables entirely in JSON and pop them up with bootstrap-table, see AnalysisData getJSON, getJSONColumnNames<br />
* ajax: see at.js<br />
* patch can provide partial JSON to do partial updates (don't touch fields that are not provided)<br />
* dates: moment.js - need to convert d3 date functions to moment<br />
* money: accounting.js<br />
<br />
== Documentation ==<br />
<br />
=== Performance Tracking ===<br />
<br />
We measure three types of performance:<br />
* SR: As a stock cycles, it tracks the %gain on each buy-sell; Each cycle may have a diff # of stocks but aggregating %change-per-cycle should have value<br />
* BA: This is where we can say "at time T1 we had value X; at time T2 we had value Y" and get precise gains.<br />
* AD: This is used across many cycles and accounts and needs aggregation similar to SR.<br />
<br />
* SR and AD performance should be avg-percentage based as there is no base "value" like with BA<br />
** use an "average gain per buy-sell cycle" => d_avg_pct_gain_[GTT_COUNT] + sells_count_[GTT_COUNT]<br />
<br />
* cycle stopsells may need closer inspection<br />
** track stopsells; perhaps red-flag at 1 stopsell, then bail on 2?<br />
** cycle stopsell should (eventually?) flag the aps as in critical need of an update via reanalysis of recent history; rerun analysis, then reset the stopsell to zero<br />
* track performance of ad<br />
** nothing to do with need to rerank or reanalyze<br />
** but just so we learn over time what the best metaranges are<br />
** we want to keep working in this area, expanding as it makes sense<br />
** eg separate stocks' volatility by price, volume, market...etc.<br />
<br />
=== [https://bitpost.com/files/AbT_docs/index.html Doxygen and SchemaSpy diagrams] ===<br />
<br />
SchemaSpy was used on Sqlite relationships to generate a nice [https://bitpost.com/files/AbT_docs/schemaspy/output/sqlite/relationships.html foreign key map].<br />
<br />
Doxygen shows class diagrams. Here are some central relationships:<br />
<br />
* [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_persistent_object.html PersistentObject] class hierarchy shows all persistent classes<br />
* [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_at_http_server.html AtHttpServer] has call graphs for functions like [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_at_http_server.html#abf8c8e1e8d1d0ba417470d6298e804cc GetAccount]<br />
* [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_broker_account.html BrokerAccount] [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_broker_account.html#ae8315bd08d9f69cedaef04ca4d1d1c97 ProcessQuote] call hierarchy<br />
<br />
=== MODEL ===<br />
<br />
AtController<br />
ui_<br />
memory_model_<br />
timers<br />
<br />
MemoryModel: delayed-write datastore manager; use dirty flags + timed, transactioned saveDirtyObjects() call<br />
prefs<br />
sq_<br />
brokers<br />
aps_<br />
users_<br />
<br />
TradingModel<br />
AppUser<br />
BrokerAccount: <br />
runs_ (sorted by id) <br />
runsByRank_ <br />
contains all cycles sorted by rank<br />
ONLY MANUAL has valid rank values<br />
but additional sort criteria is used to handle all cycles<br />
see StockRunsByRank_lessthan() <br />
broker (not worthy of its own layer)<br />
StockRun: rank, active, owned<br />
StockPick+AutotradedStock: quote processing<br />
SPASBracketEvent: stores one bracket-change event; includes triggering quote, bActive, buy/sell {quantity, commission, value}<br />
StockSnapshot: run, symbol, quote, quantity (for account snapshot history)<br />
<br />
Analysis data<br />
StockQuote <br />
db only has time+symbol+price <br />
memory also has p_aad<br />
AutoAnalysisData<br />
symbol<br />
aps_id<br />
stopsells<br />
profitable_sells <br />
StockRun<br />
aps<br />
AutotradeParameterSet<br />
bracket params<br />
many analysis vars - move these out to aad <br />
BracketEvent<br />
contains all details about any possible event<br />
for report and analysis we have a tighter version: <br />
 typedef vector<RunQAB> RunHistory<br />
RunQAB is just quote+time + optional Bracket ptr<br />
<br />
Order lifespan<br />
Sim and analysis buy: place order, wait for next stock quote, buyWasExecuted()<br />
Live buy: place order, poll for execution, buyWasExecuted()<br />
<br />
Stock model<br />
Stock<br />
StockQuote& quote_;<br />
typedef std::pair<const string,StockQuoteDetails*> StockQuote;<br />
StockQuoteDetails<br />
double d_quote_;<br />
time_t timestamp_;<br />
(+spike logic)<br />
int64_t n_quantity_;<br />
StockOrder so_;<br />
int64_t order_id_;<br />
ORDER_STATUS status_;<br />
int64_t quote_db_id_;<br />
<br />
=== QUOTE PROCESSING ===<br />
<br />
AlpacaQuotesWss::startWss() on_message for EVERY T QUOTE WE GET<br />
AlpacaQuotesWss::processQuote<br />
quotes_.push_back(q);<br />
<br />
AlpacaInterface::getQuotes()<br />
if (p_quotes_wss_->drainQuotesQueue(latest_quotes))<br />
return vetAndProcessQuotes(latest_quotes);<br />
---<br />
patc_->mm().processQuote(<br />
<br />
^^^ all that does nothing but provide raw quotes, no worries there<br />
<br />
bool MemoryModel::processQuote<br />
bool bReset = !sqd.bValid() || sqd.resetBracketsAtMarketOpen_;<br />
if ( bRealtime() ) bProcess = sqd.addToSpikeHistory(dQuote, timestamp);<br />
 if ( bReset || bProcess )<br />
---<br />
pa->processQuote(sq, bReset);<br />
<br />
BrokerAccount::processQuote(<br />
// skip quote if APS is not ready!<br />
if (!psr->aps().bIsReady() && au().mm().bRealtime()) continue;<br />
<br />
if (broker().bSimulation() && psr->bBuyOrderPending())<br />
psr->executeBuyOnQuote();<br />
else if (broker().bSimulation() && psr->bSellOrderPending())<br />
psr->executeSellOnQuote();<br />
else if (psr->processQuoteForBuy(sq))<br />
vsrToBuy.push_back(psr);<br />
else if (psr->processQuoteForSell(sq))<br />
vsrToSell.push_back(psr);<br />
<br />
for (StockRun *psr : vsrToSell)<br />
sell(*psr);<br />
<br />
for (StockRun *psr : vsrToBuy)<br />
// Try to buy until we're full!<br />
if (bMaxOpenOrders() || !buy(*psr))<br />
psr->resetBracketOnDelayedBuyAsNeeded();<br />
---<br />
next processing is here:<br />
StockRun::processQuoteForBuy<br />
 if (!bUnowned() || bOrderUnresolved()) return false;<br />
if (handlePickInstaspike()) log<br />
else if (bBounceType())<br />
---<br />
bool StockRun::handlePickInstaspike()<br />
if (<br />
sq().second->bSpikeFlatUp(aps().spike_protection_percent_) <br />
|| sq().second->bSpikeFlatDown(aps().spike_protection_percent_))<br />
return true; // Ignore it.<br />
---<br />
bSpikeFlatUp<br />
return ((quote_ - d_last_quote_) / quote_ > dPct);<br />
bSpikeFlatDown<br />
return (abs(d_last_quote_ - d_last_last_quote_) / d_last_quote_ < 0.002) && bSpikeDown(dPct);<br />
bSpikeDown<br />
return ((d_last_quote_ - quote_) / d_last_quote_ > dPct);<br />
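 // worked example, dPct = 0.01: last_last = 100, last = 100.1, quote = 98<br />
 //   flat: |100.1 - 100| / 100.1 = 0.1% < 0.2%, then drop: (100.1 - 98) / 100.1 = 2.1% > 1%<br />
 //   so bSpikeFlatDown fires and the instaspike handlers ignore the quote<br />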
---<br />
and here:<br />
StockRun::processQuoteForSell<br />
if (handleOwnedInstaspike()) // log-and-drop<br />
---<br />
StockRun::handleOwnedInstaspike()<br />
if (sq().second->bSpikeUpDown) // reset to last-last<br />
else if (sq().second->bSpikeFlatDown) // ignore it<br />
---<br />
bSpikeUpDown<br />
return ((d_last_quote_ - d_last_last_quote_) / d_last_quote_ > dPct) && bSpikeDown(dPct) && (abs(quote_ - d_last_quote_) / quote_ < 0.002);<br />
<br />
=== DEBUG INTO REST API HANDLERS ===<br />
<br />
break on server_http.hpp ln~370: if(REGEX_NS::regex_match(request->path, sm_res, regex_method.first)) {<br />
watch regex_method.first.str<br />
watch request->path<br />
<br />
=== CI ===<br />
<br />
MASTER SCRIPT: atci<br />
<br />
We will have a live site, a constantly running CI site, and multiple dev environments.<br />
<br />
RUN LIVE at bitpost.com:<br />
m@bitpost rs at<br />
m@bitpost # if that doesn't work, start a session: screen -S at <br />
m@bitpost cd ~/development/thedigitalage/AbetterTrader/server-prod<br />
m@bitpost atlive<br />
========================================================<br />
*** LIVE MODE ***<br />
========================================================<br />
CTRL-A CTRL-D<br />
<br />
RUN CI at bitpost.com:<br />
# Keep this running to ensure that changes are dynamically built as they are committed<br />
# It should run at a predictable publicly available url that can be checked regularly<br />
# It runs in TEST but should run in a test mode that has an account assigned to it so it is very much like LIVE<br />
# It runs release build in test mode<br />
CTRL-A CTRL-D<br />
<br />
RUN DEV anywhere but bitpost:<br />
# Dev has complete control; most common tasks:<br />
# Code fast with a local CI loop - as soon as a file is changed, CI should restart server in test mode, displaying server log and [https://addons.mozilla.org/en-US/firefox/addon/auto-reload/ refreshing server page]<br />
# kill server, build, run, refresh browser<br />
# Turn off CI loop to debug via IDE<br />
# Stop prod, pull down production database, run LIVE mode in debugger to diagnose production problems<br />
<br />
<br />
==== Old notes from `at readme` ====<br />
<pre><br />
TODO NEEDS WORK!!<br />
<br />
1) Set up a new dev environment: at setup <br />
This function defines continuous integration steps for our project.<br />
See: https://bitpost.com/wiki/Continuous_Integration<br />
<br />
1) Continuous Integration environment<br />
There should only be one official CI repository.<br />
The CI repository should be a clone of the central bare .git repo.<br />
The central bare .git repo should have a git post-receive hook that pushes all code to the CI repo as soon as it is received.<br />
[atci cwatch] will watch for receipt of new code pushes, and stop+rebuild+import+restart the CI server in test mode.<br />
The script will copy production data, massaged to run in test mode.<br />
<br />
2) Get ci status<br />
[atci cconsole] will restore the screen of the running ci server<br />
[atci cstatus] TODO TOTHINK: will be called via php via ajax from the main development webpage (bitpost.com)<br />
to query the CI server and report its one-line status 24/7. Should include build/run/test status.<br />
<br />
3) Development environment<br />
During "fast" code development:<br />
[atci dwatch] will watch for code saves, and stop+rebuild+import+restart the dev server in test mode.<br />
 [atci dconsole] will restore the screen of the running dev server.<br />
[atci dstatus] will give a one-line status of the running dev server.<br />
During "careful" code development, the developer can call substeps to do these specific tasks. Common tasks:<br />
[atci dbuild] builds release mode.<br />
[atci dbuild debug] builds debug mode.<br />
To sync changes:<br />
[atci ds] top-level dev script to commit+tag dev.<br />
See usage for the complete list.<br />
<br />
Call these in one of two ways: [atci cmd ...] or [atcmd ...]<br />
See https://bitpost.com/news for more bloviating. Happy trading! :-)<br />
<br />
</pre><br />
<br />
=== Thread locking model ===<br />
OLD model was to do async operations, sending them through the APIRequestCache. The problem with that was that the website could not give immediate feedback. FUCKING WORTHLESS. New model uses the same exact locking, just does it as needed, where needed. We just need to choose that wisely.<br />
<br />
* Lock at USER LEVEL, as low-level as possible, but as infrequently as possible - not necessarily easy<br />
* Lock container reads with reads lock, which allows multiple reads but no writing<br />
// Lock user for reading<br />
boost::shared_lock<boost::shared_mutex> lock(p_user->rw_mutex_);<br />
* Lock container writes with exclusive write lock<br />
// Lock user for writing<br />
boost::lock_guard<boost::shared_mutex> uniqueLock(p_user->rw_mutex_);<br />
<br />
There is also a global mutex, for locking during AppUser container operations, etc.<br />
* reading<br />
boost::shared_lock<boost::shared_mutex> lock(g_p_local->rw_mutex_); <br />
* writing<br />
boost::lock_guard<boost::shared_mutex> uniqueLock(g_p_local->rw_mutex_);<br />
<br />
=== DAILY MAINTENANCE ===<br />
* Data is segmented into files, one per day<br />
* to determine end-of-day, timestamps are checked as they come in with quotes (we had no better way to tell)<br />
* At end of day, perform maintenance<br />
** perform maintenance only once, checking for existing filename with date of "today"<br />
** purge nearly all quotes and bracket events, while ensuring the new database remains self-contained<br />
*** preserve last-available quote for all stocks<br />
*** create a fresh new starting snapshot of all accounts using the preserved quotes<br />
** postpone next quote retrieval until market is ready to open again<br />
Pseudo:<br />
EtradeInterface::processQuotes()<br />
if (patc_->bTimestampIsOutsideMarketHours(pt))<br />
patc_->checkCurrentTimeForOutsideMarketHours()<br />
---<br />
checkCurrentTimeForOutsideMarketHours<br />
// Do not rely on quote timestamps.<br />
ptime ptnow = second_clock::local_time();<br />
<br />
if (bMarketOpenOrAboutTo(ptnow))<br />
return false;<br />
<br />
// If we are just now switching to outside-hours, immediately take action.<br />
set_quotes_timer_to_next_market_open(); // This will cause the quotes timer to reset to a loooong pause.<br />
if (bTimestampIsAfterHours(ptnow)) // must be AFTER not before<br />
if (g_p_local->performAfterHoursMaintenance(ptnow))<br />
// Always start each new day with a pre-market account snapshot, <br />
pa->addSnapshotToMemory(snaptime);<br />
runAnalysis();<br />
<br />
=== PRODUCTION ASSETS ===<br />
<br />
Due to potentially large sizes, I moved all bitpost production live assets to the software raid. Extra log backups are in logs folder. Extra db backups are in db_archive folder.<br />
<br />
at_server_live.log -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/logs/at_server_live.log<br />
db_analysis -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/db_analysis<br />
db_archive -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/db_archive<br />
<br />
=== ANALYSIS ===<br />
<br />
==== AnalysisData Bracketing Overview ====<br />
Bracket vars:<br />
"ph1_initial_drop_high": 0.9986,<br />
"ph1_initial_drop_low": 0.9571,<br />
"ph1_initial_drop_steps": 4,<br />
"ph2_initial_rise_high": 1.0429,<br />
"ph2_initial_rise_low": 1.0014,<br />
"ph2_initial_rise_steps": 4,<br />
"ph3_loss_drop_high": 0.85,<br />
"ph3_loss_drop_low": 0.9929,<br />
"ph3_loss_drop_steps": 10,<br />
"ph3_profit_rise_high": 1.12,<br />
"ph3_profit_rise_low": 1.0071,<br />
"ph3_profit_rise_steps": 10,<br />
"ph4_profit_harvest_high": 0.9571,<br />
"ph4_profit_harvest_low": 0.9986,<br />
"ph4_profit_harvest_steps": 4,<br />
<br />
APS monte carlo steps are taken from that.<br />
<br />
===== steps =====<br />
There is a limit to the total number of steps that can be run overnight; the step counts above fit within that limit. We should keep increasing the step count where possible, by optimizing and by improving hardware.<br />
<br />
===== bracket constraints =====<br />
`high` and `low` vars are given, then adjusted using a standard deviation, but within limits to keep meaningless tiny trades from happening. (See the sketch after the example values.)<br />
<br />
Example values:<br />
"ph1_initial_drop_high": 0.9986,<br />
"ph1_initial_drop_low": 0.9571,<br />
"ph1_initial_drop_steps": 4,<br />
"ph2_initial_rise_high": 1.0429,<br />
"ph2_initial_rise_low": 1.0014,<br />
"ph2_initial_rise_steps": 4,<br />
"ph3_loss_drop_high": 0.85,<br />
"ph3_loss_drop_low": 0.9929,<br />
"ph3_loss_drop_steps": 10,<br />
"ph3_profit_rise_high": 1.12,<br />
"ph3_profit_rise_low": 1.0071,<br />
"ph3_profit_rise_steps": 10,<br />
"ph4_profit_harvest_high": 0.9571,<br />
"ph4_profit_harvest_low": 0.9986,<br />
"ph4_profit_harvest_steps": 4,<br />
<br />
==== TRENDLINE (REFACTOR SIX) ====<br />
<br />
Keep focus on the final objective:<br />
* I want to SEE A STOCK'S CYCLE, CLEARLY, so I can harvest it<br />
* the cycle has to be around a base trendline<br />
<br />
We will use recent non-reduced data.<br />
<br />
We need to dynamically display, to clue us in to the big picture.<br />
<br />
More to come.<br />
<br />
==== ANALYZE PSEUDO (REFACTOR FOUR) ====<br />
<br />
* we run autoanalysis for every known symbol, with minimal interaction from user - they just decide to use it or not<br />
* we do a standard deviation on the data, and a monte-carlo-like loop through ranges of APS values based on sd multipliers<br />
* we are working toward a distributed microservice approach with many analysis engines across LAN<br />
<br />
Function overview:<br />
bool AtController::load_startup_data()<br />
if (b_analyze_on_startup_)<br />
getAnalyzerController().requeue_analyses();<br />
bool AtController::checkCurrentTimeForOutsideMarketHours()<br />
if (getModel().bMarketOpenOrAboutTo(ptnow))<br />
return false;<br />
if (!b_after_hours_)<br />
b_after_hours_= true;<br />
// Attempt maintenance and analysis, but only if we are AFTER hours (not before).<br />
if (getModel().bTimestampIsAfterHours(ptnow))<br />
getModel().performAfterHoursMaintenance(ptnow);<br />
<br />
void AnalyzerController::requeue_analyses()<br />
stop_and_clear_jobs();<br />
fill_all_job_slots();<br />
<br />
=== REPORTING JSON ===<br />
<br />
PATTERN<br />
-------<br />
bool handler()<br />
mm().readXxxJson(json) (done in derived model)<br />
for r<br />
buildAccountCyclesPerformanceJSONRow( (done in MM)<br />
row["snapshot_action"].as<int64_t>()<br />
) <br />
archive().readXxxJson(json) (done in derived model)<br />
for row<br />
buildAccountCyclesPerformanceJSONRow( (done in MM)<br />
query.getColumn(0).getInt64(),<br />
<br />
FOLLOW OUR PATTERN WITH all handlers:<br />
* PostAccountPerformanceCycles<br />
<br />
AtHttpServer::PostAccountPerformanceCycles() <br />
readAccountCyclesPerformanceJSON (derived models)<br />
buildAccountCyclesPerformanceJSONRow (mm)<br />
<br />
* GetAccountPerformance<br />
* PostAccountPerformance<br />
NOTE that these ones actually completely harvest the data first,<br />
due to reporting requirements (JSON can't be built directly from db rows)<br />
<br />
- PostAccountActivity<br />
- PostAccountActivityTable<br />
<br />
=== QAB charts ===<br />
<br />
CHART TIMEFRAME DESIGN<br />
<br />
use cases:<br />
user wants to see performance across a variety of time frames <- PERFORMANCE PAGE only!<br />
user wants to see historical brackets for older days <- PERFORMANCE PAGE only! lower priority!<br />
user wants to perform immediate actions on realtime chart<br />
user wants to do autoanalysis across a range and then manually tweak it<br />
<br />
requirements<br />
round 1: we can satisfy everything with TODAY ONLY (show today archive if after hours)<br />
round 2: add a separate per-day performance chart<br />
round 3: add a date picker to the chart to let the user select an older day to show<br />
<br />
node reduction<br />
data DISPLAY only needs to show action points and highs/lows<br />
aggressively node-reduce to fit the requested screen size!<br />
given: number of pixels of width<br />
provide: all bracket quotes plus the lowest+highest quotes in each 2-pixel range (minimum - adjustable to more aggressive clipping if desired)<br />
internal data ANALYSIS should use all points<br />
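<br />
A sketch of that reduction rule (RunQAB here is a stand-in for the real struct under MODEL, and the bucketing math is an assumption):<br />
<pre><br />
#include <algorithm><br />
#include <cstddef><br />
#include <vector><br />
<br />
struct RunQAB { double quote; long time_ms; bool has_bracket; };  // stand-in: quote+time + optional bracket<br />
typedef std::vector<RunQAB> RunHistory;<br />
<br />
// Keep every bracket point, plus the lowest and highest quote in each ~2-pixel bucket.<br />
RunHistory reduceForDisplay(const RunHistory& rh, int pixel_limit) {<br />
    RunHistory out;<br />
    if (rh.empty()) return out;<br />
    const size_t buckets = std::max(1, pixel_limit / 2);<br />
    const size_t per = std::max<size_t>(1, rh.size() / buckets);<br />
    for (size_t b = 0; b < rh.size(); b += per) {<br />
        const size_t end = std::min(rh.size(), b + per);<br />
        size_t lo = b, hi = b;<br />
        for (size_t i = b; i < end; ++i) {<br />
            if (rh[i].has_bracket) out.push_back(rh[i]);  // action points always survive<br />
            if (rh[i].quote < rh[lo].quote) lo = i;<br />
            if (rh[i].quote > rh[hi].quote) hi = i;<br />
        }<br />
        out.push_back(rh[lo]);<br />
        if (hi != lo) out.push_back(rh[hi]);<br />
    }<br />
    return out;  // real code would restore time order and dedupe<br />
}<br />
</pre><br />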
<br />
CHART DATA RETRIEVAL<br />
<pre><br />
function addStock(cycle) {<br />
restAndApply('GET','runs/'+cycle.run+'/live.json?pixel_limit='+$(window).width()*2...<br />
---<br />
void AtHttpsServer::GetRunLive(API_call& call)<br />
g_p_local->readRunLive(p_user->db_id_, account_id, run_id, symbol, pixel_limit, atc_.bAfterHours(), rh);<br />
atc_.thread_buildRunJSON(rh,*sr.paps_,run_id);<br />
<br />
var analysisChange = function(event) {<br />
$.getJSON('runs/'+run+'/analysis.json?pixel_limit='+$(window).width()*2+'&aggressiveness='+event.value, function( data ) {<br />
---<br />
AtHttpsServer::GetRunAnalysis(API_call& call)<br />
atc_.thread_handleAnalysisRequest(<br />
---<br />
AtController::thread_handleAnalysisRequest(BrokerAccount& ba,int64_t run_id,bool b_autoanalyze,double d_aggressiveness,int32_t pixel_limit,string& http_reply)<br />
g_p_local->readRunHistory()<br />
thread_analyzeHistory()<br />
thread_buildRunJSON(rh,apsA,apsA.run_id_);<br />
<br />
-- NOT CURRENTLY CALLED --<br />
function displayHistory(run)<br />
$.getJSON('runs/'+run+'/history.json?pixel_limit='+$(window).width()*2+'&days=3', function( data ) {<br />
---<br />
AtHttpsServer::GetRunHistory(API_call& call)<br />
g_p_local->readRunHistory(p_user->db_id_,account_id,run_id,symbol,days,sr.paps_->n_analysis_quotes_per_day_reqd_,pixel_limit,rh);<br />
atc_.thread_buildRunJSON(rh,*sr.paps_,run_id);<br />
</pre><br />
<br />
=== DEBUG LIVE ===<br />
NOTE that you WILL lose stock quote data during the debugging time, until we set up a second PROD environment.<br />
* WRITE INITIAL DEBUG CODE in any DEV environment<br />
* UPDATE DEBUGGER to run with [live] parameter instead of debug ones<br />
* COPY DATABASE directly from PROD environment to DEV environment: atimport prod<br />
* STOP PROD environment at_server<br />
* DEBUG. quickly. ;-)<br />
* PUSH any code fix (without debug code) back to PROD env<br />
* RESTART PROD and see if the fix worked<br />
* REVERT DEV environment: clean any debug code, redo an atimport and reset the debugger parameters<br />
<br />
=== HTML SHARED HEADER/FOOTER ===<br />
<pre><br />
-------------------------------------------------------------------------------<br />
THREE PARTS THAT MUST BE IN EVERY HTML FILE:<br />
-------------------------------------------------------------------------------<br />
<br />
1) all code above <container>, including these replaceables:<br />
a) logout: <button type='button' id='logout' class='btn btn-margined btn-xs btn-primary pull-right'>Log out</button><br />
b) breadcrumbs: <!--bread--><li><a href="/1">1</a></li><li class="active">2</li><!--crumbs--><br />
2) logout button handler<br />
$( document ).ready(function() {<br />
3) footer and [Bootstrap core Javascript]<br />
<br />
what a maintenance nightmare - but it seems best to do all 10-12 files manually<br />
-------------------------------------------------------------------------------<br />
</pre><br />
<br />
=== HAPROXY and LOAD BALANCING ===<br />
<br />
For the first 1000 paid users, we will NOT do load balancing.<br />
* Use haproxy Layer 7 (http) load balancing to redirect abettertrader.com requests to a bitpost.com custom port.<br />
<br />
For load balancing, there are two database design choices:<br />
* Each server gets its own quotes and saves all its own data<br />
** Need to read user id from each request and send each user to a predetermined server<br />
** Need multiple Etrade accounts, one for each server, unless we get a deal with Etrade<br />
* Switch to a distributed database with master-master replication<br />
** A lot of work<br />
** Might kill sub-second performance? Might not. We already have delayed-write.<br />
<br />
<br />
=== TIMESTAMP STANDARDIZATION ===<br />
<br />
Standardize internal times as int64_t milliseconds since 1970 in UTC. That's not ideal, as it doesn't deal with leap seconds, but it makes our time handling code much faster, so it's worth the tradeoff.<br />
<br />
Display times in local time.<br />
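<br />
A sketch of the convention (std::chrono's system_clock counts from 1970 UTC):<br />
<pre><br />
#include <chrono><br />
#include <cstdint><br />
<br />
// Current time as int64_t milliseconds since 1970 UTC.<br />
int64_t now_ms_utc() {<br />
    using namespace std::chrono;<br />
    return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();<br />
}<br />
</pre><br />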
<br />
{| class="wikitable"<br />
|+Timestamp TODO<br />
|-<br />
|Chart URL<br />
|The time should default to 9:30am EST which should display in URL as something like: 2019-09-13T13:30:00.000Z<br />
|-<br />
|Performance page<br />
|?<br />
|-<br />
|Activity page<br />
|?<br />
|}<br />
<br />
=== OLDER NOTES ===<br />
<br />
==== WALKING DATABASE FILES ====<br />
<br />
(we are moving to postgres for archiving!)<br />
<br />
There are two types of historical requests:<br />
* a specific date range, usually requested by user; use this:<br />
getDatabaseNames(startdate,enddate)<br />
<br />
* a specific number of days, usually requested by analysis; loop with this:<br />
getPreviousDatabaseName(dt,db_name)<br />
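<br />
Rough pseudo for that loop (assuming getPreviousDatabaseName steps dt back to the previous file; analyzeDay is a hypothetical consumer):<br />
 ptime dt = second_clock::local_time();<br />
 string db_name;<br />
 for (int day = 0; day < days_requested; ++day)<br />
     if (getPreviousDatabaseName(dt, db_name))<br />
         analyzeDay(db_name);<br />
     else<br />
         break;   // ran out of files<br />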
<br />
* there is also this, which skips non-market days, but we want them for now when walking db files:<br />
get_previous_market_day(pt)<br />
<br />
Move to mongo soon! :-)<br />
<br />
==== API PSEUDO (no longer of much use) ==== <br />
<br />
APIGetRunLive::handle_call()<br />
g_p_local->getRunLiveJSON()<br />
readRunQAB(s_str_db_name...)<br />
(SAME as readRunLive!!)<br />
<br />
APIGetRunHistory::handle_call()<br />
g_p_local->getRunHistoryJSON()<br />
readRunHistory<br />
readRunQAB<br />
<br />
==== Qt Creator settings (moved on > CLion > vs code) ==== <br />
* Make sure you have already run [atbuild] and [atbuild debug].<br />
* Open CMakeLists.txt as a Qt Creator project.<br />
* It will force you to do CMake - pick cmake-release folder and let it go.<br />
* Rename the build config to debug.<br />
* Clone it to release and change folder to release.<br />
* Delete make step and replace it with custom build:<br />
./build.sh<br />
(no args)<br />
%{buildDir}<br />
* Create run setups:<br />
you have to use hardcoded path to BASE working dir (or leave it blank maybe?): <br />
<br />
/home/m/development/thedigitalage/AbetterTrader/server<br />
<br />
[x] run in terminal<br />
I recommend using TEST args for both debug and release: localhost 8000 test reanalyze (matches attest)<br />
LIVE args may occasionally be needed for use with [atimport prod]: localhost 8080 live (matches atlive)<br />
<br />
==== MONTHLY MANUAL MAINTENANCE ====<br />
<br />
(This is now available via the admin Summary page.)<br />
<br />
Automate as much as possible, but this is not that bad and safer to do manually when we know the time is right:<br />
monthly db maintenance<br />
<br />
just monthly:<br />
update PrefStr set value = "JANUARY 2018 LEADERBOARD" where name="LeaderboardTitle";<br />
update PrefStr set value = "Leader at 1/31 closing bell wins<br />December winner: cfjaques Go Cara!" where name="LeaderboardDescription";<br />
update Accounts set leaderboard_initial_value = total_managed_value;<br />
 update AnalysisData set avg_pct_gain_mtd = 0, sells_count_mtd = 0;<br />
 update StockRuns set avg_pct_gain_mtd = 0, sells_count_mtd = 0;<br />
<br />
AND ANNUAL! HAPPY NEW YEAR 2018!<br />
<br />
update PrefStr set value = "JANUARY 2018 LEADERBOARD" where name="LeaderboardTitle";<br />
update PrefStr set value = "Leader at 1/31 closing bell wins<br />December winner: cfjaques Go Cara!" where name="LeaderboardDescription";<br />
update Accounts set leaderboard_initial_value = total_managed_value, year_initial_value = total_managed_value;<br />
 update AnalysisData set avg_pct_gain_ytd = 0, sells_count_ytd = 0, avg_pct_gain_mtd = 0, sells_count_mtd = 0;<br />
 update StockRuns set avg_pct_gain_ytd = 0, sells_count_ytd = 0, avg_pct_gain_mtd = 0, sells_count_mtd = 0;<br />
<br />
if you need to hard-reset all the simulation accounts, do this and restart to fix all CASH:<br />
update Accounts set initial_managed_value = 100000, total_managed_value = 100000, net_account_value = 100000 where broker_id=2;<br />
<br />
== Developer environment setup ==<br />
<br />
The setup script is [at setup [nopostgres]]. It can be used for initial setup, and rerun to upgrade components like boost, SWS, SWSS, and postgres.<br />
<br />
=== First time install ===<br />
* First, set up YET ANOTHER GODDAMN BOX:<br />
setup_linux.sh [desktop | nodesktop]<br />
<br />
* Clone the good stuff<br />
mh c Get all the goodness<br />
<br />
=== Dependencies ===<br />
<br />
==== Choose local postgres server or remote ====<br />
You can choose to install a full postgres server (usually desired on a laptop):<br />
at setup<br />
Or just install postgres client, and point your dev installation at another postgres server installation (typically use positronic if you're on the LAN):<br />
at se nopostgres<br />
<br />
==== To upgrade boost ====<br />
* update the [[Development reference | boost]] version in .bashrc<br />
* [cdl] and remove existing boost install<br />
* rerun [at se] or [at se nopostgres] as appropriate<br />
<br />
==== libpqxx ====<br />
We use our own fork of pqxx, github:moodboom<br />
<br />
I keep the fork updated on cast.<br />
We keep a repo of it in development/Libraries/c++/libpqxx/source/libpqxx<br />
We keep a repo of the parent that we forked from, here:<br />
development/Libraries/c++/libpqxx/source/libpqxx-jvt-parent<br />
It is a straight git-clone of the jvt repo.<br />
To rebase on top of the latest parent release:<br />
cd libpqxx-jvt-parent<br />
git pull<br />
git reset --hard tags/7.7.0 # or whatever latest release is<br />
cd ../libpqxx<br />
# this should already be done:<br />
# git remote add jvt-parent ../libpqxx-jvt-parent<br />
# git checkout -b jvt-parent<br />
# git fetch --all<br />
# git branch --set-upstream-to=jvt-parent/master jvt-parent<br />
git checkout jvt-parent && git pull<br />
git checkout master && git rebase jvt-parent<br />
# fix up the merge and commit and push -f!<br />
To force-push all the way back to github:<br />
🤔 m@morosoph [~/development/Libraries/c++/libpqxx.git] git push --set-upstream origin master -f<br />
<br />
==== Simple-Web-Server ====<br />
I keep my own fork on gitlab.<br />
I pull parent fork changes in on cast.<br />
<br />
To get a new release going:<br />
cd development/Libraries/c++/Simple-Web-Server<br />
git branch<br />
eidheim-parent<br />
master<br />
release/abt-0.0.3<br />
* release/abt-0.0.4<br />
git checkout -b release/abt-0.0.5<br />
# make sure development/Libraries/c++/Simple-Web-Server-eidheim has most recent commits pulled <br />
git checkout eidheim-parent<br />
git pull<br />
git checkout release/abt-0.0.5<br />
git rebase eidheim-parent<br />
git push --all<br />
<br />
Something like that, anyway :P<br />
<br />
==== Simple-WebSocket-Server ====<br />
Similar to SWS.<br />
<br />
=== Clone prod database ===<br />
You can easily pull a sanitized copy of prod down for local usage. It will use the dev account instead of prod. No reason not to do this OFTEN!<br />
<br />
Also, you probably don't need quotes! Those are HUGE. It's fast if you skip them.<br />
<br />
ssh positronic<br />
mh-add-postgres-db at_whatevs<br />
at dump noquotes<br />
at clone positronic-at_live-noquotes-2022-05-15-162423 at_whatevs<br />
# set up a launch.json block to use it - probably with "offline" too<br />
<br />
== [[Trading]] ==</div>Mhttps://bitpost.com/w/index.php?title=A_better_Trader&diff=7458A better Trader2024-03-12T02:43:50Z<p>M: </p>
<hr />
<div>[https://abettertrader.com LIVE SITE] -- [http://localhost:8080 DEV SITE]<br />
<br />
[https://bitpost.com/files/AbT_docs/ Generated Documentation]<br />
<br />
[[A better Trader testing]]<br />
<br />
== Use Case ==<br />
<br />
ALL WE ASK OF USERS IS TO WATCH THE NEWS AND...<br />
<br />
-----------------------------------------------------<br />
BE INTUITIVE ABOUT A STOCK'S NEXT DIRECTION<br />
-----------------------------------------------------<br />
<br />
IF A PICKED IS LOOKING GOOD: let it ride; actions: buy; reset the bracket to hold it longer<br />
IF A PICKED IS LOOKING BAD: delete it - there are more fish in the sea, get OUT<br />
IF AN OWNED IS LOOKING GOOD: let it ride; actions: sell; hold<br />
IF AN OWNED IS LOOKING BAD: let it ride; actions: sell and drop, raise bracket<br />
REGULARLY ADD AND REMOVE PICKS<br />
<br />
THAT'S THE DAILY USAGE PATTERN<br />
we must do everything else! <br />
all that boring analysis should be done for them, unless they really want to obsess<br />
<br />
== Design ==<br />
<br />
=== Events ===<br />
<br />
* Types<br />
** this account's snapshots<br />
** significant highlights (across all accounts and cycles); NOTE to make this performant, we only use snapshots (all types) to find these highlights; snapshots should happen on every major change tho! Cool.<br />
** news (TBD)<br />
* Provide more detail for monthly-or-below, and less for above-monthly<br />
* The server consolidates data into all "significant" events for the timeline<br />
* The user can filter events on the client for readability in a small UI, but they are always available for the given timeframe so filter changes can be quickly applied<br />
<br />
==== Monthly ====<br />
<br />
===== Realtime events =====<br />
* All user account snapshot events<br />
* "significant" changes to any account or cycle<br />
<br />
(MORE TODO)<br />
<br />
===== Daily-batched events =====<br />
(MORE TODO)<br />
<br />
==== Above-Monthly ====<br />
* Only provide account SELLS. Easy to query, plenty of data.<br />
(MORE TODO)<br />
<br />
== Patterns ==<br />
<br />
=== Basics ===<br />
<br />
* Load from SQL/nosql tables into [[Major Objects]]<br />
* Use a tight single-threaded message loop with async timers that set booleans when it is time to perform tasks<br />
* Offload heavy automated analysis to after-hours external analyzers, with the goal of applying yesterday's best fit to today<br />
* Server should provide minimal concise data to client, and client Javascript should do all UI rendering work.<br />
** Traditionally, we would inject const variables into html<br />
** Best practice but more of a refactor: make three separate calls to server to get html, javascript, and data. The html and javascript become cacheable static files.<br />
<br />
=== Object model ===<br />
<br />
JSON schema is used to generate base data classes and database read/write code, to provide the most agile schema refactoring. Follow these patterns to keep it consistent:<br />
<br />
* use a constructor with defaults for all parameters:<br />
// This constructor serves several purposes:<br />
// 1) standard full-param constructor, efficient for both deserializing and initializing<br />
// 2) no-param constructor for reflection via quicktype<br />
// 3) id constructor for loading via id + quicktype fields<br />
// 4) id constructor for use as key for unsorted_set::find()<br />
BrokerAccount(<br />
int64_t db_id = PersistentIDObject::DBID_DO_NOT_SAVE, <br />
int64_t aar_add_arca_enabled = -1,<br />
...<br />
:<br />
// Call base class<br />
inherited(ba_max_db_id_,db_id),<br />
<br />
// internal members<br />
...<br />
{<br />
// persistent members<br />
...<br />
}<br />
<br />
* There are three use cases for new objects:<br />
** objects about to be loaded - use constructor params, and load in a value for db_id_<br />
** new objects that should be made persistent - track a max_db_id in the parent to provide the "next" db_id constructor parameter<br />
** temporary objects - use the default constructor<br />
<br />
* use addXXXToMemory() to init parent-child relationships<br />
StockRun& BrokerAccount::addStockRunToMemory(StockRun* psr, bool bInsertNewRank)<br />
psr->setParent(*this);<br />
runsByRank_.push_unsorted(psr);<br />
runs_.insert(psr);<br />
<br />
BrokerAccount &AppUser::addBrokerAccountToMemory(BrokerAccount *pba)<br />
// We need a valid db id.<br />
assert(pba->db_id_ != PersistentIDObject::DBID_UNSAVED);<br />
<br />
// The caller is responsible for ensuring the account doesn't exist.<br />
assert(findBrokerAccount(pba->db_id_) == accounts_.end());<br />
<br />
pba->setParent(*this);<br />
accounts_.insert(pba);<br />
accountsByStringId_.insert(pba);<br />
return *pba;<br />
<br />
* use pointers for parent objects<br />
set always-and-only once, on load<br />
use functions that return a reference, to use those objects<br />
example:<br />
// EXTERNAL REFERENCES<br />
// NOTE we are not responsible for these allocations.<br />
// Access pointers to parents as references.<br />
 // Access nullable pointers directly.<br />
void setParent(AppUser& au) { pau_ = &au; }<br />
AppUser& au() { assert(pau_ != 0); return *pau_; }<br />
AppUser& au() const { assert(pau_ != 0); return *pau_; }<br />
<br />
* squash 1:1 contained members into parent<br />
StockRun (Cycle):<br />
flatten these:<br />
Stock stock_;<br />
StockPick sp_;<br />
AutotradedStock as_;<br />
<br />
* store contained containers/vectors in separate tables, and fill in secondary pass<br />
StockRun (Cycle):<br />
BracketEvents sbe_;<br />
<br />
=== Web UI ===<br />
<br />
* bootstrap header and footer<br />
* possible subsection navbar (eg Accounts, Admin) that sticks to top with bootstrap header<br />
* forms: <br />
** For the simplest forms, just use inputs and a button, and capture the click in js. Don't use <form> as it is hard to prevent the default behavior.<br />
** For any substantial multi-field form, use helpers in at.js<br />
* tables: we define tables entirely in JSON and pop them up with bootstrap-table, see AnalysisData getJSON, getJSONColumnNames<br />
* ajax: see at.js<br />
* patch can provide partial JSON to do partial updates (don't touch fields that are not provided)<br />
* dates: moment.js - need to convert d3 date functions to moment<br />
* money: accounting.js<br />
<br />
== Documentation ==<br />
<br />
=== Performance Tracking ===<br />
<br />
We measure three types of performance:<br />
* SR: As a stock cycles, it tracks the %gain on each buy-sell; Each cycle may have a diff # of stocks but aggregating %change-per-cycle should have value<br />
* BA: This is where we can say "at time T1 we had value X; at time T2 we had value Y" and get precise gains.<br />
* AD: This is used across many cycles and accounts and needs aggregation similar to SR.<br />
<br />
* SR and AD performance should be avg-percentage based as there is no base "value" like with BA<br />
** use an "average gain per buy-sell cycle" => d_avg_pct_gain_[GTT_COUNT] + sells_count_[GTT_COUNT]<br />
<br />
* cycle stopsells may need closer inspection<br />
** track stopsells; perhaps red-flag at 1 stopsell, then bail on 2?<br />
** cycle stopsell should (eventually?) flag the aps as in critical need of an update via reanalysis of recent history; rerun analysis, then reset the stopsell to zero<br />
* track performance of ad<br />
** nothing to do with need to rerank or reanalyze<br />
** but just so we learn over time what the best metaranges are<br />
** we want to keep working in this area, expanding as it makes sense<br />
** eg separate stocks' volatility by price, volume, market...etc.<br />
<br />
=== [https://bitpost.com/files/AbT_docs/index.html Doxygen and SchemaSpy diagrams] ===<br />
<br />
SchemaSpy was used on Sqlite relationships to generate a nice [https://bitpost.com/files/AbT_docs/schemaspy/output/sqlite/relationships.html foreign key map].<br />
<br />
Doxygen shows class diagrams. Here are some central relationships:<br />
<br />
* [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_persistent_object.html PersistentObject] class hierarchy shows all persistent classes<br />
* [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_at_http_server.html AtHttpServer] has call graphs for functions like [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_at_http_server.html#abf8c8e1e8d1d0ba417470d6298e804cc GetAccount]<br />
* [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_broker_account.html BrokerAccount] [https://bitpost.com/files/AbT_docs/doxygen/output/html/class_broker_account.html#ae8315bd08d9f69cedaef04ca4d1d1c97 ProcessQuote] call hierarchy<br />
<br />
=== MODEL ===<br />
<br />
AtController<br />
ui_<br />
memory_model_<br />
timers<br />
<br />
MemoryModel: delayed-write datastore manager; uses dirty flags + a timed, transactioned saveDirtyObjects() call (sketch below)<br />
prefs<br />
sq_<br />
brokers<br />
aps_<br />
users_<br />
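<br />
A hedged sketch of that pattern (PersistentObject and saveDirtyObjects are names from these notes; the bodies here are assumptions):<br />
<pre><br />
#include <vector><br />
<br />
// Setters mark objects dirty; a timer flushes them all in one transaction.<br />
class PersistentObject {<br />
public:<br />
    bool bDirty_ = false;<br />
    virtual void save() = 0;    // actual SQL lives in derived classes<br />
    virtual ~PersistentObject() {}<br />
};<br />
<br />
void saveDirtyObjects(std::vector<PersistentObject*>& objects) {<br />
    // beginTransaction();      // hypothetical db call<br />
    for (PersistentObject* po : objects)<br />
        if (po->bDirty_) { po->save(); po->bDirty_ = false; }<br />
    // commitTransaction();     // hypothetical db call<br />
}<br />
</pre><br />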
<br />
TradingModel<br />
AppUser<br />
BrokerAccount: <br />
runs_ (sorted by id) <br />
runsByRank_ <br />
contains all cycles sorted by rank<br />
ONLY MANUAL has valid rank values<br />
but additional sort criteria is used to handle all cycles<br />
see StockRunsByRank_lessthan() <br />
broker (not worthy of its own layer)<br />
StockRun: rank, active, owned<br />
StockPick+AutotradedStock: quote processing<br />
SPASBracketEvent: stores one bracket-change event; includes triggering quote, bActive, buy/sell {quantity, commission, value}<br />
StockSnapshot: run, symbol, quote, quantity (for account snapshot history)<br />
<br />
Analysis data<br />
StockQuote <br />
db only has time+symbol+price <br />
memory also has p_aad<br />
AutoAnalysisData<br />
symbol<br />
aps_id<br />
stopsells<br />
profitable_sells <br />
StockRun<br />
aps<br />
AutotradeParameterSet<br />
bracket params<br />
many analysis vars - move these out to aad <br />
BracketEvent<br />
contains all details about any possible event<br />
for report and analysis we have a tighter version: <br />
typedef vector<RunQAB> RunHistory<br />
RunQAB is just quote+time + optional Bracket ptr<br />
<br />
Order lifespan<br />
Sim and analysis buy: place order, wait for next stock quote, buyWasExecuted()<br />
Live buy: place order, poll for execution, buyWasExecuted()<br />
<br />
Stock model<br />
Stock<br />
StockQuote& quote_;<br />
typedef std::pair<const string,StockQuoteDetails*> StockQuote;<br />
StockQuoteDetails<br />
double d_quote_;<br />
time_t timestamp_;<br />
(+spike logic)<br />
int64_t n_quantity_;<br />
StockOrder so_;<br />
int64_t order_id_;<br />
ORDER_STATUS status_;<br />
int64_t quote_db_id_;<br />
<br />
=== QUOTE PROCESSING ===<br />
<br />
AlpacaQuotesWss::startWss() on_message for EVERY T QUOTE WE GET<br />
AlpacaQuotesWss::processQuote<br />
quotes_.push_back(q);<br />
<br />
AlpacaInterface::getQuotes()<br />
if (p_quotes_wss_->drainQuotesQueue(latest_quotes))<br />
return vetAndProcessQuotes(latest_quotes);<br />
---<br />
patc_->mm().processQuote(<br />
<br />
^^^ all that does nothing but provide raw quotes, no worries there<br />
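<br />
A self-contained sketch of that handoff, assuming a simple mutex-guarded queue (method names follow the notes above; everything else is an assumption):<br />
<pre><br />
#include <mutex><br />
#include <string><br />
#include <vector><br />
<br />
struct QuoteQueue {<br />
    std::mutex mutex_;<br />
    std::vector<std::string> quotes_;    // raw quote messages<br />
<br />
    // Called from on_message for every T quote.<br />
    void processQuote(const std::string& q) {<br />
        std::lock_guard<std::mutex> lock(mutex_);<br />
        quotes_.push_back(q);<br />
    }<br />
    // Called from getQuotes(); pass an empty vector, it receives the batch.<br />
    bool drainQuotesQueue(std::vector<std::string>& latest_quotes) {<br />
        std::lock_guard<std::mutex> lock(mutex_);<br />
        latest_quotes.swap(quotes_);<br />
        return !latest_quotes.empty();<br />
    }<br />
};<br />
</pre><br />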
<br />
bool MemoryModel::processQuote<br />
bool bReset = !sqd.bValid() || sqd.resetBracketsAtMarketOpen_;<br />
if ( bRealtime() ) bProcess = sqd.addToSpikeHistory(dQuote, timestamp);<br />
if ( bReset || bProcess )<br />
---<br />
pa->processQuote(sq, bReset);<br />
<br />
BrokerAccount::processQuote(<br />
// skip quote if APS is not ready!<br />
if (!psr->aps().bIsReady() && au().mm().bRealtime()) continue;<br />
<br />
if (broker().bSimulation() && psr->bBuyOrderPending())<br />
psr->executeBuyOnQuote();<br />
else if (broker().bSimulation() && psr->bSellOrderPending())<br />
psr->executeSellOnQuote();<br />
else if (psr->processQuoteForBuy(sq))<br />
vsrToBuy.push_back(psr);<br />
else if (psr->processQuoteForSell(sq))<br />
vsrToSell.push_back(psr);<br />
<br />
for (StockRun *psr : vsrToSell)<br />
sell(*psr);<br />
<br />
for (StockRun *psr : vsrToBuy)<br />
// Try to buy until we're full!<br />
if (bMaxOpenOrders() || !buy(*psr))<br />
psr->resetBracketOnDelayedBuyAsNeeded();<br />
---<br />
next processing is here:<br />
StockRun::processQuoteForBuy<br />
if (!bUnowned() || bOrderUnresolved()) return false;<br />
if (handlePickInstaspike()) log<br />
else if (bBounceType())<br />
---<br />
bool StockRun::handlePickInstaspike()<br />
if (<br />
sq().second->bSpikeFlatUp(aps().spike_protection_percent_) <br />
|| sq().second->bSpikeFlatDown(aps().spike_protection_percent_))<br />
return true; // Ignore it.<br />
---<br />
bSpikeFlatUp<br />
return ((quote_ - d_last_quote_) / quote_ > dPct);<br />
bSpikeFlatDown<br />
return (abs(d_last_quote_ - d_last_last_quote_) / d_last_quote_ < 0.002) && bSpikeDown(dPct);<br />
bSpikeDown<br />
return ((d_last_quote_ - quote_) / d_last_quote_ > dPct);<br />
---<br />
and here:<br />
StockRun::processQuoteForSell<br />
if (handleOwnedInstaspike()) // log-and-drop<br />
---<br />
StockRun::handleOwnedInstaspike()<br />
if (sq().second->bSpikeUpDown) // reset to last-last<br />
else if (sq().second->bSpikeFlatDown) // ignore it<br />
---<br />
bSpikeUpDown<br />
return ((d_last_quote_ - d_last_last_quote_) / d_last_quote_ > dPct) && bSpikeDown(dPct) && (abs(quote_ - d_last_quote_) / quote_ < 0.002);<br />
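<br />
The predicates read more easily gathered in one place; a self-contained restatement (the wrapper struct is hypothetical, the formulas are verbatim from above):<br />
<pre><br />
#include <cmath><br />
<br />
struct SpikeCheck {<br />
    double quote_, d_last_quote_, d_last_last_quote_;<br />
<br />
    bool bSpikeDown(double dPct) const {<br />
        return (d_last_quote_ - quote_) / d_last_quote_ > dPct;<br />
    }<br />
    bool bSpikeFlatUp(double dPct) const {<br />
        return (quote_ - d_last_quote_) / quote_ > dPct;<br />
    }<br />
    bool bSpikeFlatDown(double dPct) const {<br />
        // last two quotes were flat (within 0.2%), then we spiked down<br />
        return std::abs(d_last_quote_ - d_last_last_quote_) / d_last_quote_ < 0.002<br />
            && bSpikeDown(dPct);<br />
    }<br />
    bool bSpikeUpDown(double dPct) const {<br />
        // spiked up, spiked back down, and landed flat vs the last quote<br />
        return (d_last_quote_ - d_last_last_quote_) / d_last_quote_ > dPct<br />
            && bSpikeDown(dPct)<br />
            && std::abs(quote_ - d_last_quote_) / quote_ < 0.002;<br />
    }<br />
};<br />
</pre><br />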
<br />
=== DEBUG INTO REST API HANDLERS ===<br />
<br />
break on server_http.hpp ln~370: if(REGEX_NS::regex_match(request->path, sm_res, regex_method.first)) {<br />
watch regex_method.first.str<br />
watch request->path<br />
<br />
=== CI ===<br />
<br />
MASTER SCRIPT: atci<br />
<br />
We will have a live site, a constantly running CI site, and multiple dev environments.<br />
<br />
RUN LIVE at bitpost.com:<br />
m@bitpost rs at<br />
m@bitpost # if that doesn't work, start a session: screen -S at <br />
m@bitpost cd ~/development/thedigitalage/AbetterTrader/server-prod<br />
m@bitpost atlive<br />
========================================================<br />
*** LIVE MODE ***<br />
========================================================<br />
CTRL-A CTRL-D<br />
<br />
RUN CI at bitpost.com:<br />
# Keep this running to ensure that changes are dynamically built as they are committed<br />
# It should run at a predictable publicly available url that can be checked regularly<br />
# It runs in TEST but should run in a test mode that has an account assigned to it so it is very much like LIVE<br />
# It runs release build in test mode<br />
CTRL-A CTRL-D<br />
<br />
RUN DEV anywhere but bitpost:<br />
# Dev has complete control; most common tasks:<br />
# Code fast with a local CI loop - as soon as a file is changed, CI should restart server in test mode, displaying server log and [https://addons.mozilla.org/en-US/firefox/addon/auto-reload/ refreshing server page]<br />
# kill server, build, run, refresh browser<br />
# Turn off CI loop to debug via IDE<br />
# Stop prod, pull down production database, run LIVE mode in debugger to diagnose production problems<br />
<br />
<br />
==== Old notes from `at readme` ====<br />
<pre><br />
TODO NEEDS WORK!!<br />
<br />
1) Set up a new dev environment: at setup <br />
This function defines continuous integration steps for our project.<br />
See: https://bitpost.com/wiki/Continuous_Integration<br />
<br />
2) Continuous Integration environment<br />
There should only be one official CI repository.<br />
The CI repository should be a clone of the central bare .git repo.<br />
The central bare .git repo should have a git post-receive hook that pushes all code to the CI repo as soon as it is received.<br />
[atci cwatch] will watch for receipt of new code pushes, and stop+rebuild+import+restart the CI server in test mode.<br />
The script will copy production data, massaged to run in test mode.<br />
<br />
3) Get ci status<br />
[atci cconsole] will restore the screen of the running ci server<br />
[atci cstatus] TODO TOTHINK: will be called via php via ajax from the main development webpage (bitpost.com)<br />
to query the CI server and report its one-line status 24/7. Should include build/run/test status.<br />
<br />
4) Development environment<br />
During "fast" code development:<br />
[atci dwatch] will watch for code saves, and stop+rebuild+import+restart the dev server in test mode.<br />
[atci dconsole] will restore the screen of the running dev server.<br />
[atci dstatus] will give a one-line status of the running dev server.<br />
During "careful" code development, the developer can call substeps to do these specific tasks. Common tasks:<br />
[atci dbuild] builds release mode.<br />
[atci dbuild debug] builds debug mode.<br />
To sync changes:<br />
[atci ds] top-level dev script to commit+tag dev.<br />
See usage for the complete list.<br />
<br />
Call these in one of two ways: [atci cmd ...] or [atcmd ...]<br />
See https://bitpost.com/news for more bloviating. Happy trading! :-)<br />
<br />
</pre><br />
<br />
=== Thread locking model ===<br />
OLD model was to do async operations, sending them through the APIRequestCache. The problem with that was that the website could not give immediate feedback. FUCKING WORTHLESS. New model uses the same exact locking, just does it as needed, where needed. We just need to choose that wisely.<br />
<br />
* Lock at USER LEVEL, as low-level as possible, but as infrequently as possible - not necessarily easy<br />
* Lock container reads with a read lock, which allows multiple readers but no writing<br />
// Lock user for reading<br />
boost::shared_lock<boost::shared_mutex> lock(p_user->rw_mutex_);<br />
* Lock container writes with exclusive write lock<br />
// Lock user for writing<br />
boost::lock_guard<boost::shared_mutex> uniqueLock(p_user->rw_mutex_);<br />
<br />
There is also a global mutex, for locking during AppUser container operations, etc.<br />
* reading<br />
boost::shared_lock<boost::shared_mutex> lock(g_p_local->rw_mutex_); <br />
* writing<br />
boost::lock_guard<boost::shared_mutex> uniqueLock(g_p_local->rw_mutex_);<br />
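<br />
A minimal self-contained sketch of both flavors against one mutex (AppUser reduced to just its rw_mutex_):<br />
<pre><br />
#include <boost/thread/locks.hpp><br />
#include <boost/thread/shared_mutex.hpp><br />
<br />
struct AppUser { boost::shared_mutex rw_mutex_; };<br />
<br />
void readUser(AppUser& user) {<br />
    // Shared: many readers at once; writers are blocked.<br />
    boost::shared_lock<boost::shared_mutex> lock(user.rw_mutex_);<br />
    // ... read containers ...<br />
}<br />
<br />
void writeUser(AppUser& user) {<br />
    // Exclusive: waits for all readers and writers to release.<br />
    boost::lock_guard<boost::shared_mutex> uniqueLock(user.rw_mutex_);<br />
    // ... mutate containers ...<br />
}<br />
</pre><br />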
<br />
=== DAILY MAINTENANCE ===<br />
* Data is segmented into files, one per day<br />
* to determine end-of-day, timestamps are checked as they come in with quotes (we had no better way to tell)<br />
* At end of day, perform maintenance<br />
** perform maintenance only once, checking for an existing filename with the date of "today" (sketch after the pseudo below)<br />
** purge nearly all quotes and bracket events, while ensuring the new database remains self-contained<br />
*** preserve last-available quote for all stocks<br />
*** create a fresh new starting snapshot of all accounts using the preserved quotes<br />
** postpone next quote retrieval until market is ready to open again<br />
Pseudo:<br />
EtradeInterface::processQuotes()<br />
if (patc_->bTimestampIsOutsideMarketHours(pt))<br />
patc_->checkCurrentTimeForOutsideMarketHours()<br />
---<br />
checkCurrentTimeForOutsideMarketHours<br />
// Do not rely on quote timestamps.<br />
ptime ptnow = second_clock::local_time();<br />
<br />
if (bMarketOpenOrAboutTo(ptnow))<br />
return false;<br />
<br />
// If we are just now switching to outside-hours, immediately take action.<br />
set_quotes_timer_to_next_market_open(); // This will cause the quotes timer to reset to a loooong pause.<br />
if (bTimestampIsAfterHours(ptnow)) // must be AFTER not before<br />
if (g_p_local->performAfterHoursMaintenance(ptnow))<br />
// Always start each new day with a pre-market account snapshot, <br />
pa->addSnapshotToMemory(snaptime);<br />
runAnalysis();<br />
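<br />
A hedged sketch of the run-once check, assuming one database file per day named by date (the real path and naming scheme may differ):<br />
<pre><br />
#include <ctime><br />
#include <filesystem><br />
#include <string><br />
<br />
// True if today's archive file already exists, i.e. maintenance already ran.<br />
bool bMaintenanceAlreadyDone(const std::string& db_dir) {<br />
    std::time_t t = std::time(nullptr);<br />
    char date[16];<br />
    std::strftime(date, sizeof(date), "%Y-%m-%d", std::localtime(&t));<br />
    return std::filesystem::exists(db_dir + "/at_" + date + ".db");<br />
}<br />
</pre><br />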
<br />
=== PRODUCTION ASSETS ===<br />
<br />
Due to potentially large sizes, I moved all bitpost production live assets to the software raid. Extra log backups are in logs folder. Extra db backups are in db_archive folder.<br />
<br />
at_server_live.log -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/logs/at_server_live.log<br />
db_analysis -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/db_analysis<br />
db_archive -> /spiceflow/softraid/development/thedigitalage/AbetterTrader/ProductionAssets/db_archive<br />
<br />
=== ANALYSIS ===<br />
<br />
==== AnalysisData Bracketing Overview ====<br />
Bracket vars:<br />
"ph1_initial_drop_high": 0.9986,<br />
"ph1_initial_drop_low": 0.9571,<br />
"ph1_initial_drop_steps": 4,<br />
"ph2_initial_rise_high": 1.0429,<br />
"ph2_initial_rise_low": 1.0014,<br />
"ph2_initial_rise_steps": 4,<br />
"ph3_loss_drop_high": 0.85,<br />
"ph3_loss_drop_low": 0.9929,<br />
"ph3_loss_drop_steps": 10,<br />
"ph3_profit_rise_high": 1.12,<br />
"ph3_profit_rise_low": 1.0071,<br />
"ph3_profit_rise_steps": 10,<br />
"ph4_profit_harvest_high": 0.9571,<br />
"ph4_profit_harvest_low": 0.9986,<br />
"ph4_profit_harvest_steps": 4,<br />
<br />
APS monte carlo steps are taken from that.<br />
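<br />
Each {low, high, steps} triple expands into an evenly spaced grid of candidate values for the sweep; a sketch (the function name is hypothetical):<br />
<pre><br />
#include <vector><br />
<br />
std::vector<double> expandSteps(double d_low, double d_high, int n_steps) {<br />
    if (n_steps < 2) return { d_low };<br />
    std::vector<double> values;<br />
    const double d_step = (d_high - d_low) / (n_steps - 1);<br />
    for (int n = 0; n < n_steps; ++n)<br />
        values.push_back(d_low + n * d_step);<br />
    return values;<br />
}<br />
</pre><br />
E.g. ph1_initial_drop (low 0.9571, high 0.9986, 4 steps) yields 0.9571, 0.9709, 0.9848, 0.9986 (rounded).<br />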
<br />
===== steps =====<br />
There is a limit to the total number of steps that can be processed overnight; the step counts above fit within it. We should work on increasing the step count where possible, by optimizing the code and improving hardware.<br />
<br />
===== bracket constraints =====<br />
`high` and `low` vars are given, then adjusted using a standard deviation, but within limits to keep meaningless tiny trades from happening (sketch below).<br />
<br />
Example values: the same bracket vars listed above.<br />
<br />
==== TRENDLINE (REFACTOR SIX) ====<br />
<br />
Keep focus on the final objective:<br />
* I want to SEE A STOCK'S CYCLE, CLEARLY, so i can harvest it<br />
* the cycle has to be around a base trendline<br />
<br />
We will use recent non-reduced data.<br />
<br />
We need a dynamic display, to clue us in to the big picture.<br />
<br />
More to come.<br />
<br />
==== ANALYZE PSEUDO (REFACTOR FOUR) ====<br />
<br />
* we run autoanalysis for every known symbol, with minimal interaction from user - they just decide to use it or not<br />
* we do a standard deviation on the data, and a monte-carlo-like loop through ranges of APS values based on sd multipliers<br />
* we are working toward a distributed microservice approach with many analysis engines across LAN<br />
<br />
Function overview:<br />
bool AtController::load_startup_data()<br />
if (b_analyze_on_startup_)<br />
getAnalyzerController().requeue_analyses();<br />
bool AtController::checkCurrentTimeForOutsideMarketHours()<br />
if (getModel().bMarketOpenOrAboutTo(ptnow))<br />
return false;<br />
if (!b_after_hours_)<br />
b_after_hours_= true;<br />
// Attempt maintenance and analysis, but only if we are AFTER hours (not before).<br />
if (getModel().bTimestampIsAfterHours(ptnow))<br />
getModel().performAfterHoursMaintenance(ptnow);<br />
<br />
void AnalyzerController::requeue_analyses()<br />
stop_and_clear_jobs();<br />
fill_all_job_slots();<br />
<br />
=== REPORTING JSON ===<br />
<br />
PATTERN<br />
-------<br />
bool handler()<br />
mm().readXxxJson(json) (done in derived model)<br />
for r<br />
buildAccountCyclesPerformanceJSONRow( (done in MM)<br />
row["snapshot_action"].as<int64_t>()<br />
) <br />
archive().readXxxJson(json) (done in derived model)<br />
for row<br />
buildAccountCyclesPerformanceJSONRow( (done in MM)<br />
query.getColumn(0).getInt64(),<br />
<br />
FOLLOW OUR PATTERN WITH all handlers:<br />
* PostAccountPerformanceCycles<br />
<br />
AtHttpServer::PostAccountPerformanceCycles() <br />
readAccountCyclesPerformanceJSON (derived models)<br />
buildAccountCyclesPerformanceJSONRow (mm)<br />
<br />
* GetAccountPerformance<br />
* PostAccountPerformance<br />
NOTE that these ones actually completely harvest the data first,<br />
due to reporting requirements (JSON can't be built directly from db rows)<br />
<br />
- PostAccountActivity<br />
- PostAccountActivityTable<br />
<br />
=== QAB charts ===<br />
<br />
CHART TIMEFRAME DESIGN<br />
<br />
use cases:<br />
user wants to see performance across a variety of time frames <- PERFORMANCE PAGE only!<br />
user wants to see historical brackets for older days <- PERFORMANCE PAGE only! lower priority!<br />
user wants to perform immediate actions on realtime chart<br />
user wants to do autoanalysis across a range and then manually tweak it<br />
<br />
requirements<br />
round 1: we can satisfy everything with TODAY ONLY (show today archive if after hours)<br />
round 2: add a separate per-day performance chart<br />
round 3: add a date picker to the chart to let the user select an older day to show<br />
<br />
node reduction<br />
data DISPLAY only needs to show action points and highs/lows<br />
aggressively node-reduce to fit the requested screen size!<br />
given: number of pixels of width<br />
provide: all bracket quotes plus the lowest+highest quotes in each 2-pixel range (minimum - adjustable to more aggressive clipping if desired; sketch below)<br />
internal data ANALYSIS should use all points<br />
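<br />
A sketch of that display-only reduction (hypothetical types and function; not the real readRunLive() code):<br />
<pre><br />
#include <algorithm><br />
#include <cstdint><br />
#include <vector><br />
<br />
struct Node { int64_t t_ms; double d_quote; bool b_bracket_event; };<br />
<br />
// Keep every bracket-event node, plus the lowest and highest quote<br />
// in each ~2-pixel bucket.<br />
std::vector<Node> reduceForDisplay(const std::vector<Node>& in, int pixel_limit) {<br />
    const size_t n_buckets = (size_t)std::max(1, pixel_limit / 2);<br />
    if (in.size() <= n_buckets) return in;    // already small enough<br />
    const size_t bucket = in.size() / n_buckets;<br />
    std::vector<Node> out;<br />
    for (size_t i = 0; i < in.size(); i += bucket) {<br />
        const size_t end = std::min(i + bucket, in.size());<br />
        size_t lo = i, hi = i;<br />
        for (size_t j = i; j < end; ++j) {<br />
            if (in[j].b_bracket_event) out.push_back(in[j]);    // always keep<br />
            if (in[j].d_quote < in[lo].d_quote) lo = j;<br />
            if (in[j].d_quote > in[hi].d_quote) hi = j;<br />
        }<br />
        out.push_back(in[lo]);<br />
        if (hi != lo) out.push_back(in[hi]);<br />
    }<br />
    // Re-sort by t_ms if strict time order matters downstream.<br />
    return out;<br />
}<br />
</pre><br />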
<br />
CHART DATA RETRIEVAL<br />
<pre><br />
function addStock(cycle) {<br />
restAndApply('GET','runs/'+cycle.run+'/live.json?pixel_limit='+$(window).width()*2...<br />
---<br />
void AtHttpsServer::GetRunLive(API_call& call)<br />
g_p_local->readRunLive(p_user->db_id_, account_id, run_id, symbol, pixel_limit, atc_.bAfterHours(), rh);<br />
atc_.thread_buildRunJSON(rh,*sr.paps_,run_id);<br />
<br />
var analysisChange = function(event) {<br />
$.getJSON('runs/'+run+'/analysis.json?pixel_limit='+$(window).width()*2+'&aggressiveness='+event.value, function( data ) {<br />
---<br />
AtHttpsServer::GetRunAnalysis(API_call& call)<br />
atc_.thread_handleAnalysisRequest(<br />
---<br />
AtController::thread_handleAnalysisRequest(BrokerAccount& ba,int64_t run_id,bool b_autoanalyze,double d_aggressiveness,int32_t pixel_limit,string& http_reply)<br />
g_p_local->readRunHistory()<br />
thread_analyzeHistory()<br />
thread_buildRunJSON(rh,apsA,apsA.run_id_);<br />
<br />
-- NOT CURRENTLY CALLED --<br />
function displayHistory(run)<br />
$.getJSON('runs/'+run+'/history.json?pixel_limit='+$(window).width()*2+'&days=3', function( data ) {<br />
---<br />
AtHttpsServer::GetRunHistory(API_call& call)<br />
g_p_local->readRunHistory(p_user->db_id_,account_id,run_id,symbol,days,sr.paps_->n_analysis_quotes_per_day_reqd_,pixel_limit,rh);<br />
atc_.thread_buildRunJSON(rh,*sr.paps_,run_id);<br />
</pre><br />
<br />
=== DEBUG LIVE ===<br />
NOTE that you WILL lose stock quote data during the debugging time, until we set up a second PROD environment.<br />
* WRITE INITIAL DEBUG CODE in any DEV environment<br />
* UPDATE DEBUGGER to run with [live] parameter instead of debug ones<br />
* COPY DATABASE directly from PROD environment to DEV environment: atimport prod<br />
* STOP PROD environment at_server<br />
* DEBUG. quickly. ;-)<br />
* PUSH any code fix (without debug code) back to PROD env<br />
* RESTART PROD and see if the fix worked<br />
* REVERT DEV environment: clean any debug code, redo an atimport and reset the debugger parameters<br />
<br />
=== HTML SHARED HEADER/FOOTER ===<br />
<pre><br />
-------------------------------------------------------------------------------<br />
THREE PARTS THAT MUST BE IN EVERY HTML FILE:<br />
-------------------------------------------------------------------------------<br />
<br />
1) all code above <container>, including these replaceables:<br />
a) logout: <button type='button' id='logout' class='btn btn-margined btn-xs btn-primary pull-right'>Log out</button><br />
b) breadcrumbs: <!--bread--><li><a href="/1">1</a></li><li class="active">2</li><!--crumbs--><br />
2) logout button handler<br />
$( document ).ready(function() {<br />
3) footer and [Bootstrap core Javascript]<br />
<br />
what a maintenance nightmare - but it seems best to do all 10-12 files manually<br />
-------------------------------------------------------------------------------<br />
</pre><br />
<br />
=== HAPROXY and LOAD BALANCING ===<br />
<br />
For the first 1000 paid users, we will NOT do load balancing.<br />
* Use haproxy Layer 7 (http) load balancing to redirect abettertrader.com requests to a bitpost.com custom port.<br />
<br />
For load balancing, there are two database design choices:<br />
* Each server gets its own quotes and saves all its own data<br />
** Need to read user id from each request and send each user to a predetermined server (sketch below)<br />
** Need multiple Etrade accounts, one for each server, unless we get a deal with Etrade<br />
* Switch to a distributed database with master-master replication<br />
** A lot of work<br />
** Might kill sub-second performance? Might not. We already have delayed-write.<br />
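<br />
A sketch of the per-user routing for the first option (pure illustration; haproxy can route on a request attribute to the same effect):<br />
<pre><br />
#include <cstdint><br />
#include <string><br />
#include <vector><br />
<br />
// Deterministic user -> server assignment, so each user's data<br />
// always lives on exactly one server.<br />
std::string serverForUser(int64_t user_id, const std::vector<std::string>& servers) {<br />
    return servers[(uint64_t)user_id % servers.size()];<br />
}<br />
</pre><br />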
<br />
<br />
=== TIMESTAMP STANDARDIZATION ===<br />
<br />
Standardize internal times as int64_t milliseconds since 1970, in UTC. That's not ideal, as it doesn't deal with leap seconds, but it makes our time-handling code much faster, so it's worth the tradeoff.<br />
<br />
Display times in local time.<br />
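<br />
A minimal sketch of the convention (storage in UTC ms, conversion to local time only at display):<br />
<pre><br />
#include <chrono><br />
#include <cstdint><br />
#include <ctime><br />
#include <string><br />
<br />
int64_t nowUtcMs() {<br />
    using namespace std::chrono;<br />
    return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();<br />
}<br />
<br />
std::string displayLocal(int64_t utc_ms) {<br />
    std::time_t t = (std::time_t)(utc_ms / 1000);<br />
    char buf[32];<br />
    std::strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", std::localtime(&t));<br />
    return buf;<br />
}<br />
</pre><br />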
<br />
{| class="wikitable"<br />
|+Timestamp TODO<br />
|-<br />
|Chart URL<br />
|The time should default to 9:30am EST which should display in URL as something like: 2019-09-13T13:30:00.000Z<br />
|-<br />
|Performance page<br />
|?<br />
|-<br />
|Activity page<br />
|?<br />
|}<br />
<br />
=== OLDER NOTES ===<br />
<br />
==== WALKING DATABASE FILES ====<br />
<br />
(we are moving to postgres for archiving!)<br />
<br />
There are two types of historical requests:<br />
* a specific date range, usually requested by user; use this:<br />
getDatabaseNames(startdate,enddate)<br />
<br />
* a specific number of days, usually requested by analysis; loop with this (sketch below):<br />
getPreviousDatabaseName(dt,db_name)<br />
<br />
* there is also this, which skips non-market days, but we want them for now when walking db files:<br />
get_previous_market_day(pt)<br />
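<br />
A sketch of the analysis-style walk using the call above (assuming it rewinds dt, fills db_name, and returns false when no older file exists - NOT the real code):<br />
<pre><br />
int days_wanted = 3;<br />
ptime dt = second_clock::local_time();<br />
std::string db_name;<br />
for (int n = 0; n < days_wanted; ++n) {<br />
    if (!getPreviousDatabaseName(dt, db_name))<br />
        break;<br />
    // ... open db_name and read its quotes/brackets ...<br />
}<br />
</pre><br />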
<br />
Move to mongo soon! :-)<br />
<br />
==== API PSEUDO (no longer of much use) ==== <br />
<br />
APIGetRunLive::handle_call()<br />
g_p_local->getRunLiveJSON()<br />
readRunQAB(s_str_db_name...)<br />
(SAME as readRunLive!!)<br />
<br />
APIGetRunHistory::handle_call()<br />
g_p_local->getRunHistoryJSON()<br />
readRunHistory<br />
readRunQAB<br />
<br />
==== Qt Creator settings (moved on > CLion > vs code) ==== <br />
* Make sure you have already run [atbuild] and [atbuild debug].<br />
* Open CMakeLists.txt as a Qt Creator project.<br />
* It will force you to do CMake - pick cmake-release folder and let it go.<br />
* Rename the build config to debug.<br />
* Clone it to release and change folder to release.<br />
* Delete make step and replace it with custom build:<br />
./build.sh<br />
(no args)<br />
%{buildDir}<br />
* Create run setups:<br />
you have to use hardcoded path to BASE working dir (or leave it blank maybe?): <br />
<br />
/home/m/development/thedigitalage/AbetterTrader/server<br />
<br />
[x] run in terminal<br />
I recommend using TEST args for both debug and release: localhost 8000 test reanalyze (matches attest)<br />
LIVE args may occasionally be needed for use with [atimport prod]: localhost 8080 live (matches atlive)<br />
<br />
==== MONTHLY MANUAL MAINTENANCE ====<br />
<br />
(This is now available via the admin Summary page.)<br />
<br />
Automate as much as possible, but this is not that bad and safer to do manually when we know the time is right:<br />
monthly db maintenance<br />
<br />
just monthly:<br />
update PrefStr set value = "JANUARY 2018 LEADERBOARD" where name="LeaderboardTitle";<br />
update PrefStr set value = "Leader at 1/31 closing bell wins<br />December winner: cfjaques Go Cara!" where name="LeaderboardDescription";<br />
update Accounts set leaderboard_initial_value = total_managed_value;<br />
update AnalysisData set avg_pct_gain_mtd = 0, sells_count_mtd = 0;<br />
update StockRuns set avg_pct_gain_mtd = 0, sells_count_mtd = 0;<br />
<br />
AND ANNUAL! HAPPY NEW YEAR 2018!<br />
<br />
update PrefStr set value = "JANUARY 2018 LEADERBOARD" where name="LeaderboardTitle";<br />
update PrefStr set value = "Leader at 1/31 closing bell wins<br />December winner: cfjaques Go Cara!" where name="LeaderboardDescription";<br />
update Accounts set leaderboard_initial_value = total_managed_value, year_initial_value = total_managed_value;<br />
update AnalysisData set avg_pct_gain_ytd = 0, sells_count_ytd = 0, avg_pct_gain_mtd = 0, sells_count_mtd = 0;<br />
update StockRuns set avg_pct_gain_ytd = 0, sells_count_ytd = 0, avg_pct_gain_mtd = 0, sells_count_mtd = 0;<br />
<br />
if you need to hard-reset all the simulation accounts, do this and restart to fix all CASH:<br />
update Accounts set initial_managed_value = 100000, total_managed_value = 100000, net_account_value = 100000 where broker_id=2;<br />
<br />
== Developer environment setup ==<br />
<br />
The setup script is [at setup [nopostgres]]. It can be used for initial setup, and rerun to upgrade components like boost, SWS, SWSS, and postgres.<br />
<br />
=== First time install ===<br />
* First, set up YET ANOTHER GODDAMN BOX:<br />
setup_linux.sh [desktop | nodesktop]<br />
<br />
* Clone the good stuff<br />
mh c Get all the goodness<br />
<br />
=== Dependencies ===<br />
<br />
==== Choose local postgres server or remote ====<br />
You can choose to install a full postgres server (usually desired on a laptop):<br />
at setup<br />
Or just install postgres client, and point your dev installation at another postgres server installation (typically use positronic if you're on the LAN):<br />
at se nopostgres<br />
<br />
==== To upgrade boost ====<br />
* update the [[Development reference | boost]] version in .bashrc<br />
* [cdl] and remove existing boost install<br />
* rerun [at se] or [at se nopostgres] as appropriate<br />
<br />
==== libpqxx ====<br />
We use our own fork of pqxx, github:moodboom<br />
<br />
I keep the fork updated on cast.<br />
We keep a repo of it in development/Libraries/c++/libpqxx/source/libpqxx<br />
We keep a repo of the parent that we forked from, here:<br />
development/Libraries/c++/libpqxx/source/libpqxx-jvt-parent<br />
It is a straight git-clone of the jvt repo.<br />
To rebase on top of the latest parent release:<br />
cd libpqxx-jvt-parent<br />
git pull<br />
git reset --hard tags/7.7.0 # or whatever latest release is<br />
cd ../libpqxx<br />
# this should already be done:<br />
# git remote add jvt-parent ../libpqxx-jvt-parent<br />
# git checkout -b jvt-parent<br />
# git fetch --all<br />
# git branch --set-upstream-to=jvt-parent/master jvt-parent<br />
git checkout jvt-parent && git pull<br />
git checkout master && git rebase jvt-parent<br />
# fix up the merge and commit and push -f!<br />
To force-push all the way back to github:<br />
🤔 m@morosoph [~/development/Libraries/c++/libpqxx.git] git push --set-upstream origin master -f<br />
<br />
==== Simple-Web-Server ====<br />
I keep my own fork on gitlab.<br />
I pull parent fork changes in on cast.<br />
<br />
To get a new release going:<br />
cd development/Libraries/c++/Simple-Web-Server<br />
git branch<br />
eidheim-parent<br />
master<br />
release/abt-0.0.3<br />
* release/abt-0.0.4<br />
git checkout -b release/abt-0.0.5<br />
# make sure development/Libraries/c++/Simple-Web-Server-eidheim has most recent commits pulled <br />
git checkout eidheim-parent<br />
git pull<br />
git checkout release/abt-0.0.5<br />
git rebase eidheim-parent<br />
git push --all<br />
<br />
Something like that, anyway :P<br />
<br />
==== Simple-WebSocket-Server ====<br />
Similar to SWS.<br />
<br />
=== Clone prod database ===<br />
You can easily pull a sanitized copy of prod down for local usage. It will use the dev account instead of prod. No reason not to do this OFTEN!<br />
<br />
Also, you probably don't need quotes! Those are HUGE. It's fast if you skip them.<br />
<br />
ssh positronic<br />
mh-add-postgres-db at_whatevs<br />
at dump noquotes<br />
at clone positronic-at_live-noquotes-2022-05-15-162423 at_whatevs<br />
# set up a launch.json block to use it - probably with "offline" too<br />
<br />
== [[Trading]] ==</div>Mhttps://bitpost.com/w/index.php?title=Software_Archive&diff=7457Software Archive2024-03-10T21:51:48Z<p>M: </p>
<hr />
<div>[[rtorrent]] - [[Automated torrent management in linux]]<br />
<br />
[[Phabricator]] - [[OpenElec]]<br />
<br />
[[Discourse]]<br />
<br />
[[Mconf]] - [[Hangouts]]<br />
<br />
[[CLion]] - [[IDEA]] - [[Eclipse]] - [[Atom]]<br />
<br />
[[Stadia]]<br />
<br />
[[OpenShift]] - [[CloudWatch]]<br />
<br />
[[vmware]] - [[virtualbox]] - [[x2go]]<br />
<br />
[[OpenWRT]]<br />
<br />
[[WoeUSB]]<br />
<br />
<!-- <br />
<br />
<br />
===========================================================================================================================================================================================================================================================================================<br />
<br />
<br />
--><br />
<br />
=== Old ===<br />
<br />
[[Clonezilla]] - [[Synaptic]]<br />
<br />
[[Robomongo]]<br />
<br />
[[CodeLite]] - [[Brackets]] - [[Sublime]] - [[Scite]] - <br />
<br />
[[Hipchat]] - [[TeamSpeak]] - [[Cisco Spark]] - [[Blink]]</div>Mhttps://bitpost.com/w/index.php?title=Software_reference&diff=7456Software reference2024-03-10T21:51:26Z<p>M: </p>
<hr />
<div><br />
== APPS ==<br />
<br />
A/V: [[Kodi]] - [[VLC]] - [[Blender]] - [[Gimp]] - [[Shotwell]] - [[Davinci Resolve]]<br />
<br />
Music: [[FL Studio]] - [[Reaper]] - [[Audacity]] - [[Ampache]] - [[Spotify]] - [[Strawberry]]<br />
<br />
Games: [[Steam]] - [[Minecraft]] - [[Twitch]]<br />
<br />
<br />
== TOOLS ==<br />
<br />
[[Mediawiki]] - [[Wordpress]]<br />
<br />
[[LibreOffice]] - [[qBitTorrent]] - [[Cura]]<br />
<br />
[[Visual Studio Code|vscode]] - [[Qt Creator]] - [[Emacs]] - [[GitLab]]<br />
<br />
[[irc]] - [[slack]]- [[pidgin]] - [[XMPP]] - [[Rocket.Chat]] - [[zoom]]<br />
<br />
[[i3]] - [[UnixPorn]] - [[terminal]] - [[kitty]] - [[screen]] - [[albert]]<br />
<br />
[[maim]] - [[copyq]]<br />
<br />
[[mame]] - [[Simon]] - [[Kaldi]] - [http://www.question2answer.org/ Q2A]<br />
<br />
[[Chrome]] - [[Firefox]] - [[Brave]] - [[Vivaldi]] - [[Tor]] - [[Okular]]<br />
<br />
[[DBeaver]] - [[pgadmin4]] - [[Studio 3T]] - [[Sqlite Explorer]]<br />
<br />
[[postgres]] - [[sqlite]] - [[mongodb]] - [[mysql]] - [[SQL Server]]<br />
<br />
[[ninja]] - [[gcc]] - [[git]] - [[eslint]]<br />
<br />
[[TrueNAS]] - [[Linux software raid]] - [[Wireshark]] - [[Apache]]<br />
<br />
[[ssh]] - [[gpg]] - [[haproxy]] - [[dnsmasq]] - [[geth]]<br />
<br />
[[proxmox]] - [[SPICE]] - [[Docker]] - [[OpenVPN]] - [[vnc]] - [[Remote Desktop]]<br />
<br />
[[GCP]] - [[AWS]]<br />
<br />
[[systemd]] - [[xrandr]] - [[samba]] - [[fail2ban]] - [[ntp]]<br />
<br />
'''[[Software Under Review]]'''<br />
<br />
'''[[Software Archive]]'''<br />
<br />
<!-- <br />
<br />
<br />
===========================================================================================================================================================================================================================================================================================<br />
<br />
<br />
--><br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! OS installation<br />
|-<br />
| <br />
{| class="wikitable"<br />
! [[Ubuntu 22.04 upgrade]]<br />
|}<br />
{| class="wikitable"<br />
! [[Linux barebones quickstart]]<br />
|}<br />
{| class="wikitable"<br />
! [[Ubuntu quickstart]]<br />
|}<br />
{| class="wikitable"<br />
! [[Ventoy|Ventoy ISO boot disk]]<br />
|}<br />
{| class="wikitable"<br />
! [[Raspberry Pi]]<br />
|}<br />
{| class="wikitable"<br />
! [[Kali quickstart]]<br />
|}<br />
{| class="wikitable"<br />
! [[Centos quickstart]]<br />
|}<br />
{| class="wikitable"<br />
! [[Cygwin quickstart]]<br />
|}<br />
{| class="wikitable"<br />
! [[OS X]]<br />
|}<br />
{| class="wikitable"<br />
! [[Update gentoo kernel]]<br />
|}<br />
{| class="wikitable"<br />
! [[Upgrade gentoo]]<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Windows 10 quickstart<br />
|-<br />
| <br />
* [[Cygwin quickstart|Install cygwin]]<br />
* Disable automatic restarts<br />
Install Group Policy Editor from an admin Powershell console:<br />
@echo off <br />
pushd "%~dp0" <br />
<br />
dir /b %SystemRoot%\servicing\Packages\Microsoft-Windows-GroupPolicy-ClientExtensions-Package~3*.mum >List.txt <br />
dir /b %SystemRoot%\servicing\Packages\Microsoft-Windows-GroupPolicy-ClientTools-Package~3*.mum >>List.txt <br />
<br />
for /f %%i in ('findstr /i . List.txt 2^>nul') do dism /online /norestart /add-package:"%SystemRoot%\servicing\Packages\%%i" <br />
pause<br />
Run Group Policy Editor to disable restarts:<br />
Computer Configuration > Administrative Templates > Windows Components > Windows Update > Configure Automatic Updates<br />
(o) Enabled<br />
[2] Notify for download and auto install? Or [3] Auto download and notify for install? Going with [3], we'll see.<br />
(or...) (o) Enabled: No auto-restart with logged on users for scheduled automatic updates installations<br />
---<br />
No auto-restart with logged on users for scheduled automatic updates installation (just in case)<br />
(o) Enabled<br />
---<br />
(reboot if you had to change it? or will that wipe it out? tbd...) <br />
In a corporate environment, you should quit your job - I mean, you will likely have to redo this after ANY f'in reboot.<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Memtest boot disk<br />
|-<br />
| It should be on red-on-black flash drive. Or, [https://www.memtest86.com/download.htm get a fresh download] of USB zip, it includes a Windows exe to create the boot. Or use the ISO.<br />
|}<br />
{| class="wikitable"<br />
! [[Ubuntu upgrade / reinstall notes]]<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create and boot from Ubuntu USB<br />
|-<br />
| There should always be a boot USB for this in my set, but it needs recreation on new Ubuntu versions...<br />
# Download the latest 64-bit Ubuntu desktop iso<br />
# Format a USB drive as FAT (NOT exFAT or NTFS)<br />
# Burn the iso to the USB, providing a GB of space (we want to add the nvidia driver once booted)<br />
sudo usb-creator-gtk<br />
# Boot with it<br />
# On startup, select the USB EFI boot option in refind, select "Try Ubuntu", (on MBPro, hit e and add [ nouveau.noaccel=1] to grub line), hit F10 to start<br />
# Once it is running, start System Settings, select Software, enable proprietary drivers<br />
# Install, checking the [download as you go] and [install 3rd party stuff] boxes.<br />
|}<br />
|}<br />
<!-- <br />
<br />
<br />
===========================================================================================================================================================================================================================================================================================<br />
<br />
<br />
--><br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Ubuntu set up networking<br />
|-<br />
| Install NetworkManager, as the wpagui UI sucks<br />
* sudo apt-get install network-manager-gnome<br />
* YOU MUST remove interfaces from /etc/network/interfaces so wpa gives them up to nm-applet<br />
* add nm-applet to startup if needed - i don't think it is needed as it seems to start up automatically now - try rebooting first<br />
|}</div>Mhttps://bitpost.com/w/index.php?title=Software_reference&diff=7455Software reference2024-03-07T16:40:29Z<p>M: </p>
<hr />
<div><br />
== APPS ==<br />
<br />
A/V: [[Kodi]] - [[VLC]] - [[Blender]] - [[Gimp]] - [[Shotwell]] - [[Davinci Resolve]]<br />
<br />
Music: [[FL Studio]] - [[Reaper]] - [[Audacity]] - [[Ampache]] - [[Spotify]] - [[Strawberry]]<br />
<br />
Games: [[Steam]] - [[Stadia]] - [[Minecraft]] - [[Twitch]]<br />
<br />
<br />
== TOOLS ==<br />
<br />
[[Mediawiki]] - [[Wordpress]]<br />
<br />
[[LibreOffice]] - [[qBitTorrent]] - [[Cura]]<br />
<br />
[[Visual Studio Code|vscode]] - [[Qt Creator]] - [[Emacs]] - [[GitLab]]<br />
<br />
[[irc]] - [[slack]]- [[pidgin]] - [[XMPP]] - [[Rocket.Chat]] - [[zoom]]<br />
<br />
[[i3]] - [[UnixPorn]] - [[terminal]] - [[kitty]] - [[screen]] - [[albert]]<br />
<br />
[[maim]] - [[copyq]]<br />
<br />
[[mame]] - [[Simon]] - [[Kaldi]] - [http://www.question2answer.org/ Q2A]<br />
<br />
[[Chrome]] - [[Firefox]] - [[Brave]] - [[Vivaldi]] - [[Tor]] - [[Okular]]<br />
<br />
[[DBeaver]] - [[pgadmin4]] - [[Studio 3T]] - [[Sqlite Explorer]]<br />
<br />
[[postgres]] - [[sqlite]] - [[mongodb]] - [[mysql]] - [[SQL Server]]<br />
<br />
[[ninja]] - [[gcc]] - [[git]] - [[eslint]]<br />
<br />
[[TrueNAS]] - [[Linux software raid]] - [[Wireshark]] - [[Apache]]<br />
<br />
[[ssh]] - [[gpg]] - [[haproxy]] - [[dnsmasq]] - [[geth]]<br />
<br />
[[proxmox]] - [[SPICE]] - [[Docker]] - [[OpenVPN]] - [[vnc]] - [[Remote Desktop]]<br />
<br />
[[GCP]] - [[AWS]]<br />
<br />
[[systemd]] - [[xrandr]] - [[samba]] - [[fail2ban]] - [[ntp]]<br />
<br />
'''[[Software Under Review]]'''<br />
<br />
'''[[Software Archive]]'''<br />
<br />
<!-- <br />
<br />
<br />
===========================================================================================================================================================================================================================================================================================<br />
<br />
<br />
--><br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! OS installation<br />
|-<br />
| <br />
{| class="wikitable"<br />
! [[Ubuntu 22.04 upgrade]]<br />
|}<br />
{| class="wikitable"<br />
! [[Linux barebones quickstart]]<br />
|}<br />
{| class="wikitable"<br />
! [[Ubuntu quickstart]]<br />
|}<br />
{| class="wikitable"<br />
! [[Ventoy|Ventoy ISO boot disk]]<br />
|}<br />
{| class="wikitable"<br />
! [[Raspberry Pi]]<br />
|}<br />
{| class="wikitable"<br />
! [[Kali quickstart]]<br />
|}<br />
{| class="wikitable"<br />
! [[Centos quickstart]]<br />
|}<br />
{| class="wikitable"<br />
! [[Cygwin quickstart]]<br />
|}<br />
{| class="wikitable"<br />
! [[OS X]]<br />
|}<br />
{| class="wikitable"<br />
! [[Update gentoo kernel]]<br />
|}<br />
{| class="wikitable"<br />
! [[Upgrade gentoo]]<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Windows 10 quickstart<br />
|-<br />
| <br />
* [[Cygwin quickstart|Install cygwin]]<br />
* Disable automatic restarts<br />
Install Group Policy Editor from an admin Powershell console:<br />
@echo off <br />
pushd "%~dp0" <br />
<br />
dir /b %SystemRoot%\servicing\Packages\Microsoft-Windows-GroupPolicy-ClientExtensions-Package~3*.mum >List.txt <br />
dir /b %SystemRoot%\servicing\Packages\Microsoft-Windows-GroupPolicy-ClientTools-Package~3*.mum >>List.txt <br />
<br />
for /f %%i in ('findstr /i . List.txt 2^>nul') do dism /online /norestart /add-package:"%SystemRoot%\servicing\Packages\%%i" <br />
pause<br />
Run Group Policy Editor to disable restarts:<br />
Computer Configuration > Administrative Templates > Windows Components > Windows Update > Configure Automatic Updates<br />
(o) Enabled<br />
[2] Notify for download and auto install? Or [3] Auto download and notify for install? Going with [3], we'll see.<br />
(or...) (o) Enabled: No auto-restart with logged on users for scheduled automatic updates installations<br />
---<br />
No auto-restart with logged on users for scheduled automatic updates installation (just in case)<br />
(o) Enabled<br />
---<br />
(reboot if you had to change it? or will that wipe it out? tbd...) <br />
In a corporate environment, you should quit your job - I mean, you will likely have to redo this after ANY f'in reboot.<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Memtest boot disk<br />
|-<br />
| It should be on red-on-black flash drive. Or, [https://www.memtest86.com/download.htm get a fresh download] of USB zip, it includes a Windows exe to create the boot. Or use the ISO.<br />
|}<br />
{| class="wikitable"<br />
! [[Ubuntu upgrade / reinstall notes]]<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create and boot from Ubuntu USB<br />
|-<br />
| There should always be a boot USB for this in my set, but it needs recreation on new Ubuntu versions...<br />
# Download the latest 64-bit Ubuntu desktop iso<br />
# Format a USB drive as FAT (NOT exFAT or NTFS)<br />
# Burn the iso to the USB, providing a GB of space (we want to add the nvidia driver once booted)<br />
sudo usb-creator-gtk<br />
# Boot with it<br />
# On startup, select the USB EFI boot option in refind, select "Try Ubuntu", (on MBPro, hit e and add [ nouveau.noaccel=1] to grub line), hit F10 to start<br />
# Once it is running, start System Settings, select Software, enable proprietary drivers<br />
# Install, checking the [download as you go] and [install 3rd party stuff] boxes.<br />
|}<br />
|}<br />
<!-- <br />
<br />
<br />
===========================================================================================================================================================================================================================================================================================<br />
<br />
<br />
--><br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Ubuntu set up networking<br />
|-<br />
| Install NetworkManager, as the wpagui UI sucks<br />
* sudo apt-get install network-manager-gnome<br />
* YOU MUST remove interfaces from /etc/network/interfaces so wpa gives them up to nm-applet<br />
* add nm-applet to startup if needed - i don't think it is needed as it seems to start up automatically now - try rebooting first<br />
|}</div>Mhttps://bitpost.com/w/index.php?title=Brave&diff=7454Brave2024-02-20T15:57:22Z<p>M: /* Install */</p>
<hr />
<div>== Configure ==<br />
<br />
=== Settings ===<br />
Immediately change these Settings on any new profile:<br />
* Get started > Profile name and icon > customize it!<br />
* Get started > New Tab Page > shows > Homepage<br />
* Appearance > Toolbar > Show home button ON, set to bitpost.com<br />
* Appearance > Toolbar > Brave Rewards, Brave Wallet, Sidebar > OFF<br />
* Appearance > Toolbar > Show wide address bar, show full urls > ON<br />
* Appearance > Sidebar > Show: never<br />
* (NO) Appearance > Content > (very bottom) Cycle through the most recently used tabs with Ctrl-Tab<br />
* Sync > I have a chain > paste, toggle Sync everything ON<br />
<br />
TODO: consider playing with vertical tabs! It's a mindfuck tho...<br />
<br />
=== Profiles ===<br />
Profiles in Brave are AWESOME:<br />
* Brave seems to remember what desktop the windows were on!!! FINALLY<br />
* For each profile, you get a name and customizable icon in the top-right of the toolbar, SO NICE<br />
* Bookmarks etc. are all profile-specific, as expected<br />
* You can sync a profile across machines (see #Sync)<br />
<br />
==== Finding Profile folder ====<br />
You can start brave with a specific profile, like this:<br />
brave-browser --profile-directory='Profile 4'&<br />
This is a bit dicey as the folder you must provide does not actually match the profile name! I used this bit of ugliness to get a map of name to folder:<br />
<br />
💉 m@cast [~/.config/BraveSoftware/Brave-Browser] grep -Po "\"name\":\"[^\"]*?\",\"(pa|usi)" * -R<br />
Default/Preferences:"name":"moodboom","pa<br />
Profile 4/Preferences:"name":"es-1","pa<br />
Profile 5/Preferences:"name":"mdmdev-1","pa<br />
Profile 6/Preferences:"name":"Profile 2","pa<br />
Profile 7/Preferences:"name":"ig-1","usi<br />
<br />
==== Assign VSCode Launch to (re)use a Brave profile ====<br />
When vscode starts its Javascript debugger, it uses a custom "user-data-dir" within its own config folder. This means you start with a clean profile upon first use. However, (after MUCH trial-and-horror), you can specify that it uses a precise profile within this custom user-data-dir.<br />
<br />
If you use a launch.json stanza like this, vscode will always use that specific profile. You can then start customizing the profile, and your changes will stick across debugging sessions. Awesome!<br />
"configurations": [<br />
{<br />
"name": "Launch Brave",<br />
"runtimeExecutable": "/usr/bin/brave-browser",<br />
// NOTE: This directory will be within .code/Code/User; specifying it allows reuse of a specific Profile there, nice.<br />
"runtimeArgs": [ "--profile-directory=\"Profile 4\"" ],<br />
"type": "chrome",<br />
"request": "launch",<br />
"timeout": 5000,<br />
"url": "http://localhost:8008",<br />
"webRoot": "${workspaceFolder}/src"<br />
}<br />
]<br />
<br />
That should be all you need to get usage of a consistent profile across sessions. But... say you want to sync it across development environments... why not! Read on...<br />
<br />
===== More Details =====<br />
Again, we dig to get a map of profile names to folders. First, run the debugger, and look for the process and params:<br />
ps ax|grep -Po brave.*config/Code/User/workspaceStorage/.*?profile<br />
From that, you can extract a user-data-dir:<br />
--user-data-dir=/home/m/development/config/common/home/m/.config/Code/User/workspaceStorage/436f0a4c50203656e625d634df9e0ca9/ms-vscode.js-debug/.profile<br />
NOTE that it seems that each vscode project has its own workspaceStorage! Which is pretty crazy. That means every dev project can have its own set of profiles.<br />
<br />
From the user-data-dir folder, you can get a map of profile name to folder:<br />
cd #your-user-data-dir#<br />
grep -Po "\"name\":\"[^\"]*?\",\"(pa|usi)" * -R<br />
"Profile 4"/Preferences:"name":"es-2","usi<br />
<br />
Now that you know what is where.... Just like with "normal" profiles, you can sync them across machines.<br />
<br />
=== Set up sync ===<br />
Do this for each profile:<br />
* Settings > Sync > Start a new Sync Chain > Computer > copy Sync Chain Code > save in private<br />
* Make sure Sync everything is toggled ON<br />
* You can now paste this chain into another computer to sync this profile there<br />
<br />
== [https://brave.com/linux/ Install] ==<br />
Copy and paste this all in one shot!<br />
sudo curl -fsSLo /usr/share/keyrings/brave-browser-archive-keyring.gpg https://brave-browser-apt-release.s3.brave.com/brave-browser-archive-keyring.gpg<br />
echo "deb [signed-by=/usr/share/keyrings/brave-browser-archive-keyring.gpg] https://brave-browser-apt-release.s3.brave.com/ stable main"|sudo tee /etc/apt/sources.list.d/brave-browser-release.list<br />
sudo apt update<br />
sudo apt install brave-browser<br />
<br />
One annoying thing... the apt file searches all architectures and you'll get i386 errors, until you add "arch=amd64" like this:<br />
sudo emacs /etc/apt/sources.list.d/brave-browser-release.list<br />
<br />
deb [signed-by=/usr/share/keyrings/brave-browser-archive-keyring.gpg arch=amd64] https://brave-browser-apt-release.s3.brave.com/ stable main</div>Mhttps://bitpost.com/w/index.php?title=Brave&diff=7453Brave2024-02-20T15:57:02Z<p>M: /* Install */</p>
<hr />
<div>== Configure ==<br />
<br />
=== Settings ===<br />
Immediately change these Settings on any new profile:<br />
* Get started > Profile name and icon > customize it!<br />
* Get started > New Tab Page > shows > Homepage<br />
* Appearance > Toolbar > Show home button ON, set to bitpost.com<br />
* Appearance > Toolbar > Brave Rewards, Brave Wallet, Sidebar > OFF<br />
* Appearance > Toolbar > Show wide address bar, show full urls > ON<br />
* Appearance > Sidebar > Show: never<br />
* (NO) Appearance > Content > (very bottom) Cycle through the most recently used tabs with Ctrl-Tab<br />
* Sync > I have a chain > paste, toggle Sync everything ON<br />
<br />
TODO: consider playing with vertical tabs! It's a mindfuck tho...<br />
<br />
=== Profiles ===<br />
Profiles in Brave are AWESOME:<br />
* Brave seems to remember what desktop the windows were on!!! FINALLY<br />
* For each profile, you get a name and customizable icon in the top-right of the toolbar, SO NICE<br />
* Bookmarks etc. are all profile-specific, as expected<br />
* You can sync a profile across machines (see #Sync)<br />
<br />
==== Finding Profile folder ====<br />
You can start brave with a specific profile, like this:<br />
brave-browser --profile-directory='Profile 4'&<br />
This is a bit dicey as the folder you must provide does not actually match the profile name! I used this bit of ugliness to get a map of name to folder:<br />
<br />
💉 m@cast [~/.config/BraveSoftware/Brave-Browser] grep -Po "\"name\":\"[^\"]*?\",\"(pa|usi)" * -R<br />
Default/Preferences:"name":"moodboom","pa<br />
Profile 4/Preferences:"name":"es-1","pa<br />
Profile 5/Preferences:"name":"mdmdev-1","pa<br />
Profile 6/Preferences:"name":"Profile 2","pa<br />
Profile 7/Preferences:"name":"ig-1","usi<br />
<br />
==== Assign VSCode Launch to (re)use a Brave profile ====<br />
When vscode starts its Javascript debugger, it uses a custom "user-data-dir" within its own config folder. This means you start with a clean profile upon first use. However, (after MUCH trial-and-horror), you can specify that it uses a precise profile within this custom user-data-dir.<br />
<br />
If you use a launch.json stanza like this, vscode will always use that specific profile. You can then start customizing the profile, and your changes will stick across debugging sessions. Awesome!<br />
"configurations": [<br />
{<br />
"name": "Launch Brave",<br />
"runtimeExecutable": "/usr/bin/brave-browser",<br />
// NOTE: This directory will be within .code/Code/User; specifying it allows reuse of a specific Profile there, nice.<br />
"runtimeArgs": [ "--profile-directory=\"Profile 4\"" ],<br />
"type": "chrome",<br />
"request": "launch",<br />
"timeout": 5000,<br />
"url": "http://localhost:8008",<br />
"webRoot": "${workspaceFolder}/src"<br />
}<br />
]<br />
<br />
That should be all you need to get usage of a consistent profile across sessions. But... say you want to sync it across development environments... why not! Read on...<br />
<br />
===== More Details =====<br />
Again, we dig to get a map of profile names to folders. First, run the debugger, and look for the process and params:<br />
ps ax|grep -Po brave.*config/Code/User/workspaceStorage/.*?profile<br />
From that, you can extract a user-data-dir:<br />
--user-data-dir=/home/m/development/config/common/home/m/.config/Code/User/workspaceStorage/436f0a4c50203656e625d634df9e0ca9/ms-vscode.js-debug/.profile<br />
NOTE that it seems that each vscode project has its own workspaceStorage! Which is pretty crazy. That means every dev project can have its own set of profiles.<br />
<br />
From the user-data-dir folder, you can get a map of profile name to folder:<br />
cd #your-user-data-dir#<br />
grep -Po "\"name\":\"[^\"]*?\",\"(pa|usi)" * -R<br />
"Profile 4"/Preferences:"name":"es-2","usi<br />
<br />
Now that you know what is where.... Just like with "normal" profiles, you can sync them across machines.<br />
<br />
=== Set up sync ===<br />
Do this for each profile:<br />
* Settings > Sync > Start a new Sync Chain > Computer > copy Sync Chain Code > save in private<br />
* Make sure Sync everything is toggled ON<br />
* You can now paste this chain into another computer to sync this profile there<br />
<br />
== [https://brave.com/linux/ Install] ==<br />
Copy and paste this all in one shot!<br />
sudo curl -fsSLo /usr/share/keyrings/brave-browser-archive-keyring.gpg https://brave-browser-apt-release.s3.brave.com/brave-browser-archive-keyring.gpg<br />
echo "deb [signed-by=/usr/share/keyrings/brave-browser-archive-keyring.gpg] https://brave-browser-apt-release.s3.brave.com/ stable main"|sudo tee /etc/apt/sources.list.d/brave-browser-release.list<br />
sudo apt update<br />
sudo apt install brave-browser<br />
<br />
One annoying thing... the apt file searches all architectures and you'll get i386 errors, until you add "arch=amd64" like this:<br />
sudo emacs /etc/apt/sources.list.d/brave-browser-release.list <br />
deb [signed-by=/usr/share/keyrings/brave-browser-archive-keyring.gpg arch=amd64] https://brave-browser-apt-release.s3.brave.com/ stable main</div>Mhttps://bitpost.com/w/index.php?title=Matryoshka&diff=7452Matryoshka2024-01-22T13:09:49Z<p>M: </p>
<hr />
<div>Matryoshka is my primary docker container host.<br />
<br />
It also runs my only gitlab runner. So it is required to be running for gitlab jobs to run.<br />
<br />
== Jaws support ==<br />
Jaws is a RocketChat set of containers (rocketchat+mongo). Some images like Mongo 5.0 require more-advanced CPU capabilities than Proxmox grants by default. Specifically, Mongo 5.0 requires AVX cpu instructions. To enable them:<br />
Proxmox > VM > Edit > Processor > Type: "host"<br />
<br />
== History ==<br />
=== Grow disk from 70GB to 100GB ===<br />
So many containers... i grew the disk, using up almost ALL OF THE 1TB proxmox NVMe DRIVE!<br />
<br />
I really really really need to upsize that bastard!!! LIFE IS HARD!!<br />
<br />
=== Moving data from old jaws rocketchat VM to matryoshka docker container ===<br />
<br />
I moved mongo data in from old jaws VM:<br />
🦈 m@jaws [~] mongodump --host=localhost --db rocketchat<br />
🦈 m@jaws [~] scp -rp dump matryoshka:<br />
---<br />
🪆 m@matryoshka [~/apps/RocketChat] docker stop rocketchat-rocketchat-1<br />
🪆 m@matryoshka [~/apps/RocketChat] docker cp dump rocketchat-mongodb-1:/tmp/<br />
🪆 m@matryoshka [~/apps/RocketChat] docker exec rocketchat-mongodb-1 bash -c 'mongo rocketchat --eval "db.dropDatabase()"'<br />
🪆 m@matryoshka [~/apps/RocketChat] docker exec rocketchat-mongodb-1 bash -c 'mongorestore --db rocketchat /tmp/dump/rocketchat'<br />
🪆 m@matryoshka [~/apps/RocketChat] docker compose up -d<br />
<br />
It has a [https://github.com/RocketChat/Rocket.Chat/issues/26089#issuecomment-1172211607 bug]! had to do this...<br />
🪆 m@matryoshka [~/apps/RocketChat]<br />
docker exec -it rocketchat-mongodb-1 'mongo'<br />
use rocketchat<br />
db.rocketchat_settings.insert({ <br />
"_id" : "Rate_Limiter_Limit_RegisterUser",<br />
"createdAt" : ISODate("2022-07-01T05:32:48.791Z"),<br />
"value" : 1,<br />
"packageValue" : 1,<br />
"valueSource" : "packageValue",<br />
"secret" : false,<br />
"enterprise" : false,<br />
"i18nDescription" : "Rate_Limiter_Limit_RegisterUser_Description",<br />
"autocomplete" : true,<br />
"sorter" : 0,<br />
"ts" : ISODate("2022-07-01T05:32:48.791Z"),<br />
"type" : "int",<br />
"group" : "Rate Limiter",<br />
"section" : "Feature_Limiting",<br />
"enableQuery" : "{\"_id\":\"API_Enable_Rate_Limiter\",\"value\":true}",<br />
"i18nLabel" : "Rate_Limiter_Limit_RegisterUser",<br />
"hidden" : false,<br />
"blocked" : false,<br />
"requiredOnWizard" : false,<br />
"env" : false,<br />
"public" : false,<br />
"_updatedAt" : ISODate("2022-07-01T05:38:12.246Z")<br />
})<br />
exit<br />
<br />
docker compose down<br />
docker compose up -d</div>Mhttps://bitpost.com/w/index.php?title=Matryoshka&diff=7451Matryoshka2024-01-22T13:08:51Z<p>M: /* History */</p>
<hr />
<div>Matryoshka is a docker container host.<br />
<br />
It also runs my only gitlab runner. So it is required to be running for gitlab jobs to run.<br />
<br />
== Jaws support ==<br />
Jaws is a RocketChat set of containers (rocketchat+mongo). Some images like Mongo 5.0 require more-advanced CPU capabilities than Proxmox grants by default. Specifically, Mongo 5.0 requires AVX cpu instructions. To enable them:<br />
Proxmox > VM > Edit > Processor > Type: "host"<br />
<br />
== History ==<br />
=== Grow disk from 70GB to 100GB ===<br />
So many containers... i grew the disk, using up almost ALL OF THE 1TB proxmox NVMe DRIVE!<br />
<br />
I really really really need to upsize that bastard!!! LIFE IS HARD!!<br />
<br />
=== Moving data from old jaws rocketchat VM to matryoshka docker container ===<br />
<br />
I moved mongo data in from old jaws VM:<br />
🦈 m@jaws [~] mongodump --host=localhost --db rocketchat<br />
🦈 m@jaws [~] scp -rp dump matryoshka:<br />
---<br />
🪆 m@matryoshka [~/apps/RocketChat] docker stop rocketchat-rocketchat-1<br />
🪆 m@matryoshka [~/apps/RocketChat] docker cp dump rocketchat-mongodb-1:/tmp/<br />
🪆 m@matryoshka [~/apps/RocketChat] docker exec rocketchat-mongodb-1 bash -c 'mongo rocketchat --eval "db.dropDatabase()"'<br />
🪆 m@matryoshka [~/apps/RocketChat] docker exec rocketchat-mongodb-1 bash -c 'mongorestore --db rocketchat /tmp/dump/rocketchat'<br />
🪆 m@matryoshka [~/apps/RocketChat] docker compose up -d<br />
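<br />
A hedged sanity check after the restore ("users" is a standard RocketChat collection; compare the count against the old VM):<br />
🪆 m@matryoshka [~/apps/RocketChat] docker exec rocketchat-mongodb-1 bash -c 'mongo rocketchat --quiet --eval "db.users.count()"'<br />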
<br />
RocketChat has a [https://github.com/RocketChat/Rocket.Chat/issues/26089#issuecomment-1172211607 bug]! I had to insert this missing rate-limiter setting by hand...<br />
🪆 m@matryoshka [~/apps/RocketChat]<br />
docker exec -it rocketchat-mongodb-1 'mongo'<br />
use rocketchat<br />
db.rocketchat_settings.insert({ <br />
"_id" : "Rate_Limiter_Limit_RegisterUser",<br />
"createdAt" : ISODate("2022-07-01T05:32:48.791Z"),<br />
"value" : 1,<br />
"packageValue" : 1,<br />
"valueSource" : "packageValue",<br />
"secret" : false,<br />
"enterprise" : false,<br />
"i18nDescription" : "Rate_Limiter_Limit_RegisterUser_Description",<br />
"autocomplete" : true,<br />
"sorter" : 0,<br />
"ts" : ISODate("2022-07-01T05:32:48.791Z"),<br />
"type" : "int",<br />
"group" : "Rate Limiter",<br />
"section" : "Feature_Limiting",<br />
"enableQuery" : "{\"_id\":\"API_Enable_Rate_Limiter\",\"value\":true}",<br />
"i18nLabel" : "Rate_Limiter_Limit_RegisterUser",<br />
"hidden" : false,<br />
"blocked" : false,<br />
"requiredOnWizard" : false,<br />
"env" : false,<br />
"public" : false,<br />
"_updatedAt" : ISODate("2022-07-01T05:38:12.246Z")<br />
})<br />
exit<br />
<br />
docker compose down<br />
docker compose up -d</div>Mhttps://bitpost.com/w/index.php?title=TrueNAS&diff=7450TrueNAS2024-01-16T18:00:34Z<p>M: /* Set up alert emails */</p>
<hr />
<div>== Overview ==<br />
<br />
=== Pools ===<br />
<br />
TrueNAS provides storage via Pools. A pool is a bunch of raw drives gathered and managed as a set. Each of my pools is one of these types:<br />
{| class="wikitable"<br />
! Pool type<br />
! Description<br />
|-<br />
| style="color:red" |single drive<br />
|no TrueNAS advantage other than health checks<br />
|-<br />
| style="color:red" |raid1 pair<br />
|mirrored drives give normal write speed, fast reads, and single-fail redundancy, at the cost of half the storage potential<br />
|-<br />
| style="color:red" |raid0 pair<br />
|striped drives give fast writes, normal reads, no redundancy, and no storage cost<br />
|-<br />
| style="color:green" |raid of multiple drives<br />
|'''raidz''': optimization of read/write speed, redundancy, storage potential<br />
|}<br />
<br />
The three levels of raidz are:<br />
* raidz: one drive is consumed just for parity (no data storage, i.e. you only get (n-1) drives of storage total), and one drive can be lost without losing any data; fastest; very dangerous to recover from a lost drive (the "resilver" process is brutal on the remaining drives - don't wait)<br />
* raidz2: two drives for parity, two can be lost<br />
* raidz3: three drives for parity, three can be lost; slowest<br />
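<br />
Back-of-envelope usable capacity for n drives of size s under raidzK is (n-K)*s; a throwaway shell check (example numbers, not any particular pool of mine):<br />
n=7; s=8   # drive count and size in TB - example values<br />
for k in 1 2 3; do echo "raidz$k: $(( (n - k) * s ))TB usable of $(( n * s ))TB raw"; done<br />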
<br />
=== Datasets ===<br />
<br />
Every pool should have one child dataset. This is where we set the permissions, which is important for Samba access. We could have more than one child dataset, but I haven't had the need.<br />
<br />
==== Adding ====<br />
hive > Storage > Pools > mine (or any newly created pool) > Add Dataset<br />
<br />
Dataset settings:<br />
name #pool#-ds<br />
share type SMB<br />
<br />
Save, then continue...<br />
hive > Storage > Pools > mine (or any newly created pool) > mine-ds > Edit ACL<br />
user m<br />
group m<br />
ACL<br />
who everyone@<br />
type Allow<br />
Perm type Basic (NOTE: "Perm type Basic" is important!)<br />
Perm Full control (NOTE: this is not the default, you will need to change it)<br />
Flags type Basic<br />
Flags Inherit (NOTE: this is not the default, you will need to change it)<br />
(REMOVE on all other blocks)<br />
SAVE<br />
<br />
=== Windows SMB Shares ===<br />
<br />
Share each dataset as a Samba share under:<br />
Sharing > Windows Shares (SMB)<br />
<br />
* Use the pool name for the share name.<br />
* Use the same ACL as for the dataset.<br />
* Purpose: No presets<br />
<br />
WARNING I had to set these Auxiliary parameters in the SMB config so that symlinks would be followed.<br />
<br />
* Services > SMB > Actions > configuration > Auxiliary Parameters:<br />
unix extensions = no<br />
follow symlinks = yes<br />
wide links = yes<br />
* Stop and restart SMB service<br />
<br />
== Maintenance ==<br />
<br />
=== Burn in a new drive ===<br />
[https://www.truenas.com/community/resources/hard-drive-burn-in-testing.92/ ALWAYS do this] even though it's a PITA. Less pain than not doing it.<br />
<br />
I didn't do it for my 7-8TB-drive zraid. Murphy said FUCK YOU and one of the eight went bad. So... do the test, dumbass.<br />
<br />
But of course I found a way to stay lazy... TrueNAS has the ability to run SMART tests directly on a drive, so do it there. Or just wait for SMART failures to show up. God damn, laziness rules. Maybe. Fool.<br />
<br />
=== Regularly do SMART, scrub, resilver ===<br />
<br />
YOU MUST DO THIS REGULARLY!<br />
<br />
From [https://www.truenas.com/community/threads/self-healed-hard-drive.81138/ here]:<br />
A drive, vdev or pool is declared degraded if ZFS detects problems with the data. If you reboot, the error count is reset. A resilver will heal the data errors if there is sufficient redundancy. ZFS will only spot the data issues on read; that's why we have scrubs, a forced read of all the data to try and determine if there are any errors. So scheduled regular scrubs are important. This will not tell you why the data is corrupted; for that you have S.M.A.R.T tests, which you need to schedule as well, both long and short.<br />
<br />
To get a handle on the situation as it is, trigger a scrub and long SMART tests.<br />
<br />
Never do more than one of these at a time, and never do any of them during heavy disk usage (backups, eg).<br />
<br />
SMART can be done weekly (not too often or it will contribute to early wear-out of SSDs).<br />
<br />
Same for scrub.<br />
<br />
Resilver happens when a drive issue requires the data to be rebalanced or redistributed. Buckle up for this one!<br />
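<br />
If you'd rather kick off a SMART test or scrub by hand from a TrueNAS shell instead of the scheduler UI, something like this should work (device and pool names are examples):<br />
smartctl -t long /dev/ada0   # start a long self-test (FreeBSD device naming)<br />
smartctl -a /dev/ada0        # read the results once it finishes<br />
zpool scrub safe             # force-read everything in the pool<br />
zpool status -v safe         # watch scrub progress and errors<br />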
<br />
=== Pool speed check ===<br />
<br />
CAST to SAFE: ~114MB/s write (compressed) on 60MB/s network<br />
<br />
Do this to test raw write speed from anywhere on the LAN to the [safe] pool:<br />
dd if=/dev/zero of=/mnt/safe/safe-dd/speedtest.data bs=4M count=10000<br />
# on hive: ~42GB transferred in ~15sec at ~2.9GB/sec, WOW<br />
# on cast: 42GB copied in 371sec at 114MB/s - that seems in line with my network speed (see below)<br />
<br />
To test the network bandwidth limit:<br />
# on hive<br />
iperf -s -w 2m # run in server mode with a 2MB TCP window<br />
# on another LAN machine<br />
iperf -c hive -w 2m -t 30 -i 1 # 30-second test, report every second<br />
# on cast: 1.51 GB at 477Mbits/sec aka 60MB/sec<br />
# I have a 1Gb switch; I guess that's all we get out of it?<br />
<br />
=== Replace a bad disk in a raidz pool ===<br />
My 7-drive raidz arrays can only lose ONE drive before they go boom, so you MUST replace bad disks immediately. raidz2 tolerates two lost drives and raidz3 three, but for SSDs the lose-one-drive raidz level is, to me, a sweet spot.<br />
<br />
* Watch TrueNAS for '''CRITICAL''' alerts that indicate a drive is failing its self-tests.<br />
* Make note of its serial number.<br />
* Find the drive in the pool, make note of its drive id (not needed but no harm).<br />
* Change the pool drive status from FAULTED to OFFLINE<br />
Storage > Pools > badpool > triple-dot Status > baddrive > triple-dot-status > FAULTED to OFFLINE<br />
* Power down the whole fucking PROXMOX machine<br />
* Pull it, and swap out bad drive for good<br />
* Replace it<br />
Storage > Pools > badpool > baddrive > triple-dot-status > REPLACE<br />
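<br />
To double-check which physical drive you are about to pull, this shell sketch maps the FAULTED member to a device and serial (pool, gptid, and device names are placeholders):<br />
zpool status -v badpool                  # the FAULTED member shows up as a gptid<br />
glabel status                            # map that gptid to an adaN/daN device (FreeBSD/CORE)<br />
smartctl -i /dev/ada3 | grep -i serial   # confirm the serial before pulling<br />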
<br />
=== Remove a bad pool ===<br />
<br />
* Make note of which drives use the pool; likely some are bad and some are good and perhaps worth reusing elsewhere.<br />
* Disconnect SMB connections to the pool<br />
** Update valid shares in mh-setup-samba-shares<br />
** Rerun mh-setup-samba-shares everywhere (eventually anyway)<br />
** One possible easier way to get SMB disconnected from the pool is to stop SMB service in TrueNAS<br />
** Sadly, to get through this for my splat pool, I had to remove the pool, watch it fail, restart hive, then remove the pool again.<br />
* Pool > (gear) > Export/disconnect<br />
** [x] Delete configuration of shares that use this pool (to remove the associated SMB share)<br />
** [x] Destroy data on this pool (you MUST select this or the silly thing will attempt to export the data)<br />
<br />
=== Update TrueNAS ===<br />
Updating is baked into the UI, nice! And I have auto-updates enabled. So nice.<br />
<br />
These guys work hard on this, to make sure releases are well tested. Watch for alerts about newly available updates. Do not update past the current release!<br />
<br />
System > Update > [Train] (ensure you have a good one selected; on occasion, you'll want to CHANGE it to select a newer stable release!)<br />
Give the system a minute to load available updates...<br />
Press Download available updates > The modal will ask if you want to apply and restart > Say yes<br />
<br />
That's about it!<br />
<br />
== Configuration ==<br />
<br />
=== Set up user ===<br />
I set up m user (1000) and m group (1000)<br />
<br />
=== Set up alert emails ===<br />
Go to one of your google accounts to get an App password. It has to be an account that has 2FA turned on, bleh, so don't use moodboom@gmail.com. I went with abettersoftwaretrader@gmail.com.<br />
<br />
Accounts > Users > root > edit password > abettersoftwaretrader@gmail.com<br />
System > Email > from email > abettersoftwaretrader@gmail.com, smtp.gmail.com 465 Implicit SSL, SMTP auth: (email/API password)<br />
<br />
Then you can test it here:<br />
System > Email > (at bottom, next to Save...) Send Test Email<br />
<br />
=== Set up user ssh ===<br />
This was not fun.<br />
* Set up user<br />
* You have to set password ON and make sure to check [x] Allow sudo<br />
* Make sure to allow Samba Authentication for the m user that is used for Samba<br />
* Add public key to user<br />
* Create a valid folder on the /mnt NAS shares for the user's home; you can mkdir using samba; I created:<br />
/mnt/safe/safe-ds/software/apps/hive-home<br />
* set the user's home to that ^; turn off password auth<br />
* Turn on SSH service<br />
* System > SSH Keypairs > Add SSH keypair for main user m<br />
* System > SSH Connections > Add, use localhost, keypair from prev step<br />
It should work but it does not!<br />
* Open a TrueNas prompt via proxmox console<br />
* Go to the home dir, there should be an .ssh there now<br />
* Reduce permissions on both the HOME DIR (700) and the .ssh KEY (400); see the sketch after this list<br />
* Get a shell and run `sudo visudo` and add this line:<br />
m ALL=(ALL) NOPASSWD: ALL<br />
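<br />
A minimal sketch of those permission fixes, using the example home dir from above (the key filename is an assumption - use whichever file sshd complains about):<br />
chmod 700 /mnt/safe/safe-ds/software/apps/hive-home<br />
chmod 400 /mnt/safe/safe-ds/software/apps/hive-home/.ssh/authorized_keys<br />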
<br />
Finally! It works!<br />
<br />
== Troubleshooting ==<br />
<br />
SOME of my shares were throwing '''Permission Denied''' errors on mv. Solutions:<br />
* I applied permissions again, recursively, then restarted the SMB service on hive and the problem went away.<br />
* You can also always go to the melange hive console, request a shell, and things always seem to work from there (but you're in FreeBSD world and don't have any beauty scripts like mh-move-torrent!)</div>Mhttps://bitpost.com/w/index.php?title=GitLab&diff=7449GitLab2024-01-16T00:32:02Z<p>M: /* Configure */</p>
<hr />
<div>Yes it's Ruby. And Go. Yes it's time for yet another shitty "solution to everything" framework... sigh...<br />
<br />
GLAM hosts the GitLab website.<br />
<br />
MATRYOSHKA hosts the gitlab-runner that performs GitLab jobs.<br />
<br />
== Configure ==<br />
<br />
* [https://shitcutter.com admin console]<br />
<br />
* [https://docs.gitlab.com/ee/ docs]<br />
<br />
* To turn on/off registration:<br />
Admin > Settings > General > Signup restrictions<br />
<br />
=== Server ===<br />
<br />
See the alias list on glam for a few gitlab commands available as shortcuts.<br />
<br />
* Much of the server configuration (eg SMTP) is in this file:<br />
👠 m@glam [~] sudo emacs /etc/gitlab/gitlab.rb<br />
Change it, then reload it:<br />
👠 m@glam [~] sudo gitlab-ctl reconfigure<br />
<br />
* tail gitlab log<br />
sudo tail -f /var/log/gitlab/gitlab-rails/production_json.log<br />
<br />
* tail gitlab nginx<br />
sudo tail -f /var/log/gitlab/nginx/gitlab_access.log<br />
<br />
* service<br />
sudo gitlab-ctl # to see commands<br />
sudo gitlab-ctl restart nginx<br />
sudo gitlab-ctl restart<br />
ok: run: alertmanager: (pid 463302) 1s<br />
ok: run: gitaly: (pid 463311) 0s<br />
ok: run: gitlab-exporter: (pid 463336) 0s<br />
ok: run: gitlab-workhorse: (pid 463338) 0s<br />
ok: run: grafana: (pid 463351) 1s<br />
ok: run: logrotate: (pid 463440) 0s<br />
ok: run: nginx: (pid 463446) 1s<br />
ok: run: node-exporter: (pid 463454) 0s<br />
ok: run: postgres-exporter: (pid 463461) 1s<br />
ok: run: postgresql: (pid 463475) 0s<br />
ok: run: prometheus: (pid 463484) 0s<br />
ok: run: puma: (pid 463499) 0s<br />
ok: run: redis: (pid 463504) 0s<br />
ok: run: redis-exporter: (pid 463510) 1s<br />
ok: run: sidekiq: (pid 463519) 0s<br />
sudo gitlab-ctl stop<br />
sudo gitlab-ctl tail<br />
<br />
* to get to a rails console:<br />
sudo gitlab-rails console<br />
<br />
From there, you can do things like send a test email:<br />
irb(main):010:0><br />
irb(main):010:0> Notify.test_email('m@bitpost.com', 'Message Subject', 'Message Body').deliver_now<br />
<br />
==== Push Notifications ====<br />
* I have email working. Each user can decide when they want to receive email notifications on events, by group and project.<br />
* Consider coupling with RocketChat, see [https://stackoverflow.com/questions/51788306/push-notifications-to-rocket-chat-in-gitlab-ci/51794086 here]<br />
<br />
=== Runner ===<br />
* to work with runners, use gitlab-runner cmd, eg:<br />
gitlab-runner list<br />
sudo gitlab-runner status<br />
<br />
=== Upgrade ===<br />
<br />
Like so many other software packages, they are totally lazy and don't support version jumping. Check what you need to do [https://docs.gitlab.com/ee/update/index.html here].<br />
<br />
==== 14.9 to 15 ====<br />
It is puking going from 14.9.3 to 15, even though it is supposedly supported. [https://dev.to/konung/upgrading-to-gitlab-150-ce-from-gitlab-1493-if5 This] helped.<br />
sudo apt upgrade -y gitlab-ce=14.10.0-ce.0<br />
Configuration backup archive complete: /etc/gitlab/config_backup/gitlab_config_1656455183_2022_06_28.tar<br />
<br />
Now you can jump to 15.0. What fun.<br />
sudo apt upgrade -y gitlab-ce=15.0.0-ce.0<br />
<br />
And finally, to 15.1, the latest as of 2022/07.<br />
sudo apt upgrade -y gitlab-ce<br />
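<br />
To see which intermediate versions apt can actually step through (handy before planning the jump; standard apt, nothing GitLab-specific):<br />
apt-cache madison gitlab-ce | head -20<br />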
<br />
I pledge to NEVER EVER be this lazy with any software I release. It's just. Sad.<br />
<br />
=== Install ===<br />
* set up shitcutter.com in domains.google.com and certbot<br />
<br />
* Set up haproxy redirection; see haproxy.cfg for details. Note that you will be redirecting shitcutter.com https to glam:8095 http.<br />
<br />
* [https://computingforgeeks.com/how-to-install-gitlab-ce-on-ubuntu-linux/ Install] up to the point where you configure - basics:<br />
curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash<br />
sudo apt update<br />
sudo apt -y install gitlab-ce<br />
<br />
* You SHOULD IMMEDIATELY INSTALL the SAME VERSION of gitlab-runner but on a different machine - pita - see details below...<br />
<br />
* configure; see MDM comments<br />
sudo emacs /etc/gitlab/gitlab.rb<br />
# set external_url, registry_external_url (to enable docker registry), nginx port, etc.<br />
<br />
* build<br />
sudo gitlab-ctl reconfigure # whoa, this builds/bakes everything<br />
<br />
* fix root pw RIGHT AWAY<br />
sudo gitlab-rake "gitlab:password:reset[root]"<br />
<br />
* browse to [https://shitcutter.com admin console] and get configuring; for now, turn off sign-up (if anyone wants in later, we can turn it on as it has admin approval turned on)<br />
<br />
==== SSH ====<br />
First, each user needs to add their [.ssh/id_ed25519.pub] public key to their GitLab profile to use git with GitLab.<br />
<br />
Once you add your [.ssh/id_ed25519.pub] key to your GitLab profile, this is the test to make sure GitLab has your ssh key:<br />
ssh -T git@shitcutter.com<br />
<br />
Being able to ssh in this specific way is essential to host code. If you have any problems, debug it!<br />
[glam] sudo tail -f /var/log/auth.log<br />
---<br />
[client] ssh -vvv git@shitcutter.com<br />
<br />
WARNING: It took me a while to realize THERE'S NO DIRECT SSH PATHWAY to my GitLab host machine (shitcutter.com), as it's on proxmox VM glam. I had to update [.ssh/config] to use bitpost.com as a jump server to get to glam from shitcutter.com ssh requests, like I do with morosoph. NICE!<br />
# Allow shitcutter-via-bitpost for gitlab<br />
Host shitcutter.com sc shit<br />
ProxyCommand ssh -q bitpost.com nc -q0 glam 22<br />
<br />
The next problem was that on glam, because I had set git up previously, the git user was "locked" (it had a password). Fix:<br />
sudo passwd -d git<br />
<br />
Next, I needed to add git to ssh AllowUsers. Done in the common file, so this should be good into the future.<br />
sudo emacs ~/develop/config/common/etc/ssh/sshd_config<br />
sudo service sshd restart<br />
<br />
And FINALLY, it works:<br />
ssh -T git@shitcutter.com<br />
Welcome to GitLab, @moodboom!<br />
<br />
==== SMTP ====<br />
<br />
See /etc/gitlab/gitlab.rb<br />
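<br />
The relevant block looks roughly like this - a sketch using the standard Omnibus settings with placeholder values, not my actual config:<br />
# /etc/gitlab/gitlab.rb - then run: sudo gitlab-ctl reconfigure<br />
gitlab_rails['smtp_enable'] = true<br />
gitlab_rails['smtp_address'] = "smtp.example.com"<br />
gitlab_rails['smtp_port'] = 465<br />
gitlab_rails['smtp_user_name'] = "gitlab@example.com"<br />
gitlab_rails['smtp_password'] = "app-password"<br />
gitlab_rails['smtp_tls'] = true # implicit TLS on 465<br />
gitlab_rails['gitlab_email_from'] = "gitlab@example.com"<br />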
<br />
==== Runners ====<br />
You have to install and configure runners to actually perform jobs and CI. "Don't run them on the same host as GitLab". "You must ensure your GitLab and Runner versions match". Wtf. Pita. Whatevs.<br />
<br />
* Follow [https://docs.gitlab.com/runner/install/linux-repository.html instructions] to install the latest runner via apt.<br />
[matryoshka]<br />
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash<br />
sudo apt install gitlab-runner<br />
<br />
NOTE it seems Debian bullseye (11) repo is out, but empty. You can use the Debian buster (10) repo on 11, which is reported to work fine:<br />
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo os=debian dist=10 bash<br />
<br />
* Then follow the somewhat byzantine instructions in GitLab, provided on the Admin > Runners page.<br />
<br />
WARNING you have to do this with your specially-provided unique token... and you have to provide a name... and all this executor shit...<br />
<br />
Command to register a runner:<br />
[matryoshka] sudo gitlab-runner register --url https://shitcutter.com/ --registration-token ##########<br />
<br />
enter the executor: docker<br />
enter the gitlab-ci description: glam gitlab runner ("glam" was default, maybe should have used that...)<br />
enter the gitlab-ci tags: (none)<br />
enter the default Docker image: node:17-slim</div>Mhttps://bitpost.com/w/index.php?title=Emacs&diff=7446Emacs2024-01-16T00:26:06Z<p>M: /* Configuration */</p>
<hr />
<div>== Intro ==<br />
Emacs is great and sucks too. NEVER USE IT when you have a GUI!<br />
<br />
== Installation ==<br />
Fuck the emacs operating system, we just want a terminal editor. Always install it without X support:<br />
sudo apt install emacs-nox<br />
<br />
== Configuration ==<br />
* Use CUA mode (sketch after this list)<br />
* This common config should work well in both terminal and UI:<br />
/home/m/development/config/common/home/m/.emacs<br />
NOTE that you need some other things to be configured properly:<br />
* the terminal must use a light blue background since it will be carried over into [emacs -nw]<br />
* the terminal must have 256-color support; set this in .bashrc:<br />
export TERM=xterm-256color<br />
* Make sure you check out undo support via [ctrl-x u]<br />
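For reference, the CUA piece of such a config is just this (a sketch, not the actual contents of my .emacs):<br />
;; enable CUA keys (C-z undo, C-x cut, C-c copy, C-v paste)<br />
(cua-mode t)<br />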
=== Colors ===<br />
God the entire command line world is stuck in the 60s... hopefully the terminal colors stick when editing, but sometimes (RUBY cough cough) emacs tries to "help" with garish bullshit colors that prevent reading code... try this if you're desperate...<br />
Alt-X set-background-color [Enter] #111111 [Enter]<br />
<br />
== GPG ==<br />
Emacs can auto-en/decrypt GPG files during load/save, which is EXCELLENT since there is never plaintext on disk.<br />
* You should have gpg installed by default, but apt install if needed. Create a strong key with passphrase; see [[gpg]].<br />
* Configure emacs to prompt for PIN, otherwise gpg-agent prompt and emacs interfere with each other and lock out keyboard:<br />
emacs ~/.emacs.d/init.el<br />
(setq epa-pinentry-mode 'loopback)<br />
* Use files with a .gpg extension to inform emacs to attempt decryption.<br />
* To create a newly encrypted file, just tell emacs to open it, and select your gpg key as encryption target (hit M on it, then hit Enter on the OK "button"). When you save, it will be encrypted.</div>Mhttps://bitpost.com/w/index.php?title=Kitty&diff=7442Kitty2024-01-15T19:07:43Z<p>M: </p>
<hr />
<div>OpenGL hardware-accelerated terminal that is SIGNIFICANTLY, IMPORTANTLY faster than gnome-terminal on abtdev1 via SPICE.<br />
<br />
== Config ==<br />
All config (colors, keyboard, much more...) is here, "self-documented":<br />
development/config/common/home/m/.config/kitty/kitty.conf<br />
=== ssh terminfo ===<br />
When ssh'ing to a new machine, it will not "know" kitty by default.<br />
<br />
When you can use ssh directly, use a kitten:<br />
kitty +kitten ssh newplace<br />
Or, to teach an ubuntu machine about it:<br />
sudo apt install kitty-terminfo<br />
Or, go manual by following [https://sw.kovidgoyal.net/kitty/kittens/ssh/#manual-terminfo-copy this].<br />
infocmp -a xterm-kitty | ssh myserver tic -x -o \~/.terminfo /dev/stdin<br />
<br />
Now LOG OUT, log back in, and enjoy the speed of kitty.<br />
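A quick sanity check on the remote side, if you want proof:<br />
 echo $TERM                    # should print xterm-kitty<br />
 infocmp xterm-kitty > /dev/null && echo terminfo OK<br />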
<br />
=== Colors ===<br />
Right now there are two solarized color blocks in the config, one for dark, one for light. Red needs some work!<br />
<br />
To vet that they look good you can use:<br />
colortest-16<br />
Emacs seems to blow out its own shitty colors, eg for ruby comments in RED and quit-without-save warnings in BLUE. Deal with it for now.<br />
<br />
== Install ==<br />
* For now, I just install via ubuntu apt<br />
sudo apt install kitty<br />
It's an older version, and that sucks, as it is missing things like `kitty +kitten themes`. But latest is baked into Ubuntu 24.04 so maybe I go with that soon...<br />
<br />
* Add ctrl-shift-t shortcut in i3 config, starting it with correct path and using -1 (so one daemon is used for all shells, nice and fast).</div>Mhttps://bitpost.com/w/index.php?title=Brave&diff=7441Brave2024-01-02T15:52:56Z<p>M: /* Settings */</p>
<hr />
<div>== Configure ==<br />
<br />
=== Settings ===<br />
Immediately change these Settings on any new profile:<br />
* Get started > Profile name and icon > customize it!<br />
* Get started > New Tab Page > shows > Homepage<br />
* Appearance > Toolbar > Show home button ON, set to bitpost.com<br />
* Appearance > Toolbar > Brave Rewards, Brave Wallet, Sidebar > OFF<br />
* Appearance > Toolbar > Show wide address bar, show full urls > ON<br />
* Appearance > Sidebar > Show: never<br />
* (NO) Appearance > Content > (very bottom) Cycle through the most recently used tabs with Ctrl-Tab<br />
* Sync > I have a chain > paste, toggle Sync everything ON<br />
<br />
TODO: consider playing with vertical tabs! It's a mindfuck tho...<br />
<br />
=== Profiles ===<br />
Profiles in Brave are AWESOME:<br />
* Brave seems to remember what desktop the windows were on!!! FINALLY<br />
* For each profile, you get a name and customizable icon in the top-right of the toolbar, SO NICE<br />
* Bookmarks etc. are all profile-specific, as expected<br />
* You can sync a profile across machines (see #Sync)<br />
<br />
==== Finding Profile folder ====<br />
You can start brave with a specific profile, like this:<br />
brave-browser --profile-directory='Profile 4'&<br />
This is a bit dicey as the folder you must provide does not actually match the profile name! I used this bit of ugliness to get a map of name to folder:<br />
<br />
💉 m@cast [~/.config/BraveSoftware/Brave-Browser] grep -Po "\"name\":\"[^\"]*?\",\"(pa|usi)" * -R<br />
Default/Preferences:"name":"moodboom","pa<br />
Profile 4/Preferences:"name":"es-1","pa<br />
Profile 5/Preferences:"name":"mdmdev-1","pa<br />
Profile 6/Preferences:"name":"Profile 2","pa<br />
Profile 7/Preferences:"name":"ig-1","usi<br />
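If jq is installed, the same map can be pulled out more cleanly (assuming Brave keeps the name at .profile.name in each Preferences file, like Chromium):<br />
 cd ~/.config/BraveSoftware/Brave-Browser<br />
 for p in */Preferences; do echo "${p%/Preferences}: $(jq -r .profile.name "$p")"; done<br />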
<br />
==== Assign VSCode Launch to (re)use a Brave profile ====<br />
When vscode starts its Javascript debugger, it uses a custom "user-data-dir" within its own config folder. This means you start with a clean profile upon first use. However, (after MUCH trial-and-horror), you can specify that it uses a precise profile within this custom user-data-dir.<br />
<br />
If you use a launch.json stanza like this, vscode will always use that specific profile. You can then start customizing the profile, and your changes will stick across debugging sessions. Awesome!<br />
"configurations": [<br />
{<br />
"name": "Launch Brave",<br />
"runtimeExecutable": "/usr/bin/brave-browser",<br />
 // NOTE: This directory will be within .config/Code/User; specifying it allows reuse of a specific Profile there, nice.<br />
"runtimeArgs": [ "--profile-directory=\"Profile 4\"" ],<br />
"type": "chrome",<br />
"request": "launch",<br />
"timeout": 5000,<br />
"url": "http://localhost:8008",<br />
"webRoot": "${workspaceFolder}/src"<br />
}<br />
]<br />
<br />
That should be all you need to get usage of a consistent profile across sessions. But... say you want to sync it across development environments... why not! Read on...<br />
<br />
===== More Details =====<br />
Again, we dig to get a map of profile names to folders. First, run the debugger, and look for the process and params:<br />
 ps ax | grep -Po 'brave.*config/Code/User/workspaceStorage/.*?profile'<br />
From that, you can extract a user-data-dir:<br />
--user-data-dir=/home/m/development/config/common/home/m/.config/Code/User/workspaceStorage/436f0a4c50203656e625d634df9e0ca9/ms-vscode.js-debug/.profile<br />
NOTE that it seems that each vscode project has its own workspaceStorage! Which is pretty crazy. That means every dev project can have its own set of profiles.<br />
<br />
From the user-data-dir folder, you can get a map of profile name to folder:<br />
cd #your-user-data-dir#<br />
grep -Po "\"name\":\"[^\"]*?\",\"(pa|usi)" * -R<br />
"Profile 4"/Preferences:"name":"es-2","usi<br />
<br />
Now that you know what is where.... Just like with "normal" profiles, you can sync them across machines.<br />
<br />
=== Set up sync ===<br />
Do this for each profile:<br />
* Settings > Sync > Start a new Sync Chain > Computer > copy Sync Chain Code > save in private<br />
* Make sure Sync everything is toggled ON<br />
* You can now paste this chain into another computer to sync this profile there<br />
<br />
== [https://brave.com/linux/ Install] ==<br />
Copy and paste this all in one shot!<br />
sudo curl -fsSLo /usr/share/keyrings/brave-browser-archive-keyring.gpg https://brave-browser-apt-release.s3.brave.com/brave-browser-archive-keyring.gpg<br />
echo "deb [signed-by=/usr/share/keyrings/brave-browser-archive-keyring.gpg] https://brave-browser-apt-release.s3.brave.com/ stable main"|sudo tee /etc/apt/sources.list.d/brave-browser-release.list<br />
sudo apt update<br />
sudo apt install brave-browser</div>Mhttps://bitpost.com/w/index.php?title=Kitty&diff=7440Kitty2023-12-31T20:44:15Z<p>M: /* ssh terminfo */</p>
<hr />
<div>OpenGL hardware-accelerated terminal that is SIGNIFICANTLY, IMPORTANTLY faster than gnome-terminal on abtdev1 via SPICE.<br />
<br />
== Config ==<br />
All config (colors, keyboard, much more...) is here, "self-documented":<br />
development/config/common/home/m/.config/kitty/kitty.conf<br />
=== ssh terminfo ===<br />
When ssh'ing to a new machine, it will not "know" kitty by default.<br />
<br />
When you can use ssh directly, use a kitten:<br />
kitty +kitten ssh newplace<br />
Or, to teach an ubuntu machine about it:<br />
sudo apt install kitty-terminfo<br />
Or, go manual by following [https://sw.kovidgoyal.net/kitty/kittens/ssh/#manual-terminfo-copy this].<br />
infocmp -a xterm-kitty | ssh myserver tic -x -o \~/.terminfo /dev/stdin<br />
<br />
Now LOG OUT, log back in, and enjoy the speed of kitty.<br />
<br />
=== Colors ===<br />
Right now there are two solarized color blocks in the config, one for dark, one for light. Red needs some work!<br />
<br />
== Install ==<br />
* For now, I just install via ubuntu apt<br />
sudo apt install kitty<br />
* Add ctrl-shift-t shortcut in i3 config, starting it with correct path and using -1 (so one daemon is used for all shells, nice and fast).</div>Mhttps://bitpost.com/w/index.php?title=Kitty&diff=7439Kitty2023-12-31T20:43:58Z<p>M: /* ssh terminfo */</p>
<hr />
<div>OpenGL hardware-accelerated terminal that is SIGNIFICANTLY, IMPORTANTLY faster than gnome-terminal on abtdev1 via SPICE.<br />
<br />
== Config ==<br />
All config (colors, keyboard, much more...) is here, "self-documented":<br />
development/config/common/home/m/.config/kitty/kitty.conf<br />
=== ssh terminfo ===<br />
When ssh'ing to a new machine, it will not "know" kitty by default.<br />
<br />
When you can use ssh directly, use a kitten:<br />
 kitty +kitten ssh newplace<br />
Or, to teach an ubuntu machine about it:<br />
sudo apt install kitty-terminfo<br />
Or, go manual by following [https://sw.kovidgoyal.net/kitty/kittens/ssh/#manual-terminfo-copy this].<br />
infocmp -a xterm-kitty | ssh myserver tic -x -o \~/.terminfo /dev/stdin<br />
<br />
Now LOG OUT, log back in, and enjoy the speed of kitty.<br />
<br />
=== Colors ===<br />
Right now there are two solarized color blocks in the config, one for dark, one for light. Red needs some work!<br />
<br />
== Install ==<br />
* For now, I just install via ubuntu apt<br />
sudo apt install kitty<br />
* Add ctrl-shift-t shortcut in i3 config, starting it with correct path and using -1 (so one daemon is used for all shells, nice and fast).</div>Mhttps://bitpost.com/w/index.php?title=Eslint&diff=7438Eslint2023-12-29T17:53:33Z<p>M: /* also */</p>
<hr />
<div>==== Intro ====<br />
eslint is a Javascript linter and format-enforcer. Love it or die. I'm following Google's rules, with allowance for longer lines. My job uses a very strict set of rules.<br />
==== Installation ====<br />
Do NOT USE APT; it is available and seems to work, but this is a node module. Don't be stupid, use npm. Note the mind-bogglingly-insane number of dependencies. Node world, you scare me...<br />
* Install eslint node module<br />
# From anywhere...<br />
npm install -g eslint<br />
# Then from your project...<br />
npm install --save-dev eslint-config-google<br />
* Configure eslint (see below)<br />
* Install the eslint vscode extension.<br />
** Get the eslint extension<br />
** Restart vscode and open a project with JS files.<br />
** An eslint dialog should pop up, click Allow. Should be all you need.<br />
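Once installed, you can also lint from the command line - handy for a quick sweep (paths here are examples):<br />
 npx eslint src/<br />
 npx eslint src/ --fix   # auto-fix what it can<br />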
<br />
===== Custom formatting =====<br />
First do [https://www.digitalocean.com/community/tutorials/linting-and-formatting-with-eslint-in-vs-code Per-project setup].<br />
<br />
Then you can edit this in your project:<br />
.eslintrc.json<br />
<br />
==== Configuration ====<br />
If you have an existing .eslintrc.js that works well in another project, JUST COPY IT to the root of the new vscode folder-based project. Here's a good example:<br />
module.exports = {<br />
'env': {<br />
'browser': true,<br />
'es2020': true,<br />
},<br />
// this is insane: 'extends': ['eslint:recommended', 'google'],<br />
'extends': ['google'],<br />
'parserOptions': {<br />
'ecmaVersion': 11,<br />
'sourceType': 'module',<br />
},<br />
'rules': {<br />
// MDM they don't call me [Michael 4k] for nothing. I have to save SOME lint dignity. Is this too much to ask?<br />
'max-len': [1, {'code': 120}],<br />
},<br />
};<br />
Otherwise, use one of these to step you through it. Use the google format as a baseline:<br />
* Run the vscode command EsLint: Create EsLint configuration<br />
* Run the [eslint --init] command from the root of your project<br />
<br />
==== eslint rules ====<br />
eslint rules are generally ridiculously aggressive, truly for the OCD. But you get used to it, for better or worse. There are some pretty stupid ones, eg:<br />
* [https://eslint.org/docs/rules/no-mixed-operators No mixed operators]! Durrr... learn to program.<br />
* No trailing spaces, even in comments<br />
* String literals must use single quotes; JSX strings must use double quotes.<br />
I'm not even sure which rule is causing some of those restrictions. It gets complicated. So don't go too crazy.</div>Mhttps://bitpost.com/w/index.php?title=Eslint&diff=7437Eslint2023-12-29T17:52:45Z<p>M: /* Installation */</p>
<hr />
<div>==== Intro ====<br />
eslint is a Javascript linter and format-enforcer. Love it or die. I'm following Google's rules, with allowance for longer lines. My job uses a very strict set of rules.<br />
==== Installation ====<br />
Do NOT USE APT; it is available and seems to work, but this is a node module. Don't be stupid, use npm. Note the mind-bogglingly-insane number of dependencies. Node world, you scare me...<br />
* Install eslint node module<br />
# From anywhere...<br />
npm install -g eslint<br />
# Then from your project...<br />
npm install --save-dev eslint-config-google<br />
* Configure eslint (see below)<br />
* Install the eslint vscode extension.<br />
** Get the eslint extension<br />
** Restart vscode and open a project with JS files.<br />
** An eslint dialog should pop up, click Allow. Should be all you need.<br />
<br />
===== also =====<br />
[https://www.digitalocean.com/community/tutorials/linting-and-formatting-with-eslint-in-vs-code Per-project setup]<br />
<br />
==== Configuration ====<br />
If you have an existing .eslintrc.js that works well in another project, JUST COPY IT to the root of the new vscode folder-based project. Here's a good example:<br />
module.exports = {<br />
'env': {<br />
'browser': true,<br />
'es2020': true,<br />
},<br />
// this is insane: 'extends': ['eslint:recommended', 'google'],<br />
'extends': ['google'],<br />
'parserOptions': {<br />
'ecmaVersion': 11,<br />
'sourceType': 'module',<br />
},<br />
'rules': {<br />
// MDM they don't call me [Michael 4k] for nothing. I have to save SOME lint dignity. Is this too much to ask?<br />
'max-len': [1, {'code': 120}],<br />
},<br />
};<br />
Otherwise, use one of these to step you through it. Use the google format as a baseline:<br />
* Run the vscode command EsLint: Create EsLint configuration<br />
* Run the [eslint --init] command from the root of your project<br />
<br />
==== eslint rules ====<br />
eslint rules are generally ridiculously aggressive, truly for the OCD. But you get used to it, for better or worse. There are some pretty stupid ones, eg:<br />
* [https://eslint.org/docs/rules/no-mixed-operators No mixed operators]! Durrr... learn to program.<br />
* No trailing spaces, even in comments<br />
* String literals must use single quotes; JSX strings must use double quotes.<br />
I'm not even sure which rule is causing some of those restrictions. It gets complicated. So don't go too crazy.</div>Mhttps://bitpost.com/w/index.php?title=Docker&diff=7436Docker2023-12-27T20:46:56Z<p>M: /* See files in an image */</p>
<hr />
<div>Thanks Keith for the intro!<br />
<br />
Keith: Alpine is a stripped down linux distro. Need to learn about how to handle persistent volumes, container secrets (don't put in container, but it can prompt for things). Dockerfile -v (volume). Container should output to stdin/out, then host can manage logging. Terraform can build your arch (can use a proxmox template), ansible is great for actual tasks. GCP has managed kubernetes (wait until you understand why you need it). Check out hashicorp vault FOSS version for awesome secret storage that is docker-compatible.<br />
<br />
=== Maintenance ===<br />
<br />
==== Restart on reboot ====<br />
If you are using <code>docker compose</code>, you should add this to your containers in compose.yml:<br />
restart: always<br />
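For example, a minimal compose.yml stanza (service and image names are placeholders):<br />
 services:<br />
   web:<br />
     image: nginx<br />
     restart: always<br />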
<br />
To restart a single container on reboot, once it is running, update its config:<br />
docker update --restart unless-stopped container_id<br />
<br />
==== Prune regularly ====<br />
ALWAYS prune your host's containers and images! Or docker will eat your drive alive.<br />
Do this in crontab:<br />
0 3 * * * docker container prune -f && docker image prune -f<br />
On occasion you may also need to clean up strays, with this super-prune:<br />
docker system prune --all<br />
It will remove all unused images, not just dangling ones. Make sure the ones you want to keep are running! But DO THIS whenever you've been dicking around for a while, you're sure to have splorched some schtumm!<br />
<br />
Also don't forget to [[Systemd#Log_limit|prune your system log]].<br />
<br />
=== Commands ===<br />
docker build -t name . # builds an image from curr dir Dockerfile<br />
docker images # lists images<br />
docker run --name cont-name image # to create and start a container from an image, which you can then stop and start<br />
# -it to run in a terminal, then Ctrl-C to stop it; use -d to run detached<br />
docker logs --follow cont-name # tail container logging<br />
docker logs -f cont --tail 100 # tail but clip log - REQUIRED on long-running containers!<br />
docker ps # to see what containers are running<br />
docker ps -a # to see what containers are running (including recently stopped containers)<br />
docker start|stop name # to start/stop a container<br />
docker exec -it cont-name cmd # to run cmd on running container<br />
docker exec -it cont-name bash # get bash prompt on running container<br />
docker exec -u root -it cont-name /bin/bash # to run bash as root (so eg you can `apt install ...`)<br />
docker exec -u 0 -it cont-name bash # similar to ^<br />
docker cp ./myfile mycont:/dest # copy file into container<br />
docker cp mycont:/src /home/ # copy file out of container<br />
docker cp $(docker create --rm ${imageBaseUrl}):/image/path/files /local/path # copy image files<br />
docker rm name # to remove a stopped container<br />
docker container prune # to remove all stopped containers<br />
docker images # lists images<br />
docker rmi REPOSITORY/TAG # to remove an image<br />
docker image prune # remove all dangling images<br />
<br />
docker push|pull # push to / pull from hub.docker.com (for subsequent pull elsewhere!)<br />
<br />
==== See files in an image ====<br />
You should REALLY peek at images before blindly running containers. Stupid docker doesn't make this OOTB-easy, but there's [https://stackoverflow.com/a/53481010/717274 always a way].<br />
docker create --name="tmp_$$" image:tag ls<br />
docker export tmp_$$ | tar t<br />
docker rm tmp_$$<br />
<br />
Or to just fucking untar them all:<br />
docker save nginx > nginx.tar<br />
tar -xvf nginx.tar<br />
<br />
==== Pretty ps ====<br />
Use this to show containers in a nice format (you can also add this as default, in ~/.docker/config.json):<br />
docker ps -a --format 'table {{.ID}}\t{{.Status}} \t{{.Names}}\t{{.Command}}'<br />
docker ps -a --format 'table {{.ID}}\t{{.Status}} \t{{.Names}}\t{{.Command}}' | grep #mycontainer#<br />
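To make that format the default, drop it into ~/.docker/config.json (psFormat is the documented key; note the escaped tabs):<br />
 {<br />
   "psFormat": "table {{.ID}}\\t{{.Status}} \\t{{.Names}}\\t{{.Command}}"<br />
 }<br />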
<br />
==== Restart container with new image ====<br />
<br />
This is best practice, especially for large containers that are hosted at another location. It removes the image retrieval time from the overall container downtime:<br />
docker pull mysql<br />
docker stop my-mysql-container<br />
docker rm my-mysql-container<br />
docker run --name=my-mysql-container --restart=always ...<br />
<br />
=== Containers ===<br />
<br />
Find nirvana [https://hub.docker.com/search?type=image here.]<br />
<br />
==== Debian slim ====<br />
<br />
Debian slim containers are much smaller than standard installs. They are stripped of things like documentation, while still providing a full glibc userland and C++ stack (the kernel always comes from the host, as with any container).<br />
<br />
You can use apt to bake in what you need from there. Nice!<br />
<br />
==== Node ====<br />
<br />
The official node container is huge (1GB), the alpine one is relatively tiny. See the list [https://hub.docker.com/_/node here.]<br />
<br />
==== alpine ====<br />
Alpine is the best TINY base linux container. But it runs BusyBox and musl so many things (nvm, meteor) won't work (at least without a TON of effort).<br />
<br />
===== Node on alpine =====<br />
<br />
Here's a good starting point for a node app, but remember meteor won't work:<br />
<br />
<pre><br />
FROM alpine/git<br />
RUN apk --update add curl bash tar sudo npm <br />
SHELL ["/bin/bash", "-c"]<br />
<br />
ENV NEWUSER='m'<br />
RUN adduser -g "$NEWUSER" -D -s /bin/bash $NEWUSER \<br />
&& echo "$NEWUSER ALL=(ALL) ALL" > /etc/sudoers.d/$NEWUSER && chmod 0440 /etc/sudoers.d/$NEWUSER<br />
<br />
USER m<br />
WORKDIR /home/m<br />
<br />
COPY --chown=m my-code /home/m/my-code<br />
<br />
RUN npm install -g whatevah<br />
<br />
EXPOSE 3000<br />
CMD [ "my_app", "param1" ]<br />
</pre><br />
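To try it out (the image tag is a placeholder, and the port mapping matches the EXPOSE above):<br />
 docker build -t my-node-app .<br />
 docker run -it --rm -p 3000:3000 my-node-app<br />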
<br />
=== More examples ===<br />
* Example dockerfile for [https://hub.docker.com/r/linuxserver/nextcloud nextcloud]<br />
<br />
==== LetsEncrypt SSL certificate generator ====<br />
docker pull zerossl/client<br />
# well that experiment went to shit... we tried to add a TXT domain record but it wasn't found<br />
# tom thought we needed a full resolving A record before TXT would work<br />
# either way, we can use a self-signed cert with gitlab and forego the constant need to renew<br />
<br />
=== Networking ===<br />
<br />
Bridge networking (the default) allows connections between containers running on the same docker host.<br />
<br />
docker network create my-nw # defaults to --driver bridge<br />
docker run (...) --network my-nw (...) # to create and start a container on the network<br />
docker network connect my-nw container-name # to attach a container to the network after it is started<br />
docker network inspect my-nw<br />
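A quick way to prove two containers can reach each other by name on the bridge (container names are arbitrary):<br />
 docker run -d --name web --network my-nw nginx<br />
 docker run --rm --network my-nw busybox wget -qO- http://web | head -1<br />
 docker rm -f web<br />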
<br />
=== Install ===<br />
<br />
==== [https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository Install docker with apt] ====<br />
<br />
This now includes docker compose. It should be all you need. I had to do this shit that is fucking always a problem and never documented - FU.<br />
* add yourself to docker group<br />
sudo usermod -aG docker ${USER}<br />
* fix the GOD DAMNED socket permission. assholes...<br />
sudo chmod 666 /var/run/docker.sock<br />
<br />
==== Proxmox CPU config ====<br />
<br />
Some images (like Mongo 5.0, which Meteor uses) require more-advanced CPU capabilities than Proxmox grants by default. Specifically, Mongo 5.0 requires AVX cpu instructions. To enable them:<br />
<br />
Proxmox > VM > Edit > Processor > Type: "host"<br />
<br />
Note that my Proxmox docker VM is called matryoshka.</div>Mhttps://bitpost.com/w/index.php?title=Docker&diff=7435Docker2023-12-27T20:40:11Z<p>M: /* Commands */</p>
<hr />
<div>Thanks Keith for the intro!<br />
<br />
Keith: Alpine is a stripped down linux distro. Need to learn about how to handle persistent volumes, container secrets (don't put in container, but it can prompt for things). Dockerfile -v (volume). Container should output to stdin/out, then host can manage logging. Terraform can build your arch (can use a proxmox template), ansible is great for actual tasks. GCP has managed kubernetes (wait until you understand why you need it). Check out hashicorp vault FOSS version for awesome secret storage that is docker-compatible.<br />
<br />
=== Maintenance ===<br />
<br />
==== Restart on reboot ====<br />
If you are using <code>docker compose</code>, you should add this to your containers in compose.yml:<br />
restart: always<br />
<br />
To restart a single container on reboot, once it is running, update its config:<br />
docker update --restart unless-stopped container_id<br />
<br />
==== Prune regularly ====<br />
ALWAYS prune your host's containers and images! Or docker will eat your drive alive.<br />
Do this in crontab:<br />
0 3 * * * docker container prune -f && docker image prune -f<br />
On occasion you may also need to clean up strays, with this super-prune:<br />
docker system prune --all<br />
It will remove all unused images, not just dangling ones. Make sure the ones you want to keep are running! But DO THIS whenever you've been dicking around for a while, you're sure to have splorched some schtumm!<br />
<br />
Also don't forget to [[Systemd#Log_limit|prune your system log]].<br />
<br />
=== Commands ===<br />
docker build -t name . # builds an image from curr dir Dockerfile<br />
docker images # lists images<br />
docker run --name cont-name image # to create and start a container from an image, which you can then stop and start<br />
# -it to run in a terminal, then Ctrl-C to stop it; use -d to run detached<br />
docker logs --follow cont-name # tail container logging<br />
docker logs -f cont --tail 100 # tail but clip log - REQUIRED on long-running containers!<br />
docker ps # to see what containers are running<br />
docker ps -a # to see what containers are running (including recently stopped containers)<br />
docker start|stop name # to start/stop a container<br />
docker exec -it cont-name cmd # to run cmd on running container<br />
docker exec -it cont-name bash # get bash prompt on running container<br />
docker exec -u root -it cont-name /bin/bash # to run bash as root (so eg you can `apt install ...`)<br />
docker exec -u 0 -it cont-name bash # similar to ^<br />
docker cp ./myfile mycont:/dest # copy file into container<br />
docker cp mycont:/src /home/ # copy file out of container<br />
docker cp $(docker create --rm ${imageBaseUrl}):/image/path/files /local/path # copy image files<br />
docker rm name # to remove a stopped container<br />
docker container prune # to remove all stopped containers<br />
docker images # lists images<br />
docker rmi REPOSITORY/TAG # to remove an image<br />
docker image prune # remove all dangling images<br />
<br />
docker push|pull # push to / pull from hub.docker.com (for subsequent pull elsewhere!)<br />
<br />
==== See files in an image ====<br />
You should REALLY peek at images before blindly running containers. Stupid docker doesn't make this OOTB-easy, but there's [https://stackoverflow.com/a/53481010/717274 always a way].<br />
docker create --name="tmp_$$" image:tag<br />
docker export tmp_$$ | tar t<br />
docker rm tmp_$$<br />
<br />
==== Pretty ps ====<br />
Use this to show containers in a nice format (you can also add this as default, in ~/.docker/config.json):<br />
docker ps -a --format 'table {{.ID}}\t{{.Status}} \t{{.Names}}\t{{.Command}}'<br />
docker ps -a --format 'table {{.ID}}\t{{.Status}} \t{{.Names}}\t{{.Command}}' | grep #mycontainer#<br />
<br />
==== Restart container with new image ====<br />
<br />
This is best practice, especially for large containers that are hosted at another location. It removes the image retrieval time from the overall container downtime:<br />
docker pull mysql<br />
docker stop my-mysql-container<br />
docker rm my-mysql-container<br />
docker run --name=my-mysql-container --restart=always ...<br />
<br />
=== Containers ===<br />
<br />
Find nirvana [https://hub.docker.com/search?type=image here.]<br />
<br />
==== Debian slim ====<br />
<br />
Debian slim containers are much smaller than standard installs. They are stripped of things like documentation, while still providing a full glibc userland and C++ stack (the kernel always comes from the host, as with any container).<br />
<br />
You can use apt to bake in what you need from there. Nice!<br />
<br />
==== Node ====<br />
<br />
The official node container is huge (1GB), the alpine one is relatively tiny. See the list [https://hub.docker.com/_/node here.]<br />
<br />
==== alpine ====<br />
Alpine is the best TINY base linux container. But it runs BusyBox and musl so many things (nvm, meteor) won't work (at least without a TON of effort).<br />
<br />
===== Node on alpine =====<br />
<br />
Here's a good starting point for a node app, but remember meteor won't work:<br />
<br />
<pre><br />
FROM alpine/git<br />
RUN apk --update add curl bash tar sudo npm <br />
SHELL ["/bin/bash", "-c"]<br />
<br />
ENV NEWUSER='m'<br />
RUN adduser -g "$NEWUSER" -D -s /bin/bash $NEWUSER \<br />
&& echo "$NEWUSER ALL=(ALL) ALL" > /etc/sudoers.d/$NEWUSER && chmod 0440 /etc/sudoers.d/$NEWUSER<br />
<br />
USER m<br />
WORKDIR /home/m<br />
<br />
COPY --chown=m my-code /home/m/my-code<br />
<br />
RUN npm install -g whatevah<br />
<br />
EXPOSE 3000<br />
CMD [ "my_app", "param1" ]<br />
</pre><br />
<br />
=== More examples ===<br />
* Example dockerfile for [https://hub.docker.com/r/linuxserver/nextcloud nextcloud]<br />
<br />
==== LetsEncrypt SSL certificate generator ====<br />
docker pull zerossl/client<br />
# well that experiment went to shit... we tried to add a TXT domain record but it wasn't found<br />
# tom thought we needed a full resolving A record before TXT would work<br />
# either way, we can use a self-signed cert with gitlab and forego the constant need to renew<br />
<br />
=== Networking ===<br />
<br />
Bridge networking (the default) allows connections between containers running on the same docker host.<br />
<br />
docker network create my-nw # defaults to --driver bridge<br />
docker run (...) --network my-nw (...) # to create and start a container on the network<br />
docker network connect my-nw container-name # to attach a container to the network after it is started<br />
docker network inspect my-nw<br />
<br />
=== Install ===<br />
<br />
==== [https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository Install docker with apt] ====<br />
<br />
This now includes docker compose. It should be all you need. I had to do this shit that is fucking always a problem and never documented - FU.<br />
* add yourself to docker group<br />
sudo usermod -aG docker ${USER}<br />
* fix the GOD DAMNED socket permission. assholes...<br />
sudo chmod 666 /var/run/docker.sock<br />
<br />
==== Proxmox CPU config ====<br />
<br />
Some images (like Mongo 5.0, which Meteor uses) require more-advanced CPU capabilities than Proxmox grants by default. Specifically, Mongo 5.0 requires AVX cpu instructions. To enable them:<br />
<br />
Proxmox > VM > Edit > Processor > Type: "host"<br />
<br />
Note that my Proxmox docker VM is called matryoshka.</div>Mhttps://bitpost.com/w/index.php?title=Visual_Studio_Code&diff=7434Visual Studio Code2023-12-27T15:12:37Z<p>M: /* Code formatting */</p>
<hr />
<div>=== Install ===<br />
* Add the repo<br />
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg<br />
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg<br />
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list'<br />
* Update as usual<br />
sudo apt-get update<br />
sudo apt-get install code # released monthly<br />
# or code-insiders (released daily)<br />
# use [apt search visualst]<br />
<br />
=== Settings sync ===<br />
<br />
Sync my settings on any new machine.<br />
(bottom-left gear) > Turn on settings sync<br />
Use my github account.<br />
<br />
=== Extensions ===<br />
<br />
* Debug > Install additional debuggers... > C++<br />
* C/C++<br />
* CMake, CMake Tools (one for project mgmt, one for cmake file editing)<br />
* [[eslint]]<br />
* Bookmarks; go F2, set ctrl-F2<br />
* Numbered bookmarks; go c-1, set c-a-sh-1<br />
* vscode-icons<br />
* change-case<br />
* overtype<br />
* [https://marketplace.visualstudio.com/items?itemName=neptunedesign.vs-sequential-number sequential-number]<br />
* [[MongoDB for VS Code]]<br />
* REST client<br />
Run API queries out of *.http files. If you get web services errors, restart code (all instances).<br />
* Docker, Dev Containers<br />
You can connect to a remote docker host with some settings foo.<br />
* Remote - SSH, SSH Tooling<br />
<br />
==== nah/old... ====<br />
* x GitLens<br />
* x sort-imports (for ES6)<br />
* Debugger for Firefox<br />
* Debugger for Chrome<br />
<br />
=== Config and Key Bindings ===<br />
<br />
Go to File > Preferences > Settings for a shitton of settings. Search for the name you want.<br />
<br />
Turn OFF these very-slow automatic checks in your settings.json file:<br />
"npm.autoDetect": "off",<br />
"gulp.autoDetect": "off",<br />
"grunt.autoDetect": "off",<br />
"jake.autoDetect": "off",<br />
"typescript.tsc.autoDetect": "off",<br />
<br />
Turn this shit off, it "reuses tabs" on files you open (unless you double-click to open, or edit the file). Nonsense.<br />
"workbench.editor.enablePreview": false<br />
<br />
Use the editor; changes are stored and shared across installs from here:<br />
ls ~/.config/Code/User/<br />
keybindings.json -> ../../../development/config/common/home/m/.config/Code/User/keybindings.json<br />
settings.json -> ../../../development/config/common/home/m/.config/Code/User/settings.json<br />
<br />
=== Debugging ===<br />
<br />
While debugging, you can use the Debug Console to print memory, including the content of strings that are clipped by default in the variables and watch windows.<br />
<br />
View > Open View > Debug Console<br />
<br />
From there, send gdb a command to print memory – 300 characters of a string in this example:<br />
<br />
-exec x/300sb Query.c_str()<br />
<br />
=== Code completion and linting ===<br />
<br />
==== Code formatting ====<br />
Auto-format your code, all day long! This literally saves hours of typing, not to mention keeping the code uniform. DO IT.<br />
<br />
Prefs > Settings > search > format on save > check the box!<br />
<br />
We let vscode use clang-format to format C++ code. You need to install it:<br />
sudo apt install clang-format<br />
<br />
Custom rules are defined here:<br />
[workspace-folder]/.clang-format<br />
<br />
See [https://clang.llvm.org/docs/ClangFormatStyleOptions.html the formatting rules] for help.<br />
<br />
If formatting errors out or does not happen, run [clang-format #filename#] in a command prompt and see if you have errors in your rules file.<br />
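For example (the filename is a placeholder; --dry-run/--Werror need a reasonably recent clang-format):<br />
 clang-format main.cpp > /dev/null          # rule-file errors print to stderr<br />
 clang-format --dry-run --Werror main.cpp   # fail loudly if anything would reformat<br />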
<br />
NOTE: eslint is the current way to enforce code style in Javascript. See [[eslint|the eslint page]] for more information.<br />
<br />
Some C++ ones:<br />
Standard: Cpp11<br />
BasedOnStyle: LLVM<br />
IndentWidth: 2<br />
ColumnLimit: 0<br />
AccessModifierOffset: -2<br />
NamespaceIndentation: All<br />
BreakBeforeBraces: Custom<br />
BraceWrapping:<br />
AfterEnum: true<br />
AfterStruct: true<br />
AfterClass: true<br />
SplitEmptyFunction: true<br />
AfterControlStatement: false<br />
AfterNamespace: false<br />
AfterFunction: true<br />
AfterUnion: true<br />
AfterExternBlock: false<br />
BeforeCatch: false<br />
BeforeElse: false<br />
SplitEmptyRecord: true<br />
SplitEmptyNamespace: true<br />
<br />
==== Intellisense ====<br />
<br />
Use CMake to generate json that includes the project headers, then import that into vscode settings, for a DRY way to set up header paths.<br />
* Add this to CMakeList.txt to generate compile_commands.json:<br />
# MDM This creates compile_commands.json, which can be imported by vscode to set include paths from here, w00t DRY<br />
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)<br />
* Edit your project settings (eg...)<br />
/home/m/development/thedigitalage/AbetterTrader/server/.vscode/c_cpp_properties.json<br />
* Add a compileCommands directive:<br />
{<br />
"configurations": [<br />
{<br />
"name": "Linux",<br />
"includePath": [<br />
"${workspaceFolder}/**"<br />
],<br />
"defines": [],<br />
"compilerPath": "/usr/bin/clang",<br />
"cStandard": "c11",<br />
"cppStandard": "c++17",<br />
"intelliSenseMode": "clang-x64",<br />
"configurationProvider": "vector-of-bool.cmake-tools",<br />
"compileCommands": "${workspaceFolder}/cmake-debug/compile_commands.json"<br />
}<br />
],<br />
"version": 4<br />
}</div>Mhttps://bitpost.com/w/index.php?title=Visual_Studio_Code&diff=7433Visual Studio Code2023-12-27T15:06:19Z<p>M: /* Code formatting */</p>
<hr />
<div>=== Install ===<br />
* Add the repo<br />
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg<br />
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg<br />
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list'<br />
* Update as usual<br />
sudo apt-get update<br />
sudo apt-get install code # released monthly<br />
# or code-insiders (released daily)<br />
# use [apt search visualst]<br />
<br />
=== Settings sync ===<br />
<br />
Sync my settings on any new machine.<br />
(bottom-left gear) > Turn on settings sync<br />
Use my github account.<br />
<br />
=== Extensions ===<br />
<br />
* Debug > Install additional debuggers... > C++<br />
* C/C++<br />
* CMake, CMake Tools (one for project mgmt, one for cmake file editing)<br />
* [[eslint]]<br />
* Bookmarks; go F2, set ctrl-F2<br />
* Numbered bookmarks; go c-1, set c-a-sh-1<br />
* vscode-icons<br />
* change-case<br />
* overtype<br />
* [https://marketplace.visualstudio.com/items?itemName=neptunedesign.vs-sequential-number sequential-number]<br />
* [[MongoDB for VS Code]]<br />
* REST client<br />
Run API queries out of *.http files. If you get web services errors, restart code (all instances).<br />
* Docker, Dev Containers<br />
You can connect to a remote docker host with some settings foo.<br />
* Remote - SSH, SSH Tooling<br />
<br />
==== nah/old... ====<br />
* x GitLens<br />
* x sort-imports (for ES6)<br />
* Debugger for Firefox<br />
* Debugger for Chrome<br />
<br />
=== Config and Key Bindings ===<br />
<br />
Go to File > Preferences > Settings for a shitton of settings. Search for the name you want.<br />
<br />
Turn OFF these very-slow automatic checks in your settings.json file:<br />
"npm.autoDetect": "off",<br />
"gulp.autoDetect": "off",<br />
"grunt.autoDetect": "off",<br />
"jake.autoDetect": "off",<br />
"typescript.tsc.autoDetect": "off",<br />
<br />
Turn this shit off, it "reuses tabs" on files you open (unless you double-click to open, or edit the file). Nonsense.<br />
"workbench.editor.enablePreview": false<br />
<br />
Use the editor; changes are stored and shared across installs from here:<br />
ls ~/.config/Code/User/<br />
keybindings.json -> ../../../development/config/common/home/m/.config/Code/User/keybindings.json<br />
settings.json -> ../../../development/config/common/home/m/.config/Code/User/settings.json<br />
<br />
=== Debugging ===<br />
<br />
While debugging, you can use the Debug Console to print memory, including the content of strings that are clipped by default in the variables and watch windows.<br />
<br />
View > Open View > Debug Console<br />
<br />
From there, send gdb a command to print memory – 300 characters of a string in this example:<br />
<br />
-exec x/300sb Query.c_str()<br />
<br />
=== Code completion and linting ===<br />
<br />
==== Code formatting ====<br />
Auto-format your code, all day long! This literally saves hours of typing, not to mention keeping the code uniform. DO IT.<br />
<br />
We let vscode use clang-format to format C++ code. You need to install it:<br />
sudo apt install clang-format<br />
<br />
Custom rules are defined here:<br />
[workspace-folder]/.clang-format<br />
<br />
See [https://clang.llvm.org/docs/ClangFormatStyleOptions.html the formatting rules] for help.<br />
<br />
If formatting errors out or does not happen, run [clang-format #filename#] in a command prompt and see if you have errors in your rules file.<br />
<br />
NOTE: eslint is the current way to enforce code style in Javascript. See [[eslint|the eslint page]] for more information.<br />
<br />
Some C++ ones:<br />
Standard: Cpp11<br />
BasedOnStyle: LLVM<br />
IndentWidth: 2<br />
ColumnLimit: 0<br />
AccessModifierOffset: -2<br />
NamespaceIndentation: All<br />
BreakBeforeBraces: Custom<br />
BraceWrapping:<br />
AfterEnum: true<br />
AfterStruct: true<br />
AfterClass: true<br />
SplitEmptyFunction: true<br />
AfterControlStatement: false<br />
AfterNamespace: false<br />
AfterFunction: true<br />
AfterUnion: true<br />
AfterExternBlock: false<br />
BeforeCatch: false<br />
BeforeElse: false<br />
SplitEmptyRecord: true<br />
SplitEmptyNamespace: true<br />
<br />
==== Intellisense ====<br />
<br />
Use CMake to generate json that includes the project headers, then import that into vscode settings, for a DRY way to set up header paths.<br />
* Add this to CMakeList.txt to generate compile_commands.json:<br />
# MDM This creates compile_commands.json, which can be imported by vscode to set include paths from here, w00t DRY<br />
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)<br />
* Edit your project settings (eg...)<br />
/home/m/development/thedigitalage/AbetterTrader/server/.vscode/c_cpp_properties.json<br />
* Add a compileCommands directive:<br />
{<br />
"configurations": [<br />
{<br />
"name": "Linux",<br />
"includePath": [<br />
"${workspaceFolder}/**"<br />
],<br />
"defines": [],<br />
"compilerPath": "/usr/bin/clang",<br />
"cStandard": "c11",<br />
"cppStandard": "c++17",<br />
"intelliSenseMode": "clang-x64",<br />
"configurationProvider": "vector-of-bool.cmake-tools",<br />
"compileCommands": "${workspaceFolder}/cmake-debug/compile_commands.json"<br />
}<br />
],<br />
"version": 4<br />
}</div>Mhttps://bitpost.com/w/index.php?title=Visual_Studio_Code&diff=7432Visual Studio Code2023-12-27T14:55:29Z<p>M: /* Code completion and linting */</p>
<hr />
<div>=== Install ===<br />
* Add the repo<br />
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg<br />
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg<br />
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list'<br />
* Update as usual<br />
sudo apt-get update<br />
sudo apt-get install code # released monthly<br />
# or code-insiders (released daily)<br />
# use [apt search visualst]<br />
<br />
=== Settings sync ===<br />
<br />
Sync my settings on any new machine.<br />
(bottom-left gear) > Turn on settings sync<br />
Use my github account.<br />
<br />
=== Extensions ===<br />
<br />
* Debug > Install additional debuggers... > C++<br />
* C/C++<br />
* CMake, CMake Tools (one for project mgmt, one for cmake file editing)<br />
* [[eslint]]<br />
* Bookmarks; go F2, set ctrl-F2<br />
* Numbered bookmarks; go c-1, set c-a-sh-1<br />
* vscode-icons<br />
* change-case<br />
* overtype<br />
* [https://marketplace.visualstudio.com/items?itemName=neptunedesign.vs-sequential-number sequential-number]<br />
* [[MongoDB for VS Code]]<br />
* REST client<br />
Run API queries out of *.http files. If you get web services errors, restart code (all instances).<br />
* Docker, Dev Containers<br />
You can connect to a remote docker host with some settings foo.<br />
* Remote - SSH, SSH Tooling<br />
<br />
==== nah/old... ====<br />
* x GitLens<br />
* x sort-imports (for ES6)<br />
* Debugger for Firefox<br />
* Debugger for Chrome<br />
<br />
=== Config and Key Bindings ===<br />
<br />
Go to File > Preferences > Settings for a shitton of settings. Search for the name you want.<br />
<br />
Turn OFF these very-slow automatic checks in your settings.json file:<br />
"npm.autoDetect": "off",<br />
"gulp.autoDetect": "off",<br />
"grunt.autoDetect": "off",<br />
"jake.autoDetect": "off",<br />
"typescript.tsc.autoDetect": "off",<br />
<br />
Turn this shit off, it "reuses tabs" on files you open (unless you double-click to open, or edit the file). Nonsense.<br />
"workbench.editor.enablePreview": false<br />
<br />
Use the editor; changes are stored and shared across installs from here:<br />
ls ~/.config/Code/User/<br />
keybindings.json -> ../../../development/config/common/home/m/.config/Code/User/keybindings.json<br />
settings.json -> ../../../development/config/common/home/m/.config/Code/User/settings.json<br />
<br />
=== Debugging ===<br />
<br />
While debugging, you can use the Debug Console to print memory, including the content of strings that are clipped by default in the variables and watch windows.<br />
<br />
View > Open View > Debug Console<br />
<br />
From there, send gdb a command to print memory – 300 characters of a string in this example:<br />
<br />
-exec x/300sb Query.c_str()<br />
<br />
=== Code completion and linting ===<br />
<br />
==== Code formatting ====<br />
Auto-format your code, all day long! This literally saves hours of typing, not to mention keeping the code uniform. DO IT.<br />
<br />
vscode uses clang-format to format C++ code. Custom rules are defined here:<br />
[workspace-folder]/.clang-format<br />
<br />
See [https://clang.llvm.org/docs/ClangFormatStyleOptions.html the formatting rules] for help.<br />
<br />
NOTE: eslint is the current way to enforce code style in Javascript. See [[eslint|the eslint page]] for more information.<br />
<br />
Some C++ ones:<br />
Standard: Cpp11<br />
BasedOnStyle: LLVM<br />
IndentWidth: 2<br />
ColumnLimit: 0<br />
AccessModifierOffset: -2<br />
NamespaceIndentation: All<br />
BreakBeforeBraces: Custom<br />
BraceWrapping:<br />
AfterEnum: true<br />
AfterStruct: true<br />
AfterClass: true<br />
SplitEmptyFunction: true<br />
AfterControlStatement: false<br />
AfterNamespace: false<br />
AfterFunction: true<br />
AfterUnion: true<br />
AfterExternBlock: false<br />
BeforeCatch: false<br />
BeforeElse: false<br />
SplitEmptyRecord: true<br />
SplitEmptyNamespace: true<br />
<br />
==== Intellisense ====<br />
<br />
Use CMake to generate json that includes the project headers, then import that into vscode settings, for a DRY way to set up header paths.<br />
* Add this to CMakeList.txt to generate compile_commands.json:<br />
# MDM This creates compile_commands.json, which can be imported by vscode to set include paths from here, w00t DRY<br />
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)<br />
* Edit your project settings (eg...)<br />
/home/m/development/thedigitalage/AbetterTrader/server/.vscode/c_cpp_properties.json<br />
* Add a compileCommands directive:<br />
{<br />
"configurations": [<br />
{<br />
"name": "Linux",<br />
"includePath": [<br />
"${workspaceFolder}/**"<br />
],<br />
"defines": [],<br />
"compilerPath": "/usr/bin/clang",<br />
"cStandard": "c11",<br />
"cppStandard": "c++17",<br />
"intelliSenseMode": "clang-x64",<br />
"configurationProvider": "vector-of-bool.cmake-tools",<br />
"compileCommands": "${workspaceFolder}/cmake-debug/compile_commands.json"<br />
}<br />
],<br />
"version": 4<br />
}</div>Mhttps://bitpost.com/w/index.php?title=Visual_Studio_Code&diff=7431Visual Studio Code2023-12-27T14:54:19Z<p>M: </p>
<hr />
<div>=== Install ===<br />
* Add the repo<br />
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg<br />
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg<br />
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list'<br />
* Update as usual<br />
sudo apt-get update<br />
sudo apt-get install code # released monthly<br />
# or code-insiders (released daily)<br />
# use [apt search visualst]<br />
<br />
=== Settings sync ===<br />
<br />
Sync my settings on any new machine.<br />
(bottom-left gear) > Turn on settings sync<br />
Use my github account.<br />
<br />
=== Extensions ===<br />
<br />
* Debug > Install additional debuggers... > C++<br />
* C/C++<br />
* CMake, CMake Tools (one for project mgmt, one for cmake file editing)<br />
* [[eslint]]<br />
* Bookmarks; go F2, set ctrl-F2<br />
* Numbered bookmarks; go c-1, set c-a-sh-1<br />
* vscode-icons<br />
* change-case<br />
* overtype<br />
* [https://marketplace.visualstudio.com/items?itemName=neptunedesign.vs-sequential-number sequential-number]<br />
* [[MongoDB for VS Code]]<br />
* REST client<br />
Run API queries out of *.http files. If you get web services errors, restart code (all instances).<br />
* Docker, Dev Containers<br />
You can connect to a remote docker host with some settings foo.<br />
* Remote - SSH, SSH Tooling<br />
<br />
==== nah/old... ====<br />
* x GitLens<br />
* x sort-imports (for ES6)<br />
* Debugger for Firefox<br />
* Debugger for Chrome<br />
<br />
=== Config and Key Bindings ===<br />
<br />
Go to File > Preferences > Settings for a shitton of settings. Search for the name you want.<br />
<br />
Turn OFF these very-slow automatic checks in your settings.json file:<br />
"npm.autoDetect": "off",<br />
"gulp.autoDetect": "off",<br />
"grunt.autoDetect": "off",<br />
"jake.autoDetect": "off",<br />
"typescript.tsc.autoDetect": "off",<br />
<br />
Turn this shit off, it "reuses tabs" on files you open (unless you double-click to open, or edit the file). Nonsense.<br />
"workbench.editor.enablePreview": false<br />
<br />
Use the editor; changes are stored and shared across installs from here:<br />
ls ~/.config/Code/User/<br />
keybindings.json -> ../../../development/config/common/home/m/.config/Code/User/keybindings.json<br />
settings.json -> ../../../development/config/common/home/m/.config/Code/User/settings.json<br />
<br />
=== Debugging ===<br />
<br />
While debugging, you can use the Debug Console to print memory, including the content of strings that are clipped by default in the variables and watch windows.<br />
<br />
View > Open View > Debug Console<br />
<br />
From there, send gdb a command to print memory – 300 characters of a string in this example:<br />
<br />
-exec x/300sb Query.c_str()<br />
<br />
=== Code completion and linting ===<br />
<br />
==== Formatting ====<br />
Edit .clang-format in the root of your project; you can then use different settings for different languages. Use settings found [https://clang.llvm.org/docs/ClangFormatStyleOptions.html here].<br />
<br />
Some C++ ones:<br />
Standard: Cpp11<br />
BasedOnStyle: LLVM<br />
IndentWidth: 2<br />
ColumnLimit: 0<br />
AccessModifierOffset: -2<br />
NamespaceIndentation: All<br />
BreakBeforeBraces: Custom<br />
BraceWrapping:<br />
AfterEnum: true<br />
AfterStruct: true<br />
AfterClass: true<br />
SplitEmptyFunction: true<br />
AfterControlStatement: false<br />
AfterNamespace: false<br />
AfterFunction: true<br />
AfterUnion: true<br />
AfterExternBlock: false<br />
BeforeCatch: false<br />
BeforeElse: false<br />
SplitEmptyRecord: true<br />
SplitEmptyNamespace: true<br />
<br />
==== Intellisense ====<br />
<br />
Use CMake to generate json that includes the project headers, then import that into vscode settings, for a DRY way to set up header paths.<br />
* Add this to CMakeList.txt to generate compile_commands.json:<br />
# MDM This creates compile_commands.json, which can be imported by vscode to set include paths from here, w00t DRY<br />
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)<br />
* Edit your project settings (eg...)<br />
/home/m/development/thedigitalage/AbetterTrader/server/.vscode/c_cpp_properties.json<br />
* Add a compileCommands directive:<br />
{<br />
"configurations": [<br />
{<br />
"name": "Linux",<br />
"includePath": [<br />
"${workspaceFolder}/**"<br />
],<br />
"defines": [],<br />
"compilerPath": "/usr/bin/clang",<br />
"cStandard": "c11",<br />
"cppStandard": "c++17",<br />
"intelliSenseMode": "clang-x64",<br />
"configurationProvider": "vector-of-bool.cmake-tools",<br />
"compileCommands": "${workspaceFolder}/cmake-debug/compile_commands.json"<br />
}<br />
],<br />
"version": 4<br />
}<br />
<br />
==== Code formatting ====<br />
Auto-format your code, all day long! This literally saves hours of typing, not to mention keeping the code uniform. DO IT.<br />
<br />
vscode uses clang-format to format C++ code. Custom rules are defined here:<br />
[workspace-folder]/.clang-format<br />
<br />
See [https://clang.llvm.org/docs/ClangFormatStyleOptions.html the formatting rules] for help.<br />
<br />
NOTE: eslint is the current way to enforce code style in Javascript. See [[eslint|the eslint page]] for more information.</div>Mhttps://bitpost.com/w/index.php?title=Git&diff=7430Git2023-12-22T13:52:06Z<p>M: /* LFS */</p>
<hr />
<div>=== TASKS ===<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! New shared central bare repo<br />
|-<br />
| On central server (aka morosoph):<br />
cd development(...)<br />
git init --bare --shared mynewthang.git<br />
On development box<br />
cd development(...)<br />
git clone morosoph:development/mynewthang.git<br />
# this will create a new empty repo with no branch yet<br />
# TO CREATE MASTER BRANCH (this is the only way):<br />
# create files, git add, git commit, git push<br />
Back on bitpost<br />
git clone mynewthang.git # to create a working copy on server, if desired<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! create shared central repo for existing code<br />
|-<br />
| Create a bare repo with .git suffix<br />
git init --bare --shared mything.git<br />
Go to existing code and clone the repo next to it, with a temp name. <br />
Move .git into the existing code.<br />
Add code, add a .gitignore as needed, and you're all set.<br />
cd mything/..<br />
git clone (bare-repo-host-and-path)mything.git mything-temp<br />
mv mything-temp/.git mything/<br />
rm -rf mything-temp<br />
cd mything<br />
subl .gitignore # as needed<br />
git add (whatever you want to track)<br />
git commit -a -m "init repo" && git push<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fetch a branch from remote without checking it out<br />
|-<br />
| Good for when you are rebasing a feature branch.<br />
git fetch origin develop:develop<br />
You would think <code>git fetch --all</code> would do it but does not (it fetches the active branch from ''all origins'' - seriously wtf, who ever wants THAT??).<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Completely reset an out-of-sync branch to 100% match the remote<br />
|-<br />
| Sometimes some other idiot rebased the remote branch on you. Make sure you are on the right branch, locally. Then to completely force-reset it:<br />
git reset --hard origin/thebranchname<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Set default branch of a bare repo<br />
|-<br />
| If you want the default branch of a bare repo to be something other than master:<br />
git branch<br />
* master<br />
moodboom-quick-http<br />
git symbolic-ref HEAD refs/heads/moodboom-quick-http<br />
git branch<br />
master<br />
* moodboom-quick-http<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Find a file across branches<br />
|-<br />
| It's clunky, two steps, and you have to glob out the whole fucking name:<br />
# get the commits that involved the filename<br />
git log --all -- '**/*namebits*'<br />
# even better, get the filenames too, and see if it was added or removed:<br />
git log --all --stat -- '**/*namebits*'<br />
# now find the branch with one of those commits:<br />
git branch -a --contains #commithash#<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! gitflow<br />
|-<br />
| Gitflow is awesome, using it myself and everywhere I work these days (2020).<br />
* Devs work out of develop branch<br />
* Devs create feature branches off develop for any decent-sized work<br />
* Once develop is stable and you are ready for a release:<br />
git tag -a -m "#MAJOR#.#MINOR#".0 #MAJOR#.#MINOR#.0<br />
git checkout -b release/release_#MAJOR#.#MINOR#<br />
 git push --set-upstream origin release/release_#MAJOR#.#MINOR#<br />
git checkout master && git merge release/release_#MAJOR#.#MINOR# && git push<br />
git checkout develop # and get back to it!<br />
* Do hotfixes as needed in release branch, tagged #MAJOR#.#MINOR#.++, merged back into master and develop<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Interactive rebase with squash<br />
|-<br />
| Excellent to do when on your own feature branch. Illegal to do if branch is shared AT ALL!<br />
git rebase -i myparentbranch<br />
# work through squash and merge - gitlens may help with squash if you use vscode for EDITOR<br />
git push -f<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Push any branch from bare to origin<br />
|-<br />
| Good for when you are force-pushing a branch rebase.<br />
git push [-f] origin mybranch:mybranch<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fetch from one origin (eg gitlab) and push to another<br />
|-<br />
| My GitLab is usually a mirror, and therefore a push target. But if you edit a file in GitLab, you may need to pull it from remote "gitlab" and push it to remote "sc", like this:<br />
git fetch gitlab develop:develop<br />
git push sc<br />
This assumes you have something like the following in your config:<br />
[remote "sc"]<br />
url = git@shitcutter.com:the-digital-age/rad-scripts.git<br />
fetch = +refs/heads/*:refs/remotes/origin/*<br />
[remote "github.com"]<br />
url = git@github.com:moodboom/rad-scripts.git<br />
fetch = +refs/heads/*:refs/remotes/origin/*<br />
[remote "gitlab.com"]<br />
url = git@gitlab.com:moodboom/rad-scripts.git<br />
fetch = +refs/heads/*:refs/remotes/origin/*<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Merging conflicts after diverging<br />
|-<br />
| Revert local changes in a file to HEAD<br />
git checkout -- path/to/file.txt<br />
Discard ALL LOCAL COMMITS and get the (possibly diverged) remote instead<br />
git reset --hard origin/master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create and push a feature branch<br />
|-<br />
| This will move recent commits AND uncommitted changes into a new branch (but you probably want to finish by cleaning out commits from starting branch, and repulling after you merge the feature).<br />
git checkout -b feature/whiz-bang<br />
<br />
# Do this ONCE: git config --global push.default current<br />
# From then on:<br />
git push -u<br />
<br />
# OR, if you don't want the config, you have to be more specific:<br />
# git push -u origin feature/whiz-bang<br />
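To then clean the moved commits out of the starting branch, a sketch (assuming develop was the starting branch and the remote copy is still clean):<br />
git checkout develop<br />
git reset --hard origin/develop<br />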
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! getting upstream commits into your GitLab GITFLOW fork<br />
|-<br />
| NOTE: This doesn't work from a bitpost bare repo, so you need to make a new direct clone of your GitLab fork, first, if you don't have one yet.<br />
<br />
This is very easy if you have left the master branch alone for the parent's commits, and keep your add-on commits in a release-x.x branch, as we have for SWS and SWSS.<br />
<br />
cdl<br />
# Clone local repo directly from GitLab (already done on cobra)<br />
git clone git@gitlab.com:moodboom/Simple-Web-Server.git SWS-gitlab-moodboom<br />
cd SWS-gitlab-moodboom<br />
# make sure master is checked out<br />
git branch<br />
# add parent as remote<br />
git remote add ole-upstream git@gitlab.com:eidheim/Simple-Web-Server.git<br />
git fetch ole-upstream<br />
git rebase ole-upstream/master<br />
git push -f origin master<br />
You can now delete the fresh clone, it has done its job. Or leave it for ease-of-use for next rebase.<br />
<br />
Now update your bare repo on bitpost, to keep things in sync.<br />
git fetch origin master:master -f<br />
<br />
Next, go to dev repo, pull master. Check out release, create a new release from that, and rebase master. (or just create a new release branch off master if that's what you want) It is the gitflow way!<br />
<br />
Push your new branch to bare, then push bare to GitLab via something like:<br />
git push --set-upstream origin release/abt-0.0.3<br />
<br />
To complete SW(S)S rebase, update mh-install-sws to use the new branch. Then run it on all your dev boxes, whoop. (Then code up any fixes, sigh... push 'em, sigh... and get on with it!)<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! getting upstream commits into your GitLab fork<br />
|-<br />
| NOTE: This doesn't work from a bitpost bare repo, so you need to make a new direct clone of your GitLab fork, first, if you don't have one yet.<br />
Add the remote, call it something specific:<br />
git remote add someauthor-upstream https://gitlab.com/someauthor/theprojectiforked<br />
Fetch all the branches of that remote into remote-tracking branches, such as upstream/master:<br />
git fetch someauthor-upstream<br />
Get on the branch where you are tracking the rebase. Typically your master branch but can be whatever:<br />
git checkout tlsv12 # or master or next or...<br />
Rewrite your branch so that any commits of yours that aren't already in upstream are replayed on top of that other branch (if you do a straight merge instead of rebase you'll screw up the upstream!):<br />
git rebase someauthor-upstream/master<br />
If you haven't done so, in GitLab, go to [https://gitlab.com/moodboom/Simple-WebSocket-Server/-/settings/repository#js-protected-branches-settings the "Protected Branch" settings] and remove protection from master - it's just a fact that you're going to need to force-push to master.<br />
<br />
IF the branch that was the target of the rebase existed, force the push in order to push it to your own forked repository on GitLab. You only need to use the -f the first time after you've rebased:<br />
git push -f origin master<br />
ELSE if the branch you merged into is a new creation, set its upstream when you push:<br />
git push --set-upstream origin tlsv12<br />
You will want to force-fetch to update the bare repo you may have on bitpost, DO THIS NOW or you will screw things up badly later:<br />
[Simple-WebSocket-Server.git] git fetch origin master:master -f<br />
You should also force-update ALL your dev repos, NOW, for the same reason:<br />
git fetch && git reset --hard origin/master # simpler and safer than the old "git reset --hard HEAD^^^^^^ && git pull" guess-the-depth trick<br />
NOTE that you may need to remove a remote-tracking branch if you don't need it any more. It's stupidly painful to get right, eg:<br />
[Simple-WebSocket-Server.git] git branch -rd eidheim/Simple-WebSocket-Server/master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! getting upstream commits into your GitHub fork<br />
|-<br />
| From [http://stackoverflow.com/a/7244456/717274 so...]<br />
Add the remote, call it something specific:<br />
git remote add someauthor-upstream https://github.com/someauthor/theprojectiforked.git<br />
Fetch all the branches of that remote into remote-tracking branches, such as upstream/master:<br />
git fetch someauthor-upstream<br />
Get on the branch where you are tracking the rebase. Typically your master branch but can be whatever:<br />
git checkout tlsv12 # or master or next or...<br />
Rewrite your branch so that any commits of yours that aren't already in upstream are replayed on top of that other branch (if you do a straight merge instead of rebase you'll screw up the upstream!):<br />
git rebase someauthor-upstream/master<br />
IF the branch that was the target of the rebase existed, force the push in order to push it to your own forked repository on GitHub. You only need to use the -f the first time after you've rebased:<br />
git push -f origin master<br />
ELSE if the branch you merged into is a new creation, set its upstream when you push:<br />
git push --set-upstream origin tlsv12<br />
Now master has the latest commits from the fork origin. You can rebase onto it, if you've been working in a branch (good):<br />
git checkout mybranch<br />
git rebase master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Clone a bare repo (eg github, gitlab, bb) into a bare repo<br />
|-<br />
| Say you want a bare repo to be shared by all your dev environments, but you need to push/pull THAT from a central bare repo, too.<br />
git clone --bare --shared git@bitbucket.org:equityshift/es-demo.git<br />
cd es-demo.git<br />
git config remote.origin.fetch "+*:*"<br />
git fetch --all<br />
I was surprised that this was difficult at all, and may still have some lessons to learn...<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create new branch on server, pull to client<br />
|-<br />
| <br />
# ON CENTRAL SERVER<br />
git checkout master # as needed; we are assuming that master is clean enough as a starting point<br />
git checkout -b mynewbranchy<br />
<br />
# HOWEVER, use this instead if you need a new "clean" repo and even master is dirty...<br />
# You need the rm because git "leaves your working folder intact".<br />
git checkout --orphan mynewbranchy<br />
git rm -rf .<br />
<br />
# ON CLIENT<br />
git pull<br />
git checkout -b mynewbranchy origin/mynewbranchy<br />
# if files are in the way from the previously checked-out branch, you can force it...<br />
git checkout -f -b mynewbranchy origin/mynewbranchy<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Merge changes in a single file<br />
|-<br />
| Explanation is [https://stackoverflow.com/a/11593308/717274 here]. <br />
git checkout mybranch<br />
git checkout --patch develop my/single/file.cpp<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Remove old branches<br />
|-<br />
| Explanation is [https://stackoverflow.com/a/23961231/717274 here]. <br />
Remote:<br />
git push origin --delete <branch><br />
Local:<br />
git branch -d <branch><br />
git fetch <remote> --prune # Delete multiple obsolete tracking branches<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Work with two local repos<br />
|-<br />
| Set up a remote, then fetch it as master.<br />
cd repoToChange<br />
git remote add otherplace ../../wherever/../gitrepo<br />
git ls-remote otherplace # verify it looks ok, figure out which branch you like (if not master)<br />
git fetch otherplace # gets it all<br />
git checkout --track otherplace/master # or other branch as needed; this creates the branch and sets remote in one step, cool<br />
Set up a remote, then fetch it into a non-master branch, and push it to the active origin.<br />
cd repoToChange<br />
git remote add otherplace ../../wherever/../gitrepo<br />
git ls-remote otherplace # verify it looks ok, figure out which branch you like (if not master)<br />
git fetch otherplace # gets it all<br />
git checkout otherplace/master # creates it detached, good because we need to name the new branch something other than master<br />
git checkout -b new_otherplace_branchname # creates new local branch with a good name<br />
git push --set-upstream origin new_otherplace_branchname # takes the branch from the OLD origin and pushes it to the ACTIVE origin, cool!<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Pull when untracked files are in the way<br />
|-<br />
| This will pull, forcing untracked files to be overwritten by newly tracked ones in the repo:<br />
git fetch --all<br />
git reset --hard origin/mymatchingbranch<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create new branch when untracked files are in the way<br />
|-<br />
|<br />
git checkout -b bj143 origin/bj143<br />
git : error: The following untracked working tree files would be overwritten by checkout:<br />
(etc)<br />
<br />
TOTAL PURGE FIX (too much; note the -n makes this a dry run):<br />
git clean -d -fn ""<br />
-d dirs too<br />
-f force, required<br />
-x include ignored files (don't use this)<br />
-n dry run<br />
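Once the dry run lists only what you expect, drop the -n to actually delete (sketch):<br />
git clean -d -f<br />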
<br />
BEST FIX (just overwrite what is in the way):<br />
git checkout -f -b bj143 origin/bj143<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Recreate repo<br />
|-<br />
|<br />
git clone ssh://m@thedigitalmachine.com/home/m/development/thedigitalage/ampache-with-hangthedj-module<br />
cd ampache-with-hangthedj-module<br />
git checkout -b daily_grind origin/daily_grind<br />
If you already have the daily_grind branches and just need to connect them:<br />
git branch -u origin/daily_grind daily_grind<br />
|}<br />
<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Connect to origin after the fact<br />
|-<br />
|<br />
git remote add origin ssh://m@bitpost.com/home/m/development/logs<br />
git fetch<br />
From ssh://bitpost/home/m/development/logs<br />
* [new branch] daily_grind -> origin/daily_grind<br />
* [new branch] master -> origin/master<br />
git branch -u origin/daily_grind daily_grind<br />
git checkout master<br />
git branch -u origin/master master<br />
|}<br />
<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Ignore local and remote changes to a file <br />
|-<br />
| This is helpful for conf files that need local-specific modifications that shouldn't be pushed. You have to toggle it on/off as needed to get updates! See [http://stackoverflow.com/questions/4348590/how-can-i-make-git-ignore-future-revisions-to-a-file/39776107#39776107 my SO answer].<br />
<br />
PREVENT COMMIT OF CHANGES TO A LOCAL FILE<br />
-----------------------------------------<br />
git update-index --skip-worktree apps/views/_partials/jsIncludes.scala.html<br />
<br />
RESET TO GET CHANGES AGAIN<br />
--------------------------<br />
git update-index --no-skip-worktree apps/views/_partials/jsIncludes.scala.html<br />
<br />
LIST SKIPPED FILES<br />
------------------<br />
git ls-files -v . | grep ^S<br />
S app/views/_partials/jsIncludes.scala.html<br />
-----------------------------------------<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Replace name and email of last commit<br />
|-<br />
| Reset the name and email of the last commit, when you realize you forgot to set them first:<br />
git commit --amend --author="First Last <email>" --no-edit<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Simple tagging (when not using git-sync)<br />
|-<br />
| To add and push a tag to HEAD:<br />
git tag -a 1.2.0 -m "1.2.0"<br />
git push origin 1.2.0<br />
<br />
To add and push a tag attached to a commit with current time as timestamp:<br />
git tag -a 1.2.0 1bc92e2f -m "1.2.0"<br />
git push origin 1.2.0<br />
<br />
To tag a commit while ensuring the timestamp matches is slightly more complicated (but not bad). More details [https://stackoverflow.com/a/21759466/717274 here]:<br />
# Set the HEAD to the old commit that we want to tag<br />
git checkout 9fceb02<br />
<br />
# temporarily set the date to the date of the HEAD commit, and add the tag<br />
GIT_COMMITTER_DATE="$(git show --format=%aD | head -1)" \<br />
git tag -a v1.2 -m"v1.2"<br />
<br />
# push to origin<br />
git push origin --tags<br />
<br />
# set HEAD back to whatever you want it to be<br />
git checkout master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Changing branches in a project with submodules<br />
|-<br />
|<br />
# always reset the @*$ submodules to proper commits<br />
git checkout develop && git submodule update<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Hard-reset a misbehaving submodule to parent commit version<br />
|-<br />
|<br />
git submodule deinit -f .<br />
git submodule update --init<br />
|}<br />
<br />
=== CONFIGURATION ===<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Set name and email<br />
|-<br />
| Globally on one machine (note the machine name at end of user name):<br />
git config --global user.email m@thedigitalmachine.com; git config --global user.name "Michael Behrns-Miller [cast]"<br />
Override for a repository:<br />
git config user.email mbm@equityshift.io; git config user.name "MBM [cast]"<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Visual difftool and mergetool setup<br />
|-<br />
| Meld is purdy, let's kick its tires. Hope it actually works...<br />
git config --global diff.tool meld<br />
git config --global merge.tool meld<br />
git config --global --add difftool.prompt false<br />
I used to set up kdiff3 manually, like this... (gross)<br />
* LINUX - put this in ~/.gitconfig<br />
[diff]<br />
tool = kdiff3<br />
<br />
[merge]<br />
tool = kdiff3<br />
* WINDOZE<br />
[difftool "kdiff3"]<br />
path = C:/Progra~1/KDiff3/kdiff3.exe<br />
trustExitCode = false<br />
[difftool]<br />
prompt = false<br />
[diff]<br />
tool = kdiff3<br />
[mergetool "kdiff3"]<br />
path = C:/Progra~1/KDiff3/kdiff3.exe<br />
trustExitCode = false<br />
[mergetool]<br />
keepBackup = false<br />
[merge]<br />
tool = kdiff3<br />
* LINUX Before - What a ridiculous pita... copy this into .git/config...<br />
[difftool "kdiff3"]<br />
path = /usr/bin/kdiff3<br />
trustExitCode = false<br />
[difftool]<br />
prompt = false<br />
[diff]<br />
tool = kdiff3<br />
[mergetool "kdiff3"]<br />
path = /usr/bin/kdiff3<br />
trustExitCode = false<br />
[mergetool]<br />
keepBackup = false<br />
[merge]<br />
tool = kdiff3<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Convert to a bare repo<br />
|-<br />
| Start with a normal git repo via [git init]; add your files, get it all set up. Then do this:<br />
cd repo<br />
Now you can copy-paste this...<br />
mv .git .. && rm -fr *<br />
mv ../.git .<br />
mv .git/* .<br />
rmdir .git<br />
git config --bool core.bare true<br />
cd ..<br />
Don't copy/paste these, you need to change repo name...<br />
mv repo repo.git # rename it for clarity<br />
git clone repo.git # (optional, if you want a live repo on the server where you have the bare repo)<br />
Then you can clean up old branches like daily and daily_grind, as needed.<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Convert bare to a mirror of remote (github, facebook, etc)<br />
|-<br />
| You need a bare mirror repo if you want to take someone else's repo and create your own bare to work from.<br />
If you did NOT specify --mirror when you first created the bare repo, you can convert to a mirror by adding these last two lines to config, underneath url:<br />
[remote "origin"]<br />
url = git@github.com:facebook/proxygen.git<br />
fetch = +refs/*:refs/*<br />
mirror = true<br />
Now you can fetch from the bare repo:<br />
git fetch<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create merge-to command<br />
|-<br />
| Add this handy alias command to a repo's .git/config (or ~/.gitconfig to get it everywhere)...<br />
[alias]<br />
merge-to = "!gitmergeto() { export tmp_branch=`git branch | grep '* ' | tr -d '* '` && git checkout $1 && git merge $tmp_branch && git checkout $tmp_branch; unset tmp_branch; }; gitmergeto"<br />
<br />
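Usage, from whatever branch you are working on (target branch name assumed):<br />
git merge-to develop # merges the current branch into develop, then puts you back where you were<br />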
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fix github diverge from local bare repo following README.md edit<br />
|-<br />
| Yes editing the README.md file on github will FUCK UP your downstream bare repo if you meanwhile push to it before pulling.<br />
Fixing it is a PAIN in the ASS, you have to create a new local repo and pull github into that, pull in from your other local repo, push to github, pull to your bare... <br />
git clone git@github.com:moodboom/quick-http.git quick-http-with-readme-conflict<br />
git remote add local ../quick-http<br />
git fetch local<br />
git merge local/master # merge in changes, likely trivial<br />
git push # pushes back to github<br />
cd ..<br />
mv quick-http.git quick-http.git__gone-out-of-sync-fu-github-readme-editor<br />
git clone git@github.com:moodboom/quick-http.git --bare<br />
cp quick-http.git__gone-out-of-sync-fu-github-readme-editor/config quick-http.git/<br />
And that MIGHT get you on your way... but I would no longer trust ANY of your local repos...<br />
This is a serious pita.<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Windows configure notepad++ editor<br />
|-<br />
| <br />
git config --global core.editor "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin"<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fix push behavior - ONLY PUSH CURRENT doh<br />
|-<br />
|<br />
git config --global push.default current<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Multiple upstreams<br />
|-<br />
| Use this to cause AUTOMATIC push/pull to a second origin:<br />
git remote set-url origin --push --add user1@repo1<br />
git remote set-url origin --push --add user2@repo2<br />
git remote -v show<br />
Leave out --push if you want to pull as well... but I'd be careful, it's better if code is changed in one client with this config, and then pushed to the multiple origins from there. Otherwise, things are GOING TO GET SYNCY-STINKY.<br />
|}<br />
<br />
=== Git branching strategies ===<br />
==== Simplified Gitflow ====<br />
This is awesome, tight, and well-capable of handling any app with a single primary release (like a website).<br />
RELEASE TAG<br />
o----------------------------o-----------------o------------o------> MASTER<br />
\ / \ \----------/ HOTFIX<br />
\ / \ \<br />
\----------------------/ \--------------------o-----o------> DEVELOP<br />
\ /<br />
\----------------/ FEATURE<br />
<br />
Read more [https://medium.com/goodtogoat/simplified-git-flow-5dc37ba76ea8 here] and [https://gist.github.com/vxhviet/9c4a522921ad857406033c4125f343a5 here].<br />
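A release under this model might be as simple as this sketch (version number assumed):<br />
git checkout master && git merge develop && git push<br />
git tag -a 1.3.0 -m "1.3.0" && git push origin 1.3.0<br />
git checkout develop # and get back to it<br />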
<br />
==== Gitflow ====<br />
I was a die-hard believer in gitflow for a while. It's very capable. Too capable. You MIGHT need it if you are supporting multiple versions in production... but in all my cases, it is overkill, compared to Simplified Gitflow. The classic diagram, originally from [http://nvie.com/posts/a-successful-git-branching-model/ here]...<br />
[[File:Git for nice release planning.png]]<br />
<br />
=== LFS ===<br />
Just don't use it. It's shite. If you get stuck working with a repo that requires it, and you are using ssh on linux which just won't work with LFS... you will probably end up with a damaged repo. Fix it with this:<br />
git read-tree HEAD && GIT_LFS_SKIP_SMUDGE=1 git checkout -f HEAD<br />
After that, use '''GIT_LFS_SKIP_SMUDGE=1''' during any git command:<br />
GIT_LFS_SKIP_SMUDGE=1 git pull # etc.<br />
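If you would rather not prefix every command, git-lfs can be told to skip smudging globally (assuming the git-lfs binary is installed):<br />
git lfs install --skip-smudge<br />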
<br />
=== My git pages (older) ===<br />
[[Track your changes to an open-source project with git]]<br />
<br />
[[Using git on Windows]]<br />
<br />
[[Portable git]]</div>Mhttps://bitpost.com/w/index.php?title=TrueNAS&diff=7429TrueNAS2023-12-21T22:10:52Z<p>M: /* Regularly do SMART, scrub, resilver */</p>
<hr />
<div>== Overview ==<br />
<br />
=== Pools ===<br />
<br />
TrueNAS provides storage via Pools. A pool is a bunch of raw drives gathered and managed as a set. My pools are one of these:<br />
{| class="wikitable"<br />
! Pool type<br />
! Description<br />
|-<br />
| style="color:red" |single drive<br />
|no TrueNAS advantage other than health checks<br />
|-<br />
| style="color:red" |raid1 pair<br />
|mirrored drives give normal write speeds, fast reads, and single-fail redundancy, at the cost of half the storage potential<br />
|-<br />
| style="color:red" |raid0 pair<br />
|striped drives give fast writes, normal reads, no redundancy, no storage cost<br />
|-<br />
| style="color:green" |raid of multiple drives<br />
|'''raidz''': optimization of read/write speed, redundancy, storage potential<br />
|}<br />
<br />
The three levels of raidz are:<br />
* raidz: one drive is consumed just for parity (no data storage, ie you only get (n-1) storage total), and one drive can be lost without losing any data; fastest; very dangerous to recover from lost drive ("resilver" process is brutal on remaining drives - don't wait)<br />
* raidz2: two drives for parity, two can be lost<br />
* raidz3: three drives for parity, three can be lost; slowest<br />
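TrueNAS builds pools in the UI, but the underlying ZFS operation is roughly this sketch (pool and device names assumed):<br />
zpool create tank raidz2 da1 da2 da3 da4 da5 da6 da7<br />
zpool status tank # verify layout and health<br />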
<br />
=== Datasets ===<br />
<br />
Every pool should have one child dataset. This is where we set the permissions, important for SAMBA access. We could have more than one child dataset, but I haven't had the need.<br />
<br />
==== Adding ====<br />
hive > Storage > Pools > mine (or any newly created pool) > Add Dataset<br />
<br />
Dataset settings:<br />
name #pool#-ds<br />
share type SMB<br />
<br />
Save, then continue...<br />
hive > Storage > Pools > mine (or any newly created pool) > mine-ds > Edit ACL<br />
user m<br />
group m<br />
ACL<br />
who everyone@<br />
type Allow<br />
Perm type Basic (NOTE: "Perm type Basic" is important!)<br />
Perm Full control (NOTE: this is not the default, you will need to change it)<br />
Flags type Basic<br />
Flags Inherit (NOTE: this is not the default, you will need to change it)<br />
(REMOVE on all other blocks)<br />
SAVE<br />
<br />
=== Windows SMB Shares ===<br />
<br />
Share each dataset as a Samba share under:<br />
Sharing > Windows Shares (SMB)<br />
<br />
* Use the pool name for the share name.<br />
* Use the same ACL as for the dataset.<br />
* Purpose: No presets<br />
<br />
WARNING: I had to set these Auxiliary parameters in the SMB config so that symlinks would be followed.<br />
<br />
* Services > SMB > Actions > configuration > Auxiliary Parameters:<br />
unix extensions = no<br />
follow symlinks = yes<br />
wide links = yes<br />
* Stop and restart SMB service<br />
<br />
== Maintenance ==<br />
<br />
=== Burn in a new drive ===<br />
[https://www.truenas.com/community/resources/hard-drive-burn-in-testing.92/ ALWAYS do this] even tho it's a PITA. Less pain than not doing it.<br />
<br />
I didn't do it for my raidz of seven 8TB drives. Murphy said FUCK YOU and one of them went bad. So... do the test, dumbass.<br />
<br />
But of course I found a way to stay lazy... TrueNAS has the ability to run SMART tests directly on a drive, so do it there. Or just wait for SMART failures to show up. God damn, laziness rules. Maybe. Fool.<br />
<br />
=== Regularly do SMART, scrub, resilver ===<br />
<br />
YOU MUST DO THIS REGULARLY!<br />
<br />
From [https://www.truenas.com/community/threads/self-healed-hard-drive.81138/ here]:<br />
A drive, vdev or pool is declared degraded if ZFS detects problems with the data. If you reboot the error count is reset. A resilver will heal the data errors if there is sufficient redundancy. ZFS will only spot the data issues on read, that’s why we have scrubs, a forced read of all the data to try and determine if there are any errors. So schedule regular scrubs are important. This will not tell you why the data is corrupted, for this you have S.M.A.R.T tests, you need to schedule those as well, both long and short.<br />
<br />
To get a handle on the situation as-is, you need to trigger a scrub and long SMART tests.<br />
<br />
Never do more than one of these at a time, and never do any of them during heavy disk usage (backups, eg).<br />
<br />
SMART can be done weekly (not too often or it will contribute to early wear-out of SSDs).<br />
<br />
Same for scrub.<br />
<br />
Resilver happens when a drive issue requires the data to be rebalanced or redistributed. Buckle up for this one!<br />
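From a TrueNAS shell, the rough manual equivalents look like this sketch (device and pool names assumed):<br />
smartctl -t long /dev/da0 # start a long self-test; read results later with: smartctl -a /dev/da0<br />
zpool scrub safe # kick off a scrub of the "safe" pool<br />
zpool status safe # watch scrub/resilver progress<br />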
<br />
=== Pool speed check ===<br />
<br />
CAST to SAFE: ~114MB/s write (compressed) on 60MB/s network<br />
<br />
Do this to test raw write speed from anywhere on the LAN to the [safe] pool:<br />
dd if=/dev/zero of=/mnt/safe/safe-dd/speedtest.data bs=4M count=10000<br />
# on hive: 42GB transferred in ~15sec at ~2.9GB/sec, WOW<br />
# on cast: 42GB copied in 371sec at 114MB/s - that seems in line with my network speed (see below)<br />
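Remember to remove the ~40GB test file afterwards:<br />
rm /mnt/safe/safe-dd/speedtest.data<br />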
<br />
To test the network bandwidth limit:<br />
# on hive<br />
iperf -s -w 2m # run in server mode with a 2MB TCP window<br />
# on another LAN machine<br />
iperf -c hive -w 2m -t 30s -i 1s<br />
# on cast: 1.51 GB at 477Mbits/sec aka 60MB/sec<br />
# I have a 1Gb switch, i guess that's all we get out of it?<br />
<br />
=== Replace a bad disk in a raidz pool ===<br />
My 7-drive raidz arrays can only lose ONE drive before they go boom, so you MUST replace bad disks immediately. raidz2 can lose two drives and raidz3 three, but for SSD pools the lose-one-drive raidz tradeoff is, to me, a sweet spot.<br />
<br />
* Watch TrueNAS for '''CRITICAL''' alerts that indicate a drive is failing its self-tests.<br />
* Make note of its serial number.<br />
* Find the drive in the pool, make note of its drive id (not needed but no harm).<br />
* Change the pool drive status from FAULTED to OFFLINE<br />
Storage > Pools > badpool > triple-dot Status > baddrive > triple-dot-status > FAULTED to OFFLINE<br />
* Power down the whole fucking PROXMOX machine<br />
* Pull it, and swap out bad drive for good<br />
* Replace it<br />
Storage > Pools > badpool > baddrive > triple-dot-status > REPLACE<br />
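The shell equivalent is roughly this sketch (pool name and device ids assumed):<br />
zpool offline badpool gptid/old-disk-id # take the failing disk offline<br />
# ...power down, swap the physical drive, power up...<br />
zpool replace badpool gptid/old-disk-id da9 # resilver onto the new disk<br />
zpool status badpool # watch resilver progress<br />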
<br />
=== Remove a bad pool ===<br />
<br />
* Make note of which drives use the pool; likely some are bad and some are good and perhaps worth reusing elsewhere.<br />
* Disconnect SMB connections to the pool<br />
** Update valid shares in mh-setup-samba-shares<br />
** Rerun mh-setup-samba-shares everywhere (eventually anyway)<br />
** One possible easier way to get SMB disconnected from the pool is to stop SMB service in TrueNAS<br />
** Sadly, to get through this for my splat pool, I had to remove pool, fail, restart hive, remove pool.<br />
* Pool > (gear) > Export/disconnect<br />
** [x] Delete configuration of shares that use this pool (to remove the associated SMB share)<br />
** [x] Destroy data on this pool (you MUST select this or the silly thing will attempt to export the data)<br />
<br />
=== Update TrueNAS ===<br />
Updating is baked into the UI, nice! And I have auto-updates enabled. So nice.<br />
<br />
These guys work hard on this, to make sure releases are well tested. Watch for alerts about newly available updates. Do not update past the current release!<br />
<br />
System > Update > [Train] (ensure you have a good one selected; on occasion, you'll want to CHANGE it to select a newer stable release!)<br />
Give the system a minute to load available updates...<br />
Press Download available updates > The modal will ask if you want to apply and restart > Say yes<br />
<br />
That's about it!<br />
<br />
== Configuration ==<br />
<br />
=== Set up user ===<br />
I set up m user (1000) and m group (1000)<br />
<br />
=== Set up alert emails ===<br />
Go to one of your google accounts to get an App password. It has to be an account that has 2fa turned on, bleh, so don't use moodboom@gmail.com. I went with abettersoftwaretrader@gmail.com.<br />
<br />
Accounts > Users > root > edit password > abettersoftwaretrader@gmail.com<br />
System > Email > from email > abettersoftwaretrader@gmail.com, smtp.gmail.com 465 Implicit SSL, SMTP auth: (email/API password)<br />
<br />
=== Set up user ssh ===<br />
This was not fun.<br />
* Set up user<br />
* You have to set password ON and make sure to check [x] Allow sudo<br />
* Make sure to allow Samba Authentication for m user that is used for samba<br />
* Add public key to user<br />
* Create a valid folder on the /mnt NAS shares for the user's home; you can mkdir using samba; I created:<br />
/mnt/safe/safe-ds/software/apps/hive-home<br />
* set the user's home to that ^; turn off password auth<br />
* Turn on SSH service<br />
* System > SSH Keypairs > Add SSH keypair for main user m<br />
* System > SSH Connections > Add, use localhost, keypair from prev step<br />
It should work but it does not!<br />
* Open a TrueNAS prompt via the Proxmox console<br />
* Go to the home dir, there should be an .ssh there now<br />
* Reduce permissions on both HOME DIR (700) and .ssh/KEY (400)<br />
* Get a shell and run `sudo visudo` and add this line:<br />
m ALL=(ALL) NOPASSWD: ALL<br />
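A quick check that key auth and passwordless sudo both work (hostname assumed):<br />
ssh m@hive 'sudo whoami' # should print root with no password prompt<br />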
<br />
Finally! It works!<br />
<br />
== Troubleshooting ==<br />
<br />
SOME of my shares were throwing '''Permission Denied''' errors on mv. Solutions:<br />
* I applied permissions again, recursively, then restarted the SMB service on hive and the problem went away.<br />
* You can also always go to the melange hive console, request a shell, and things always seem to work from there (but you're in FreeBSD world and don't have any beauty scripts like mh-move-torrent!)</div>
<hr />
<div>=== TASKS ===<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! New shared central bare repo<br />
|-<br />
| On central server (aka morosoph):<br />
cd development(...)<br />
git init --bare --shared mynewthang.git<br />
On development box<br />
cd development(...)<br />
git clone morosoph:development/mynewthang.git<br />
# this will create a new empty repo with no branch yet<br />
# TO CREATE MASTER BRANCH (this is the only way):<br />
# create files, git add, git commit, git push<br />
Back on bitpost<br />
git clone mynewthang.git # to create a working copy on server, if desired<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! create shared central repo for existing code<br />
|-<br />
| Create a bare repo with .git suffix<br />
git init --bare --shared mything.git<br />
Go to existing code and clone the repo next to it, with a temp name. <br />
Move .git into the existing code.<br />
Add code, add a .gitignore as needed, and you're all set.<br />
cd mything/..<br />
git clone (bare-repo-host-and-path)mything.git mything-temp<br />
mv mything-temp/.git mything/<br />
rm -rf mything-temp<br />
cd mything<br />
subl .gitignore # as needed<br />
git add (whatever you want to track)<br />
git commit -a -m "init repo" && git push<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fetch a branch from remote without checking it out<br />
|-<br />
| Good for when you are rebasing a feature branch.<br />
git fetch origin develop:develop<br />
You would think <code>git fetch --all</code> would do it but does not (it fetches the active branch from ''all origins'' - seriously wtf, who ever wants THAT??).<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Completely reset an out-of-sync branch to 100% match the remote<br />
|-<br />
| Sometimes some other idiot rebased the remote branch on you. Make sure you are on the right branch, locally. Then to completely force-reset it:<br />
git reset --hard origin/thebranchname<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Set default branch of a bare repo<br />
|-<br />
| If you want the default branch of a bare repo to be something other than master:<br />
git branch<br />
* master<br />
moodboom-quick-http<br />
git symbolic-ref HEAD refs/heads/moodboom-quick-http<br />
git branch<br />
master<br />
* moodboom-quick-http<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Find a file across branches<br />
|-<br />
| It's clunky, two steps, and you have to glob out the whole fucking name:<br />
# get the commits that involved the filename<br />
git log --all -- '**/*namebits*'<br />
# even better, get the filenames too, and see if it was added or removed:<br />
git log --all --stat -- '**/*namebits*'<br />
# now find the branch with one of those commits:<br />
git branch -a --contains #commithash#<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! gitflow<br />
|-<br />
| Gitflow is awesome, using it myself and everywhere I work these days (2020).<br />
* Devs work out of develop branch<br />
* Devs create feature branches off develop for any decent-sized work<br />
* Once develop is stable and you are ready for a release:<br />
git tag -a -m "#MAJOR#.#MINOR#".0 #MAJOR#.#MINOR#.0<br />
git checkout -b release/release_#MAJOR#.#MINOR#<br />
git push --set-upstream origin release_#MAJOR#.#MINOR#<br />
git checkout master && git merge release/release_#MAJOR#.#MINOR# && git push<br />
git checkout develop # and get back to it!<br />
* Do hotfixes as needed in release branch, tagged #MAJOR#.#MINOR#.++, merged back into master and develop<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Interactive rebase with squash<br />
|-<br />
| Excellent to do when on your own feature branch. Illegal to do if branch is shared AT ALL!<br />
git rebase -i myparentbranch<br />
# work through squash and merge - gitlens may help with squash if you use vscode for EDITOR<br />
git push -f<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Push any branch from bare to origin<br />
|-<br />
| Good for when you are force-pushing a branch rebase.<br />
git push [-f] origin mybranch:mybranch<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fetch from one origin (eg gitlab) and push to another<br />
|-<br />
| My GitLab is usually a mirror, and therefore a push target. But if you edit a file in GitLab, you may need to pull it from remote "gitlab" and push it to remote "sc", like this:<br />
git fetch gitlab develop:develop<br />
git push sc<br />
This assumes you have something like the following in your config:<br />
[remote "sc"]<br />
url = git@shitcutter.com:the-digital-age/rad-scripts.git<br />
fetch = +refs/heads/*:refs/remotes/origin/*<br />
[remote "github.com"]<br />
url = git@github.com:moodboom/rad-scripts.git<br />
fetch = +refs/heads/*:refs/remotes/origin/*<br />
[remote "gitlab.com"]<br />
url = git@gitlab.com:moodboom/rad-scripts.git<br />
fetch = +refs/heads/*:refs/remotes/origin/*<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Merging conflicts after diverging<br />
|-<br />
| Revert local changes in a file to HEAD<br />
git checkout -- path/to/file.txt<br />
Discard ALL LOCAL COMMITS and get the (possibly diverged) remote instead<br />
git reset --hard origin/master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create and push a feature branch<br />
|-<br />
| This will move recent commits AND uncommitted changes into a new branch (but you probably want to finish by cleaning out commits from starting branch, and repulling after you merge the feature).<br />
git checkout -b feature/whiz-bang<br />
<br />
# Do this ONCE: git config --global push.default current<br />
# From then on:<br />
git push -u<br />
<br />
# OR, if you don't want the config, you have to be more specific:<br />
# git push -u origin feature/whiz-bang<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! getting upstream commits into your GitLab GITFLOW fork<br />
|-<br />
| NOTE: This doesn't work from a bitpost bare repo, so you need to make a new direct clone of your GitLab fork, first, if you don't have one yet.<br />
<br />
This is very easy if you have left the master branch alone for the parent's commits, and keep your add-on commits in a release-x.x branch, as we have for SWS and SWSS.<br />
<br />
cdl<br />
# Clone local repo directly from GitLab (already done on cobra)<br />
git clone git@gitlab.com:moodboom/Simple-Web-Server.git SWS-gitlab-moodboom<br />
cd SWS-gitlab-moodboom<br />
# make sure master is checked out<br />
git branch<br />
# add parent as remote<br />
git remote add ole-upstream git@gitlab.com:eidheim/Simple-Web-Server.git<br />
git fetch ole-upstream<br />
git rebase ole-upstream/master<br />
git push -f origin master<br />
You can now delete the fresh clone, it has done its job. Or leave it for ease-of-use for next rebase.<br />
<br />
Now update your bare repo on bitpost, to keep things in sync.<br />
git fetch origin master:master -f<br />
<br />
Next, go to dev repo, pull master. Check out release, create a new release from that, and rebase master. (or just create a new release branch off master if that's what you want) It is the gitflow way!<br />
<br />
Push your new branch to bare, then push bare to GitLab via something like:<br />
git push --set-upstream origin release/abt-0.0.3<br />
<br />
To complete SW(S)S rebase, update mh-install-sws to use the new branch. Then run it on all your dev boxes, whoop. (Then, code up any fixes, sigh... and push em.. .sigh... and get on with it!)<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! getting upstream commits into your GitLab fork<br />
|-<br />
| NOTE: This doesn't work from a bitpost bare repo, so you need to make a new direct clone of your GitLab fork, first, if you don't have one yet.<br />
Add the remote, call it something specific:<br />
git remote add someauthor-upstream https://gitlab.com/someauthor/theprojectiforked<br />
Fetch all the branches of that remote into remote-tracking branches, such as upstream/master:<br />
git fetch someauthor-upstream<br />
Get on the branch where you are tracking the rebase. Typically your master branch but can be whatever:<br />
git checkout tlsv12 # or master or next or...<br />
Rewrite your branch so that any commits of yours that aren't already in upstream are replayed on top of that other branch (if you do a straight merge instead of rebase you'll screw up the upstream!):<br />
git rebase someauthor-upstream/master<br />
If you haven't done so, in GitLab, go to [https://gitlab.com/moodboom/Simple-WebSocket-Server/-/settings/repository#js-protected-branches-settings the "Protected Branch" settings] and remove protection from master - it's just a fact that you're going to need to force-push to master.<br />
<br />
IF the branch that was the target of the rebase existed, force the push in order to push it to your own forked repository on GitLab. You only need to use the -f the first time after you've rebased:<br />
git push -f origin master<br />
ELSE if the branch you merged into is a new creation, set its upstream when you push:<br />
git push --set-upstream origin tlsv12<br />
You will want to force-fetch to update the bare repo you may have on bitpost, DO THIS NOW or you will screw things up badly later:<br />
[Simple-WebSocket-Server.git] git fetch origin master:master -f<br />
You should also force-update ALL your dev repos, NOW, for the same reason:<br />
git reset --hard HEAD^^^^^^ && git pull<br />
NOTE that you may need to remove a remote-tracking branch if you don't need it any more. It's stupidly painful to get right, eg:<br />
[Simple-WebSocket-Server.git] git branch -rd eidheim/Simple-WebSocket-Server/master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! getting upstream commits into your GitHub fork<br />
|-<br />
| From [http://stackoverflow.com/a/7244456/717274 so...]<br />
Add the remote, call it something specific:<br />
git remote add someauthor-upstream https://github.com/someauthor/theprojectiforked.git<br />
Fetch all the branches of that remote into remote-tracking branches, such as upstream/master:<br />
git fetch someauthor-upstream<br />
Get on the branch where you are tracking the rebase. Typically your master branch but can be whatever:<br />
git checkout tlsv12 # or master or next or...<br />
Rewrite your branch so that any commits of yours that aren't already in upstream are replayed on top of that other branch (if you do a straight merge instead of rebase you'll screw up the upstream!):<br />
git rebase someauthor-upstream/master<br />
IF the branch that was the target of the rebase existed, force the push in order to push it to your own forked repository on GitHub. You only need to use the -f the first time after you've rebased:<br />
git push -f origin master<br />
ELSE if the branch you merged into is a new creation, set its upstream when you push:<br />
git push --set-upstream origin tlsv12<br />
Now master has the latest commits from the fork origin. You can rebase onto it, if you've been working in a branch (good):<br />
git checkout mybranch<br />
git rebase master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Clone a bare repo (eg github, gitlab, bb) into a bare repo<br />
|-<br />
| Say you want a bare repo to be shared by all your dev environments, but you need to push/pull THAT from a central bare repo, too.<br />
git clone --bare --shared git@bitbucket.org:equityshift/es-demo.git<br />
cd es-demo.git<br />
git config remote.origin.fetch "+*:*"<br />
git fetch --all<br />
I was surprised that this was difficult at all, and may still have some lessons to learn...<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create new branch on server, pull to client<br />
|-<br />
| <br />
# ON CENTRAL SERVER<br />
git checkout master # as needed; we are assuming that master is clean enough as a starting point<br />
git checkout -b mynewbranchy<br />
<br />
# HOWEVER, use this instead if you need a new "clean" repo and even master is dirty...<br />
# You need the rm because git "leaves your working folder intact".<br />
git checkout --orphan mynewbranchy<br />
git rm -rf .<br />
<br />
# ON CLIENT<br />
git pull<br />
git checkout -b mynewbranchy origin/mynewbranchy<br />
# if files are in the way from the previously checked-out branch, you can force it...<br />
git checkout -f -b mynewbranchy origin/mynewbranchy<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Merge changes in a single file<br />
|-<br />
| Explanation is [https://stackoverflow.com/a/11593308/717274 here]. <br />
git checkout mybranch<br />
git checkout --patch develop my/single/file.cpp<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Remove old branches<br />
|-<br />
| Explanation is [https://stackoverflow.com/a/23961231/717274 here]. <br />
Remote:<br />
git push origin --delete <branch><br />
Local:<br />
git branch -d <branch><br />
git fetch <remote> --prune # Delete multiple obsolete tracking branches<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Work with two local repos<br />
|-<br />
| Set up a remote, then fetch it as master.<br />
cd repoToChange<br />
git remote add otherplace ../../wherever/../gitrepo<br />
git ls-remote otherplace # verify it looks ok, figure out which branch you like (if not master)<br />
git fetch otherplace # gets it all<br />
git checkout --track otherplace/master # or other branch as needed; this creates the branch and sets remote in one step, cool<br />
Set up a remote, then fetch it into a non-master branch, and push it to the active origin.<br />
cd repoToChange<br />
git remote add otherplace ../../wherever/../gitrepo<br />
git ls-remote otherplace # verify it looks ok, figure out which branch you like (if not master)<br />
git fetch otherplace # gets it all<br />
git checkout otherplace/master # creates it detached, good because we need to name the new branch something other than master<br />
git checkout -b new_otherplace_branchname # creates new local branch with a good name<br />
git push --set-upstream origin new_otherplace_branchname # takes the branch from the OLD origin and pushes it to the ACTIVE origin, cool!<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Pull when untracked files are in the way<br />
|-<br />
| This will pull, forcing untracked files to be overwritten by newly tracked ones in the repo:<br />
git fetch --all<br />
git reset --hard origin/mymatchingbranch<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create new branch when untracked files are in the way<br />
|-<br />
|<br />
git checkout -b bj143 origin/bj143<br />
git : error: The following untracked working tree files would be overwritten by checkout:<br />
(etc)<br />
<br />
TOTAL PURGE FIX (too much):<br />
git clean -d -fn ""<br />
-d dirs too<br />
-f force, required<br />
-x include ignored files (don't use this)<br />
-n dry run<br />
<br />
BEST FIX (just overwrite what is in the way):<br />
git checkout -f -b bj143 origin/bj143<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Recreate repo<br />
|-<br />
|<br />
git clone ssh://m@thedigitalmachine.com/home/m/development/thedigitalage/ampache-with-hangthedj-module<br />
cd ampache-with-hangthedj-module<br />
git checkout -b daily_grind origin/daily_grind<br />
If you already have the daily_grind branches and just need to connect them:<br />
git branch -u origin/daily_grind daily_grind<br />
|}<br />
<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Connect to origin after the fact<br />
|-<br />
|<br />
git remote add origin ssh:// m@bitpost.com/home/m/development/logs<br />
git fetch<br />
From ssh:// bitpost/home/m/development/logs<br />
* [new branch] daily_grind -> origin/daily_grind<br />
* [new branch] master -> origin/master<br />
git branch -u origin/daily_grind daily_grind<br />
git checkout master<br />
git branch -u origin/master master<br />
|}<br />
<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Ignore local and remote changes to a file <br />
|-<br />
| This is helpful for conf files that need local-specific modifications that shouldn't be pushed. You have to toggle it on/off as needed to get updates! See [http://stackoverflow.com/questions/4348590/how-can-i-make-git-ignore-future-revisions-to-a-file/39776107#39776107 my SO answer].<br />
<br />
PREVENT COMMIT OF CHANGES TO A LOCAL FILE<br />
-----------------------------------------<br />
git update-index --skip-worktree apps/views/_partials/jsIncludes.scala.html<br />
<br />
RESET TO GET CHANGES AGAIN<br />
--------------------------<br />
git update-index --no-skip-worktree apps/views/_partials/jsIncludes.scala.html<br />
<br />
LIST SKIPPED FILES<br />
------------------<br />
git ls-files -v . | grep ^S<br />
S app/views/_partials/jsIncludes.scala.html<br />
-----------------------------------------<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Replace name and email of last commit<br />
|-<br />
| Reset the name and email of the last commit, when you realize you forgot to set them first:<br />
git commit --amend --author="First Last <email>" --no-edit<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Simple tagging (when not using git-sync)<br />
|-<br />
| To add and push a tag to HEAD:<br />
git tag -a 1.2.0 -m "1.2.0"<br />
git push origin 1.2.0<br />
<br />
To add and push a tag attached to a commit with current time as timestamp:<br />
git tag -a 1.2.0 1bc92e2f -m "1.2.0"<br />
git push origin 1.2.0<br />
<br />
To tag a commit while ensuring the timestamp matches is slightly more complicated (but not bad). More details [https://stackoverflow.com/a/21759466/717274 here]:<br />
# Set the HEAD to the old commit that we want to tag<br />
git checkout 9fceb02<br />
<br />
# temporarily set the date to the date of the HEAD commit, and add the tag<br />
GIT_COMMITTER_DATE="$(git show --format=%aD | head -1)" \<br />
git tag -a v1.2 -m"v1.2"<br />
<br />
# push to origin<br />
git push origin --tags<br />
<br />
# set HEAD back to whatever you want it to be<br />
git checkout master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Changing branches in a project with submodules<br />
|-<br />
|<br />
# always reset the @*$ submodules to proper commits<br />
git checkout develop && git submodule update<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Hard-reset a misbehaving submodule to parent commit version<br />
|-<br />
|<br />
git submodule deinit -f .<br />
git submodule update --init<br />
|}<br />
<br />
=== CONFIGURATION ===<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Set name and email<br />
|-<br />
| Globally on one machine (note the machine name at end of user name):<br />
git config --global user.email m@thedigitalmachine.com; git config user.name "Michael Behrns-Miller [cast]"<br />
Override for a repository:<br />
git config user.email mbm@equityshift.io; git config user.name "MBM [cast]"<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Visual difftool and mergetool setup<br />
|-<br />
| Meld is purdy, let's kick its tires. Hope it actually works...<br />
git config --global diff.tool meld<br />
git config --global merge.tool meld<br />
git config --global --add difftool.prompt false<br />
I used to set up kdiff3 manually, like this... (gross)<br />
* LINUX - put this in ~/.gitconfig<br />
[diff]<br />
tool = kdiff3<br />
<br />
[merge]<br />
tool = kdiff3<br />
* WINDOZE<br />
[difftool "kdiff3"]<br />
path = C:/Progra~1/KDiff3/kdiff3.exe<br />
trustExitCode = false<br />
[difftool]<br />
prompt = false<br />
[diff]<br />
tool = kdiff3<br />
[mergetool "kdiff3"]<br />
path = C:/Progra~1/KDiff3/kdiff3.exe<br />
trustExitCode = false<br />
[mergetool]<br />
keepBackup = false<br />
[merge]<br />
tool = kdiff3<br />
* LINUX Before - What a ridiculous pita... copy this into .git/config...<br />
[difftool "kdiff3"]<br />
path = /usr/bin/kdiff3<br />
trustExitCode = false<br />
[difftool]<br />
prompt = false<br />
[diff]<br />
tool = kdiff3<br />
[mergetool "kdiff3"]<br />
path = /usr/bin/kdiff3<br />
trustExitCode = false<br />
[mergetool]<br />
keepBackup = false<br />
[merge]<br />
tool = kdiff3<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Convert to a bare repo<br />
|-<br />
| Start with a normal git repo via [git init]; add your files, get it all set up. Then do this:<br />
cd repo<br />
Now you can copy-paste this...<br />
mv .git .. && rm -fr *<br />
mv ../.git .<br />
mv .git/* .<br />
rmdir .git<br />
git config --bool core.bare true<br />
cd ..<br />
Don't copy/paste these, you need to change repo name...<br />
mv repo repo.git # rename it for clarity<br />
git clone repo.git # (optional, if you want a live repo on the server where you have the bare repo)<br />
Then you can clean up old branches like daily and daily_grind, as needed.<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Convert bare to a mirror of remote (github, facebook, etc)<br />
|-<br />
| You need a bare mirror repo if you want to take someone else's repo and create your own bare to work from.<br />
If you did NOT specify --mirror when you first created the bare repo, you can convert to a mirror by adding these last two lines to config, underneath url:<br />
[remote "origin"]<br />
url = git@github.com:facebook/proxygen.git<br />
fetch = +refs/*:refs/*<br />
mirror = true<br />
Now you can fetch from the bare repo:<br />
git fetch<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create merge-to command<br />
|-<br />
| Add this handy alias command to all git repos' .config file...<br />
[alias]<br />
merge-to = "!gitmergeto() { export tmp_branch=`git branch | grep '* ' | tr -d '* '` && git checkout $1 && git merge $tmp_branch && git checkout $tmp_branch; unset tmp_branch; }; gitmergeto"<br />
<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fix github diverge from local bare repo following README.md edit<br />
|-<br />
| Yes editing the README.md file on github will FUCK UP your downstream bare repo if you meanwhile push to it before pulling.<br />
Fixing it is a PAIN in the ASS, you have to create a new local repo and pull github into that, pull in from your other local repo, push to github, pull to your bare... <br />
git clone git@github.com:moodboom/quick-http.git quick-http-with-readme-conflict<br />
git remote add local ../quick-http<br />
git fetch local<br />
git merge local/master # merge in changes, likely trivial<br />
git push # pushes back to github<br />
cd ..<br />
mv quick-http.git quick-http.git__gone-out-of-sync-fu-github-readme-editor<br />
git clone git@github.com:moodboom/quick-http.git --bare<br />
cp quick-http.git__gone-out-of-sync-fu-github-readme-editor/config quick-http.git/<br />
And that MIGHT get you on your way... but I would no longer trust ANY of your local repos...<br />
This is a serious pita.<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Windows configure notepad++ editor<br />
|-<br />
| <br />
git config --global core.editor "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin"<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fix push behavior - ONLY PUSH CURRENT doh<br />
|-<br />
|<br />
git config --global push.default current<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Multiple upstreams<br />
|-<br />
| Use this to cause AUTOMATIC push/pull to a second origin:<br />
git remote set-url origin --push --add user1@repo1<br />
git remote set-url origin --push --add user2@repo2<br />
git remote -v show<br />
Leave out --push if you want to pull as well... but I'd be careful, it's better if code is changed in one client with this config, and then pushed to the multiple origins from there. Otherwise, things are GOING TO GET SYNCY-STINKY.<br />
|}<br />
<br />
=== Git branching strategies ===<br />
==== Simplified Gitflow ====<br />
This is awesome, tight, and well-capable of handling any app with a single primary release (like a website).<br />
RELEASE TAG<br />
o----------------------------o-----------------o------------o------> MASTER<br />
\ / \ \----------/ HOTFIX<br />
\ / \ \<br />
\----------------------/ \--------------------o-----o------> DEVELOP<br />
\ /<br />
\----------------/ FEATURE<br />
<br />
Read more [https://medium.com/goodtogoat/simplified-git-flow-5dc37ba76ea8 here] and [https://gist.github.com/vxhviet/9c4a522921ad857406033c4125f343a5 here].<br />
<br />
==== Gitflow ====<br />
I was a die-hard believer in gitflow for a while. It's very capable. Too capable. You MIGHT need it if you are supporting multiple versions in production... but in all my cases, it is overkill, compared to Simplified Gitflow. The classic diagram, originally from [http://nvie.com/posts/a-successful-git-branching-model/ here]...<br />
[[File:Git for nice release planning.png]]<br />
<br />
=== LFS ===<br />
Just don't use it. It's shite. If you get stuck working with a repo that requires it, and you are using ssh on linux which just won't work with LFS... check out like this, so that LFS does not try to grab the large files (and fail on auth, stupid SHIT)...<br />
git read-tree HEAD && GIT_LFS_SKIP_SMUDGE=1 git checkout -f HEAD<br />
<br />
=== My git pages (older) ===<br />
[[Track your changes to an open-source project with git]]<br />
<br />
[[Using git on Windows]]<br />
<br />
[[Portable git]]</div>Mhttps://bitpost.com/w/index.php?title=Git&diff=7427Git2023-12-20T19:41:21Z<p>M: </p>
<hr />
<div>=== TASKS ===<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! New shared central bare repo<br />
|-<br />
| On central server (aka morosoph):<br />
cd development(...)<br />
git init --bare --shared mynewthang.git<br />
On development box<br />
cd development(...)<br />
git clone morosoph:development/mynewthang.git<br />
# this will create a new empty repo with no branch yet<br />
# TO CREATE MASTER BRANCH (this is the only way):<br />
# create files, git add, git commit, git push<br />
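 # For example, a minimal first push (file name is just an illustration):<br />
 touch README.md && git add README.md<br />
 git commit -m "initial commit"<br />
 git push -u origin master<br />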
Back on bitpost<br />
git clone mynewthang.git # to create a working copy on server, if desired<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! create shared central repo for existing code<br />
|-<br />
| Create a bare repo with .git suffix<br />
git init --bare --shared mything.git<br />
Go to existing code and clone the repo next to it, with a temp name. <br />
Move .git into the existing code.<br />
Add code, add a .gitignore as needed, and you're all set.<br />
cd mything/..<br />
git clone (bare-repo-host-and-path)mything.git mything-temp<br />
mv mything-temp/.git mything/<br />
rm -rf mything-temp<br />
cd mything<br />
subl .gitignore # as needed<br />
git add (whatever you want to track)<br />
git commit -a -m "init repo" && git push<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fetch a branch from remote without checking it out<br />
|-<br />
| Good for when you are rebasing a feature branch.<br />
git fetch origin develop:develop<br />
You would think <code>git fetch --all</code> would do it, but it does not: it fetches from ''all remotes'' and only updates the remote-tracking branches, never your local ones.<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Completely reset an out-of-sync branch to 100% match the remote<br />
|-<br />
| Sometimes some other idiot rebased the remote branch on you. Make sure you are on the right branch, locally. Then fetch and completely force-reset it:<br />
 git fetch origin  # refresh origin/thebranchname first<br />
 git reset --hard origin/thebranchname<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Set default branch of a bare repo<br />
|-<br />
| If you want the default branch of a bare repo to be something other than master:<br />
git branch<br />
* master<br />
moodboom-quick-http<br />
git symbolic-ref HEAD refs/heads/moodboom-quick-http<br />
git branch<br />
master<br />
* moodboom-quick-http<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Find a file across branches<br />
|-<br />
| It's clunky, two steps, and you have to glob out the whole fucking name:<br />
# get the commits that involved the filename<br />
git log --all -- '**/*namebits*'<br />
# even better, get the filenames too, and see if it was added or removed:<br />
git log --all --stat -- '**/*namebits*'<br />
# now find the branch with one of those commits:<br />
git branch -a --contains #commithash#<br />
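A possible one-step alternative (hedged; <code>--source</code> prints the ref each commit was reached from, which may save you the second step):<br />
 git log --all --source --stat -- '**/*namebits*'<br />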
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! gitflow<br />
|-<br />
| Gitflow is awesome, using it myself and everywhere I work these days (2020).<br />
* Devs work out of develop branch<br />
* Devs create feature branches off develop for any decent-sized work<br />
* Once develop is stable and you are ready for a release:<br />
git tag -a -m "#MAJOR#.#MINOR#".0 #MAJOR#.#MINOR#.0<br />
git checkout -b release/release_#MAJOR#.#MINOR#<br />
git push --set-upstream origin release_#MAJOR#.#MINOR#<br />
git checkout master && git merge release/release_#MAJOR#.#MINOR# && git push<br />
git checkout develop # and get back to it!<br />
* Do hotfixes as needed in release branch, tagged #MAJOR#.#MINOR#.++, merged back into master and develop<br />
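A hedged sketch of one such hotfix (version numbers and branch name are illustrative):<br />
 git checkout release/release_1.2<br />
 # ...commit the fix...<br />
 git tag -a -m "1.2.1" 1.2.1<br />
 git checkout master && git merge release/release_1.2 && git push<br />
 git checkout develop && git merge release/release_1.2 && git push<br />
 git push origin 1.2.1<br />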
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Interactive rebase with squash<br />
|-<br />
| Excellent to do when on your own feature branch. Illegal to do if branch is shared AT ALL!<br />
git rebase -i myparentbranch<br />
# work through squash and merge - gitlens may help with squash if you use vscode for EDITOR<br />
git push -f<br />
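The editor shows a todo list; a made-up example (hashes and messages are fake) where everything collapses into the first commit:<br />
 pick   a1b2c3d Add whiz-bang endpoint<br />
 squash d4e5f6a Fix tests<br />
 squash 789abcd Review tweaks<br />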
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Push any branch from bare to origin<br />
|-<br />
| Good for when you are force-pushing a branch rebase.<br />
git push [-f] origin mybranch:mybranch<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fetch from one origin (eg gitlab) and push to another<br />
|-<br />
| My GitLab is usually a mirror, and therefore a push target. But if you edit a file in GitLab, you may need to pull it from remote "gitlab.com" and push it to remote "sc", like this:<br />
 git fetch gitlab.com develop:develop<br />
git push sc<br />
This assumes you have something like the following in your config:<br />
[remote "sc"]<br />
url = git@shitcutter.com:the-digital-age/rad-scripts.git<br />
fetch = +refs/heads/*:refs/remotes/origin/*<br />
[remote "github.com"]<br />
url = git@github.com:moodboom/rad-scripts.git<br />
fetch = +refs/heads/*:refs/remotes/origin/*<br />
[remote "gitlab.com"]<br />
url = git@gitlab.com:moodboom/rad-scripts.git<br />
fetch = +refs/heads/*:refs/remotes/origin/*<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Merging conflicts after diverging<br />
|-<br />
| Revert local changes in a file to HEAD<br />
git checkout -- path/to/file.txt<br />
Discard ALL LOCAL COMMITS and get the (possibly diverged) remote instead<br />
git reset --hard origin/master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create and push a feature branch<br />
|-<br />
| This will carry recent commits AND uncommitted changes into a new branch (but you probably want to finish by cleaning the commits out of the starting branch, and repulling after you merge the feature).<br />
git checkout -b feature/whiz-bang<br />
<br />
# Do this ONCE: git config --global push.default current<br />
# From then on:<br />
git push -u<br />
<br />
# OR, if you don't want the config, you have to be more specific:<br />
# git push -u origin feature/whiz-bang<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! getting upstream commits into your GitLab GITFLOW fork<br />
|-<br />
| NOTE: This doesn't work from a bitpost bare repo, so you need to make a new direct clone of your GitLab fork, first, if you don't have one yet.<br />
<br />
This is very easy if you have left the master branch alone for the parent's commits, and keep your add-on commits in a release-x.x branch, as we have for SWS and SWSS.<br />
<br />
cdl<br />
# Clone local repo directly from GitLab (already done on cobra)<br />
git clone git@gitlab.com:moodboom/Simple-Web-Server.git SWS-gitlab-moodboom<br />
cd SWS-gitlab-moodboom<br />
# make sure master is checked out<br />
git branch<br />
# add parent as remote<br />
git remote add ole-upstream git@gitlab.com:eidheim/Simple-Web-Server.git<br />
git fetch ole-upstream<br />
git rebase ole-upstream/master<br />
git push -f origin master<br />
You can now delete the fresh clone, it has done its job. Or leave it for ease-of-use for next rebase.<br />
<br />
Now update your bare repo on bitpost, to keep things in sync.<br />
git fetch origin master:master -f<br />
<br />
Next, go to your dev repo and pull master. Check out release, create a new release branch from it, and rebase it onto master (or just create a new release branch off master, if that's what you want). It is the gitflow way!<br />
<br />
Push your new branch to bare, then push bare to GitLab via something like:<br />
git push --set-upstream origin release/abt-0.0.3<br />
<br />
To complete the SW(S)S rebase, update mh-install-sws to use the new branch. Then run it on all your dev boxes, whoop. (Then code up any fixes, sigh... and push 'em... sigh... and get on with it!)<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! getting upstream commits into your GitLab fork<br />
|-<br />
| NOTE: This doesn't work from a bitpost bare repo, so you need to make a new direct clone of your GitLab fork, first, if you don't have one yet.<br />
Add the remote, call it something specific:<br />
git remote add someauthor-upstream https://gitlab.com/someauthor/theprojectiforked<br />
Fetch all the branches of that remote into remote-tracking branches, such as upstream/master:<br />
git fetch someauthor-upstream<br />
Get on the branch where you are tracking the rebase. Typically your master branch but can be whatever:<br />
git checkout tlsv12 # or master or next or...<br />
Rewrite your branch so that any commits of yours that aren't already in upstream are replayed on top of that other branch (if you do a straight merge instead of rebase you'll screw up the upstream!):<br />
git rebase someauthor-upstream/master<br />
If you haven't done so, in GitLab, go to [https://gitlab.com/moodboom/Simple-WebSocket-Server/-/settings/repository#js-protected-branches-settings the "Protected Branch" settings] and remove protection from master - it's just a fact that you're going to need to force-push to master.<br />
<br />
IF the branch that was the target of the rebase existed, force the push in order to push it to your own forked repository on GitLab. You only need to use the -f the first time after you've rebased:<br />
git push -f origin master<br />
ELSE if the branch you merged into is a new creation, set its upstream when you push:<br />
git push --set-upstream origin tlsv12<br />
You will want to force-fetch to update the bare repo you may have on bitpost, DO THIS NOW or you will screw things up badly later:<br />
[Simple-WebSocket-Server.git] git fetch origin master:master -f<br />
You should also force-update ALL your dev repos, NOW, for the same reason:<br />
 git fetch origin && git reset --hard origin/master   # blow away local state to match the rebased upstream<br />
NOTE that you may need to remove a remote-tracking branch if you don't need it any more. It's stupidly painful to get right, eg:<br />
[Simple-WebSocket-Server.git] git branch -rd eidheim/Simple-WebSocket-Server/master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! getting upstream commits into your GitHub fork<br />
|-<br />
| From [http://stackoverflow.com/a/7244456/717274 so...]<br />
Add the remote, call it something specific:<br />
git remote add someauthor-upstream https://github.com/someauthor/theprojectiforked.git<br />
Fetch all the branches of that remote into remote-tracking branches, such as upstream/master:<br />
git fetch someauthor-upstream<br />
Get on the branch where you are tracking the rebase. Typically your master branch but can be whatever:<br />
git checkout tlsv12 # or master or next or...<br />
Rewrite your branch so that any commits of yours that aren't already in upstream are replayed on top of that other branch (if you do a straight merge instead of rebase you'll screw up the upstream!):<br />
git rebase someauthor-upstream/master<br />
IF the branch that was the target of the rebase existed, force the push in order to push it to your own forked repository on GitHub. You only need to use the -f the first time after you've rebased:<br />
git push -f origin master<br />
ELSE if the branch you merged into is a new creation, set its upstream when you push:<br />
git push --set-upstream origin tlsv12<br />
Now master has the latest commits from the fork origin. You can rebase onto it, if you've been working in a branch (good):<br />
git checkout mybranch<br />
git rebase master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Clone a bare repo (eg github, gitlab, bb) into a bare repo<br />
|-<br />
| Say you want a bare repo to be shared by all your dev environments, but you need to push/pull THAT from a central bare repo, too.<br />
git clone --bare --shared git@bitbucket.org:equityshift/es-demo.git<br />
cd es-demo.git<br />
git config remote.origin.fetch "+*:*"<br />
git fetch --all<br />
I was surprised that this was difficult at all, and may still have some lessons to learn...<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create new branch on server, pull to client<br />
|-<br />
| <br />
# ON CENTRAL SERVER<br />
git checkout master # as needed; we are assuming that master is clean enough as a starting point<br />
git checkout -b mynewbranchy<br />
<br />
# HOWEVER, use this instead if you need a new "clean" repo and even master is dirty...<br />
# You need the rm because git "leaves your working folder intact".<br />
git checkout --orphan mynewbranchy<br />
git rm -rf .<br />
<br />
# ON CLIENT<br />
git pull<br />
git checkout -b mynewbranchy origin/mynewbranchy<br />
# if files are in the way from the previously checked-out branch, you can force it...<br />
git checkout -f -b mynewbranchy origin/mynewbranchy<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Merge changes in a single file<br />
|-<br />
| Explanation is [https://stackoverflow.com/a/11593308/717274 here]. <br />
git checkout mybranch<br />
git checkout --patch develop my/single/file.cpp<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Remove old branches<br />
|-<br />
| Explanation is [https://stackoverflow.com/a/23961231/717274 here]. <br />
Remote:<br />
git push origin --delete <branch><br />
Local:<br />
git branch -d <branch><br />
git fetch <remote> --prune # Delete multiple obsolete tracking branches<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Work with two local repos<br />
|-<br />
| Set up a remote, then fetch it as master.<br />
cd repoToChange<br />
git remote add otherplace ../../wherever/../gitrepo<br />
git ls-remote otherplace # verify it looks ok, figure out which branch you like (if not master)<br />
git fetch otherplace # gets it all<br />
git checkout --track otherplace/master # or other branch as needed; this creates the branch and sets remote in one step, cool<br />
Set up a remote, then fetch it into a non-master branch, and push it to the active origin.<br />
cd repoToChange<br />
git remote add otherplace ../../wherever/../gitrepo<br />
git ls-remote otherplace # verify it looks ok, figure out which branch you like (if not master)<br />
git fetch otherplace # gets it all<br />
git checkout otherplace/master # creates it detached, good because we need to name the new branch something other than master<br />
git checkout -b new_otherplace_branchname # creates new local branch with a good name<br />
git push --set-upstream origin new_otherplace_branchname # takes the branch from the OLD origin and pushes it to the ACTIVE origin, cool!<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Pull when untracked files are in the way<br />
|-<br />
| This will pull, forcing untracked files to be overwritten by newly tracked ones in the repo:<br />
git fetch --all<br />
git reset --hard origin/mymatchingbranch<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create new branch when untracked files are in the way<br />
|-<br />
|<br />
git checkout -b bj143 origin/bj143<br />
git : error: The following untracked working tree files would be overwritten by checkout:<br />
(etc)<br />
<br />
TOTAL PURGE FIX (too much):<br />
git clean -d -fn ""<br />
-d dirs too<br />
-f force, required<br />
-x include ignored files (don't use this)<br />
-n dry run<br />
<br />
BEST FIX (just overwrite what is in the way):<br />
git checkout -f -b bj143 origin/bj143<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Recreate repo<br />
|-<br />
|<br />
git clone ssh://m@thedigitalmachine.com/home/m/development/thedigitalage/ampache-with-hangthedj-module<br />
cd ampache-with-hangthedj-module<br />
git checkout -b daily_grind origin/daily_grind<br />
If you already have the daily_grind branches and just need to connect them:<br />
git branch -u origin/daily_grind daily_grind<br />
|}<br />
<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Connect to origin after the fact<br />
|-<br />
|<br />
 git remote add origin ssh://m@bitpost.com/home/m/development/logs<br />
 git fetch<br />
 From ssh://bitpost/home/m/development/logs<br />
* [new branch] daily_grind -> origin/daily_grind<br />
* [new branch] master -> origin/master<br />
git branch -u origin/daily_grind daily_grind<br />
git checkout master<br />
git branch -u origin/master master<br />
|}<br />
<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Ignore local and remote changes to a file <br />
|-<br />
| This is helpful for conf files that need local-specific modifications that shouldn't be pushed. You have to toggle it on/off as needed to get updates! See [http://stackoverflow.com/questions/4348590/how-can-i-make-git-ignore-future-revisions-to-a-file/39776107#39776107 my SO answer].<br />
<br />
PREVENT COMMIT OF CHANGES TO A LOCAL FILE<br />
-----------------------------------------<br />
git update-index --skip-worktree apps/views/_partials/jsIncludes.scala.html<br />
<br />
RESET TO GET CHANGES AGAIN<br />
--------------------------<br />
git update-index --no-skip-worktree apps/views/_partials/jsIncludes.scala.html<br />
<br />
LIST SKIPPED FILES<br />
------------------<br />
git ls-files -v . | grep ^S<br />
S app/views/_partials/jsIncludes.scala.html<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Simple tagging (when not using git-sync)<br />
|-<br />
| To add and push a tag to HEAD:<br />
git tag -a 1.2.0 -m "1.2.0"<br />
git push origin 1.2.0<br />
<br />
To add and push a tag attached to a commit with current time as timestamp:<br />
git tag -a 1.2.0 1bc92e2f -m "1.2.0"<br />
git push origin 1.2.0<br />
<br />
To tag a commit while ensuring the timestamp matches is slightly more complicated (but not bad). More details [https://stackoverflow.com/a/21759466/717274 here]:<br />
# Set the HEAD to the old commit that we want to tag<br />
git checkout 9fceb02<br />
<br />
# temporarily set the date to the date of the HEAD commit, and add the tag<br />
GIT_COMMITTER_DATE="$(git show --format=%aD | head -1)" \<br />
git tag -a v1.2 -m"v1.2"<br />
<br />
# push to origin<br />
git push origin --tags<br />
<br />
# set HEAD back to whatever you want it to be<br />
git checkout master<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Changing branches in a project with submodules<br />
|-<br />
|<br />
# always reset the @*$ submodules to proper commits<br />
git checkout develop && git submodule update<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Hard-reset a misbehaving submodule to parent commit version<br />
|-<br />
|<br />
git submodule deinit -f .<br />
git submodule update --init<br />
|}<br />
<br />
=== CONFIGURATION ===<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Set name and email<br />
|-<br />
| Globally on one machine (note the machine name at end of user name):<br />
 git config --global user.email m@thedigitalmachine.com; git config --global user.name "Michael Behrns-Miller [cast]"<br />
Override for a repository:<br />
git config user.email mbm@equityshift.io; git config user.name "MBM [cast]"<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Visual difftool and mergetool setup<br />
|-<br />
| Meld is purdy, let's kick its tires. Hope it actually works...<br />
git config --global diff.tool meld<br />
git config --global merge.tool meld<br />
git config --global --add difftool.prompt false<br />
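Then to try it out (a hedged sketch; assumes meld is installed and on your PATH):<br />
 git difftool HEAD~1 -- some/changed/file<br />
 git mergetool   # during a conflicted merge<br />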
I used to set up kdiff3 manually, like this... (gross)<br />
* LINUX - put this in ~/.gitconfig<br />
[diff]<br />
tool = kdiff3<br />
<br />
[merge]<br />
tool = kdiff3<br />
* WINDOZE<br />
[difftool "kdiff3"]<br />
path = C:/Progra~1/KDiff3/kdiff3.exe<br />
trustExitCode = false<br />
[difftool]<br />
prompt = false<br />
[diff]<br />
tool = kdiff3<br />
[mergetool "kdiff3"]<br />
path = C:/Progra~1/KDiff3/kdiff3.exe<br />
trustExitCode = false<br />
[mergetool]<br />
keepBackup = false<br />
[merge]<br />
tool = kdiff3<br />
* LINUX Before - What a ridiculous pita... copy this into .git/config...<br />
[difftool "kdiff3"]<br />
path = /usr/bin/kdiff3<br />
trustExitCode = false<br />
[difftool]<br />
prompt = false<br />
[diff]<br />
tool = kdiff3<br />
[mergetool "kdiff3"]<br />
path = /usr/bin/kdiff3<br />
trustExitCode = false<br />
[mergetool]<br />
keepBackup = false<br />
[merge]<br />
tool = kdiff3<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Convert to a bare repo<br />
|-<br />
| Start with a normal git repo via [git init]; add your files, get it all set up. Then do this:<br />
cd repo<br />
Now you can copy-paste this...<br />
mv .git .. && rm -fr *<br />
mv ../.git .<br />
mv .git/* .<br />
rmdir .git<br />
git config --bool core.bare true<br />
cd ..<br />
Don't copy/paste these, you need to change repo name...<br />
mv repo repo.git # rename it for clarity<br />
git clone repo.git # (optional, if you want a live repo on the server where you have the bare repo)<br />
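To sanity-check the conversion (<code>git -C</code> just runs git inside the given directory):<br />
 git -C repo.git rev-parse --is-bare-repository   # should print: true<br />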
Then you can clean up old branches like daily and daily_grind, as needed.<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Convert bare to a mirror of remote (github, facebook, etc)<br />
|-<br />
| You need a bare mirror repo if you want to take someone else's repo and create your own bare to work from.<br />
If you did NOT specify --mirror when you first created the bare repo, you can convert to a mirror by adding these last two lines to config, underneath url:<br />
[remote "origin"]<br />
url = git@github.com:facebook/proxygen.git<br />
fetch = +refs/*:refs/*<br />
mirror = true<br />
Now you can fetch from the bare repo:<br />
git fetch<br />
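If you are starting fresh instead, you can skip the config surgery and mirror in one step:<br />
 git clone --mirror git@github.com:facebook/proxygen.git<br />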
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Create merge-to command<br />
|-<br />
| Add this handy alias command to each repo's .git/config file (or once to your global ~/.gitconfig)...<br />
[alias]<br />
merge-to = "!gitmergeto() { export tmp_branch=`git branch | grep '* ' | tr -d '* '` && git checkout $1 && git merge $tmp_branch && git checkout $tmp_branch; unset tmp_branch; }; gitmergeto"<br />
<br />
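Usage sketch: from your working branch, merge it into another branch and come right back (assumes a develop branch exists):<br />
 git merge-to develop<br />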
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fix github diverge from local bare repo following README.md edit<br />
|-<br />
| Yes, editing the README.md file on github will FUCK UP your downstream bare repo if you meanwhile push to it before pulling.<br />
Fixing it is a PAIN in the ASS: you have to create a new local repo and pull github into that, pull in from your other local repo, push to github, then pull to your bare...<br />
git clone git@github.com:moodboom/quick-http.git quick-http-with-readme-conflict<br />
git remote add local ../quick-http<br />
git fetch local<br />
git merge local/master # merge in changes, likely trivial<br />
git push # pushes back to github<br />
cd ..<br />
mv quick-http.git quick-http.git__gone-out-of-sync-fu-github-readme-editor<br />
git clone git@github.com:moodboom/quick-http.git --bare<br />
cp quick-http.git__gone-out-of-sync-fu-github-readme-editor/config quick-http.git/<br />
And that MIGHT get you on your way... but I would no longer trust ANY of your local repos...<br />
This is a serious pita.<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Windows configure notepad++ editor<br />
|-<br />
| <br />
git config --global core.editor "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin"<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Fix push behavior - ONLY PUSH CURRENT doh<br />
|-<br />
|<br />
git config --global push.default current<br />
|}<br />
{| class="mw-collapsible mw-collapsed wikitable"<br />
! Multiple upstreams<br />
|-<br />
| Use this to cause AUTOMATIC push/pull to a second origin:<br />
git remote set-url origin --push --add user1@repo1<br />
git remote set-url origin --push --add user2@repo2<br />
git remote -v show<br />
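With the placeholder URLs above, the listing should look something like this (one fetch URL, two push URLs):<br />
 origin  user1@repo1 (fetch)<br />
 origin  user1@repo1 (push)<br />
 origin  user2@repo2 (push)<br />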
Leave out --push if you want to pull as well... but I'd be careful, it's better if code is changed in one client with this config, and then pushed to the multiple origins from there. Otherwise, things are GOING TO GET SYNCY-STINKY.<br />
|}<br />
<br />
=== Git branching strategies ===<br />
==== Simplified Gitflow ====<br />
This is awesome, tight, and well-capable of handling any app with a single primary release (like a website).<br />
                          RELEASE TAG<br />
 o----------------------------o-----------------o------------o------>  MASTER<br />
  \                          / \                 \----------/           HOTFIX<br />
   \                        /   \                 \<br />
    \----------------------/     \-----------------o-----o---------->  DEVELOP<br />
                                   \              /<br />
                                    \------------/                      FEATURE<br />
<br />
Read more [https://medium.com/goodtogoat/simplified-git-flow-5dc37ba76ea8 here] and [https://gist.github.com/vxhviet/9c4a522921ad857406033c4125f343a5 here].<br />
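A hedged command-level sketch of one cycle in this flow (branch and version names are made up):<br />
 git checkout develop && git merge feature/whiz-bang    # land a feature<br />
 git checkout master && git merge develop               # cut the release<br />
 git tag -a -m "1.3.0" 1.3.0 && git push && git push origin 1.3.0<br />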
<br />
==== Gitflow ====<br />
I was a die-hard believer in gitflow for a while. It's very capable. Too capable. You MIGHT need it if you are supporting multiple versions in production... but in all my cases, it is overkill compared to Simplified Gitflow. The classic diagram, originally from [http://nvie.com/posts/a-successful-git-branching-model/ here]...<br />
[[File:Git for nice release planning.png]]<br />
<br />
=== LFS ===<br />
Just don't use it. It's shite. If you get stuck working with a repo that requires it, and you are using ssh on linux which just won't work with LFS... check out like this, so that LFS does not try to grab the large files (and fail on auth, stupid SHIT)...<br />
git read-tree HEAD && GIT_LFS_SKIP_SMUDGE=1 git checkout -f HEAD<br />
<br />
=== My git pages (older) ===<br />
[[Track your changes to an open-source project with git]]<br />
<br />
[[Using git on Windows]]<br />
<br />
[[Portable git]]</div>Mhttps://bitpost.com/w/index.php?title=Sendgrid&diff=7426Sendgrid2023-12-20T13:58:41Z<p>M: /* [Domain Authentication] */</p>
<hr />
<div>Sendgrid is great but takes some fiddling.<br />
<br />
== Configuration ==<br />
<br />
=== Email Template ===<br />
I am using a Dynamic Template from [https://mc.sendgrid.com/dynamic-templates here].<br />
<br />
I am building it from HTML snippets in Reusable, here:<br />
development/Reusable/HTML_CSS/basicEmailHtml_header.html<br />
<br />
=== Domain Authentication ===<br />
<br />
[https://app.sendgrid.com/settings/sender_auth Go here] to set up DNS records to authenticate your domain. <br />
<br />
You will have to add something like 5 CNAME and 1 TXT DNS records, all kinda ugly. Just do it!<br />
<br />
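The CNAMEs usually follow Sendgrid's naming pattern; a made-up illustration (Sendgrid shows you the exact hosts and values to use):<br />
 em1234.example.com.         CNAME  u1234567.wl123.sendgrid.net.<br />
 s1._domainkey.example.com.  CNAME  s1.domainkey.u1234567.wl123.sendgrid.net.<br />
 s2._domainkey.example.com.  CNAME  s2.domainkey.u1234567.wl123.sendgrid.net.<br />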
Make sure you don't proxy them if you are using Cloudflare.<br />
<br />
Remove the old ones! Even if they are still "verified", they will not work, and Sendgrid will try to use them and fail. Keep it clean with only one of each of these that actually has DNS records:<br />
* Domain Authentication<br />
* Link Branding<br />
* Single Sender (this should be good to reuse)<br />
<br />
Repeat if you change registrars!</div>M