Docker

From Bitpost wiki
Latest revision as of 20:46, 27 December 2023

Thanks Keith for the intro!

Keith: Alpine is a stripped-down linux distro. Need to learn how to handle persistent volumes and container secrets (don't put them in the container, but it can prompt for them). Use volumes (docker run -v, or VOLUME in a Dockerfile). A container should write to stdout/stderr; the host can then manage logging. Terraform can build your architecture (it can use a proxmox template); ansible is great for actual tasks. GCP has managed kubernetes (wait until you understand why you need it). Check out the hashicorp vault FOSS version for excellent docker-compatible secret storage.

Maintenance

Restart on reboot

If you are using docker compose, add this to each service in compose.yml:

restart: always
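For reference, a minimal compose.yml sketch showing where the policy goes (the service name and image here are hypothetical):

```yaml
services:
  myapp:                  # hypothetical service
    image: myapp:latest   # hypothetical image
    restart: always       # restart the container whenever it stops, including after reboot
```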

To make an already-running container restart on reboot, update its restart policy:

docker update --restart unless-stopped container_id

Prune regularly

ALWAYS prune your host's containers and images! Or docker will eat your drive alive. Do this in crontab:

0 3 * * * docker container prune -f && docker image prune -f

On occasion you may also need to clean up strays, with this super-prune:

docker system prune --all

It will remove ALL unused images, not just dangling ones, so make sure the ones you want to keep are running! But DO THIS whenever you've been experimenting for a while; you're sure to have left some strays behind.

Also don't forget to prune your system log (see Systemd#Log_limit).

Commands

docker build -t name .            # builds an image from curr dir Dockerfile
docker images                     # lists images
docker run --name cont-name image # to create and start a container from an image, which you can then stop and start
                                  # -it to run in a terminal, then Ctrl-C to stop it; use -d to run detached
docker logs --follow cont-name    # tail container logging
docker logs -f cont --tail 100    # tail but clip log - REQUIRED on long-running containers!
docker ps                         # to see what containers are running
docker ps -a                      # to see what containers are running (including recently stopped containers)
docker start|stop name            # to start/stop a container
docker exec -it cont-name cmd     # to run cmd on running container
docker exec -it cont-name bash    # get bash prompt on running container
docker exec -u root -it cont-name /bin/bash # to run bash as root (so eg you can `apt install ...`)
docker exec -u 0 -it cont-name bash # similar to ^
docker cp ./myfile mycont:/dest   # copy file into container
docker cp mycont:/src /home/      # copy file out of container
docker cp $(docker create --rm ${imageBaseUrl}):/image/path/files /local/path # copy image files
docker rm name                    # to remove a stopped container
docker container prune            # to remove all stopped containers
docker images                     # lists images
docker rmi REPOSITORY/TAG         # to remove an image
docker image prune                # remove all dangling images
docker push|pull                  # push to / pull from hub.docker.com (for subsequent pull elsewhere!)

See files in an image

You should REALLY peek at images before blindly running containers. Docker doesn't make this easy out of the box, but there's always a way: https://stackoverflow.com/a/53481010/717274

docker create --name="tmp_$$" image:tag ls
docker export tmp_$$ | tar t
docker rm tmp_$$

Or to just untar the whole image:

docker save nginx > nginx.tar
tar -xvf nginx.tar

Pretty ps

Use this to show containers in a nice format (you can also add this as default, in ~/.docker/config.json):

docker ps -a --format 'table {{.ID}}\t{{.Status}}\t{{.Names}}\t{{.Command}}'
docker ps -a --format 'table {{.ID}}\t{{.Status}}\t{{.Names}}\t{{.Command}}' | grep #mycontainer#
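To make this the default, the docker CLI reads ~/.docker/config.json; a psFormat entry applies the format to every plain docker ps (merge with any keys already in your file):

```json
{
  "psFormat": "table {{.ID}}\t{{.Status}}\t{{.Names}}\t{{.Command}}"
}
```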

Restart container with new image

This is best practice, especially for large containers that are hosted at another location. It removes the image retrieval time from the overall container downtime:

docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always ...
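The four steps above can be wrapped in a small shell function. This is a sketch; the container name, image, and run flags are placeholders to be adapted:

```shell
# Pull the new image first, so the old container keeps serving during the download;
# downtime is limited to the stop/rm/run window.
upgrade_container() {
    name="$1"; image="$2"; shift 2
    docker pull "$image"            # fetch new image while old container still runs
    docker stop "$name"             # downtime starts here
    docker rm "$name"
    docker run -d --name="$name" --restart=always "$@" "$image"
}

# usage (hypothetical container and flags):
# upgrade_container my-mysql-container mysql -e MYSQL_ROOT_PASSWORD=secret
```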

Containers

Find nirvana at https://hub.docker.com/search?type=image.

Debian slim

Debian slim containers are much smaller than standard images. They are stripped of things like documentation, while still keeping the full glibc-based C/C++ userland (containers share the host's kernel, so no kernel is included either way).

You can use apt to bake in what you need from there. Nice!

Node

The official node container is huge (1GB); the alpine one is relatively tiny. See the list at https://hub.docker.com/_/node.

alpine

Alpine is the best TINY base linux container. But it runs BusyBox and musl so many things (nvm, meteor) won't work (at least without a TON of effort).

Node on alpine

Here's a good starting point for a node app, but remember meteor won't work:

FROM alpine/git
RUN apk --update add curl bash tar sudo npm 
SHELL ["/bin/bash", "-c"]

ENV NEWUSER='m'
RUN adduser -g "$NEWUSER" -D -s /bin/bash $NEWUSER \
&& echo "$NEWUSER ALL=(ALL) ALL" > /etc/sudoers.d/$NEWUSER && chmod 0440 /etc/sudoers.d/$NEWUSER

USER m
WORKDIR /home/m

COPY --chown=m my-code /home/m/my-code

RUN npm install -g whatevah

EXPOSE 3000
CMD [ "my_app", "param1" ]

More examples

LetsEncrypt SSL certificate generator

docker pull zerossl/client
# that experiment failed: we tried to add a TXT domain record but it wasn't found
# tom thought we needed a fully resolving A record before TXT would work
# either way, we can use a self-signed cert with gitlab and forego the constant need to renew
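Since the notes above settle on a self-signed cert, here is a minimal openssl invocation for generating one (the hostname is a placeholder; gitlab then needs to be pointed at the resulting key/cert pair):

```shell
# Self-signed cert, valid ~10 years, so no renewal treadmill.
# Clients must explicitly trust it, since no CA signed it.
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout gitlab.key -out gitlab.crt \
  -subj "/CN=gitlab.example.com"
```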

Networking

Bridge networking (the default) allows connections between containers running on the same docker host.

docker network create my-nw # defaults to --driver bridge
docker run (...) --network my-nw (...) # to create and start a container on the network
docker network connect my-nw container-name # to attach a container to the network after it is started
docker network inspect my-nw

Install

Install docker with apt

This now includes docker compose, and should be all you need: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository. I also had to do the following, which is a perennial problem and never documented:

  • add yourself to the docker group
sudo usermod -aG docker ${USER}
  • fix the socket permission (note: this makes the socket world-writable; logging out and back in after the usermod may be all you need)
sudo chmod 666 /var/run/docker.sock

Proxmox CPU config

Some images require more-advanced CPU capabilities than Proxmox grants by default. Specifically, Mongo 5.0 (which Meteor uses) requires AVX CPU instructions. To enable them:

Proxmox > VM > Edit > Processor > Type: "host"
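After rebooting the VM, you can check whether the guest CPU now exposes the flag (assumes a Linux guest with /proc/cpuinfo):

```shell
# Prints one line either way; mongo 5.0 will refuse to start without AVX.
grep -qw avx /proc/cpuinfo 2>/dev/null && echo "AVX available" || echo "no AVX: mongo 5.0 will not start"
```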

Note that my Proxmox docker VM is called matryoshka.